18 Comments

Not all influencers are seen as authorities. My analysis was of authorities, not everyone. Weather forecasters don't seem to have much competition for their role. If they did, being proven wrong so often, and changing their minds so often, would probably cut their authority.

How much does this model vary with respect to changes in people's innate preferences about how their opinions relate to the powerful and the weak?

Also, I feel like this model does not do a good job of predicting, e.g., the influence of random YouTubers, or guys like Joe Rogan, or maybe QAnon.

Also, don't people mostly follow the official sources for things that are "far"?

Oh yeah, what about the weather? We do grumble about how often the weatherman is "wrong," but ultimately we pay attention. Yes, weathermen can make genuine mistakes and perhaps would take poorly to attacks on their authority. But they don't need to have fixed predictions in order to maintain their reputation.

Could the problem be reduced if the experts hedged more? In the case of Covid spreading mechanisms, would it have helped if they had said, "We don't know if Covid is spread on surfaces, but for now assume that it is; we'll let you know when we get more information"?

A Bayesian, rational decision maker's beliefs (is that what you mean by maximally accurate?) would follow a martingale process more generally, not a random walk specifically. The difference is that a bounded martingale process (which is what beliefs would be) eventually converges as the decision maker learns the true state. So why isn't it the case that "maximally accurate" sources (in scare quotes because I'm not 100% sure what you mean by that; I take it to mean they're Bayesian) have simply accumulated a lot of knowledge and so update infrequently, which is totally consistent with the behavior of a Bayesian, rational decision maker?

One might argue that they clearly haven't just gotten so much information because, well, just look at some of the recommendations they make. Which, fine, but that's an observation about the particulars of public health recommendations, not some natural feature of the stochastic processes we see that's inconsistent with rational Bayesian updating.
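A minimal simulation of the convergence point above (my own illustration, not from the comment; the two coin hypotheses and their bias values are assumed for the sketch). A Bayesian watching flips of a coin updates a posterior that is a bounded martingale: it moves a lot at first, then settles as evidence accumulates, so infrequent updating needs no motive beyond having already learned a lot.

```python
import random

random.seed(0)

# Two hypotheses about a coin: fair (p=0.5) or biased toward heads (p=0.8).
# Prior credence 0.5 on each; the coin really is biased.
p_fair, p_biased = 0.5, 0.8
true_p = 0.8
credence_biased = 0.5

history = [credence_biased]
for _ in range(500):
    heads = random.random() < true_p
    like_fair = p_fair if heads else 1 - p_fair
    like_biased = p_biased if heads else 1 - p_biased
    num = credence_biased * like_biased
    credence_biased = num / (num + (1 - credence_biased) * like_fair)
    history.append(credence_biased)

# Total movement over the first 50 updates vs. the last 50: the early
# updates are large, the late ones tiny once the posterior has converged.
early_moves = sum(abs(history[i + 1] - history[i]) for i in range(50))
late_moves = sum(abs(history[i + 1] - history[i]) for i in range(450, 500))
print(early_moves > late_moves, round(history[-1], 3))
```

The posterior ends up pinned near 1, and almost all the wandering happens early, matching the "they've just accumulated a lot of knowledge" story.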

To be sure, it isn't strictly true that "future updates are equally likely", right?

It is true that the expected value of my future credence on some given proposition is equal to my present credence on that proposition.

But this does not imply that my present credence in an increase in the future is equal to my present credence in a decrease in the future.

Neither does it imply that my present expectation that my future credence will be greater is equal to my present expectation that my future credence will be lower.

The only thing that we can conclude is that my present expected size of a positive future deviation from my present credence is equal to my present expected size of a negative future deviation from my present credence.

At any rate, this is still consistent with the idea that one should expect to take a random walk in credal space. And consequently, that ideal inquirers do take a random walk in credal space.
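A small numeric check of the distinction drawn above (the 0.9 credence and the full-reveal setup are my own illustration). Suppose my credence in P is 0.9 and the truth will be fully revealed tomorrow: credence jumps to 1 with probability 0.9, or drops to 0 with probability 0.1. The martingale property holds, the up and down moves are not equally likely, yet the expected sizes of the positive and negative deviations match.

```python
p = 0.9  # current credence in P, revealed fully tomorrow

prob_up, prob_down = p, 1 - p      # chances of an increase vs a decrease
size_up, size_down = 1 - p, p      # sizes of the two possible moves

expected_credence = prob_up * 1 + prob_down * 0
expected_pos_dev = prob_up * size_up      # 0.9 * 0.1
expected_neg_dev = prob_down * size_down  # 0.1 * 0.9

print(expected_credence == p)                # martingale property holds
print(prob_up == prob_down)                  # up and down are NOT equally likely
print(expected_pos_dev == expected_neg_dev)  # but expected deviations match
```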

I like my explanation better.

Excellent, now I see. Thank you!!

Intuitively a "best" estimate includes all available information. If you know that a future estimate is likely to move in a given direction, you'd have already included that fact in your current estimate.

Therefore future updates ought to be equally likely to differ from your current estimate in all possible directions. Thus a random walk.
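The argument above can be checked in a one-step Bayesian example (the coin hypotheses and the 0.7 credence are assumptions of mine for the sketch): averaging over what the next observation might be, the expected next credence equals the current credence, so the direction of the next update is unpredictable.

```python
# Two hypotheses about a coin: fair (p=0.5) or biased toward heads (p=0.8).
p_fair, p_biased = 0.5, 0.8
credence = 0.7  # current credence that the coin is biased

def update(c, heads):
    """Posterior credence in 'biased' after observing one flip."""
    like_b = p_biased if heads else 1 - p_biased
    like_f = p_fair if heads else 1 - p_fair
    num = c * like_b
    return num / (num + (1 - c) * like_f)

# My own probability that the next flip is heads, given my credence:
p_heads = credence * p_biased + (1 - credence) * p_fair

# Expected next credence, averaged over both possible observations:
expected_next = (p_heads * update(credence, True)
                 + (1 - p_heads) * update(credence, False))
print(round(expected_next, 10) == round(credence, 10))
```

If the expectation differed from the current credence, the current estimate would be leaving usable information on the table, which is the point being made.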

You did. Sadly, I'm not able to figure out how the law of total expectation yields your claim. Can you just sketch a couple of sentences of the connection between the probability result and your claim?

I gave a link at the phrase "random walk".

Can you better explain why maximally accurate sources follow a random walk? It seems to me that a max accurate source will trivially have constant credence 1 on all truths and 0 on all falsehoods. And optimal inquirers will monotonically increase credence on all truths and decrease credence on all falsehoods. So where is the random walk structure coming from?

Yes, I acknowledged separation, which you raised, but separation isn't enough.

Decision makers have to make decisions. Advisors (e.g., scientists), on the other hand, do not. They should express how (un)certain they are in their conclusions and then the decision-makers should make the decisions as they see fit. The advisors should not look to the "estimators" for decisions.

In your WHO example, it would have been good to express how certain they felt in their initial conclusion that COVID-19 is not spread by airborne means. Then, when they altered their view, how certain they were that it was indeed spread by airborne means.

I fear that naive use of "the scientific method" sometimes gets us into trouble. My guess is that their initial conclusion was based on test results that did not support airborne transmission; but an uncertainty estimate would or should have included the notion that "absence of evidence is not evidence of absence."

But a lot of the problems arising here are not just from the "estimator" function of WHO-style orgs, but also from the "advisor" function. WHO issues a lot of normative statements along with descriptive ones.

Definite decisions needing to be made, and separating decision makers from estimate authorities, were all part of my analysis above.

But my whole post was about a separate authority regarding estimates. Having it separate doesn't by itself solve the problems.

You take an interesting approach to the question. You first give reasons why an authority would want to minimize updates, then consider factors that force more frequent updates.

The reverse is possible. You could note (as you do) that if the only goal is accuracy, then it’s optimal to update continuously, giving the random walk. Since we don’t observe that, there must be other goals and/or constraints that apply. Of course, this doesn’t invalidate your approach.

For a simple model of authority prediction frequency, perhaps the following two frictions would suffice. Start from the random walk model. First, change the goal to perceived accuracy by non-experts, not actual accuracy. Second, add a constraint: non-experts can’t distinguish between backtracking that is informationally optimal and backtracking that indicates a previous mistake. In that setting, an authority (or any pundit) must balance update frequency (and clarity) against the risk of having to backtrack. This is coming from my simple intuition that everyone is just trying to make themselves look as good as they can, to maximize their status.
