6 Comments

Also I should have written U1 = a*z' instead of a*y.

Ah yes, right, of course.

Well, I grant that it is a reasonable-sounding model. A pretty good one for a few days' work, absolutely. Perhaps you should submit it to the Mathematical Contest in Modeling as a problem for them? Reading it reminded me of my years competing there. :)

I'm not convinced that it's necessarily much more accurate than some other plausible models, but it does have the advantage of much nicer-looking mathematics than many alternatives, and that counts for a lot.

Anyone want to co-author a write-up of this? :)

Well, I am in Fairfax County and a probabilist... ;)


You are both right; my error; I meant max posterior instead of max likelihood. Also I should have written U1 = a*z' instead of a*y. I've corrected these in the text above.

So, Eliezer, U = a*z(s') + Int_y p(y|s) log(p(y|s')) dy, where z(s') = s' - d*v is the estimate implied by the claimed signal s'.
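
For anyone who wants to check that numerically, here is a minimal sketch. It assumes my reading of the setup, none of which is spelled out above: an exponential prior with rate v, signal noise with variance d (so the posterior given s is normal with mean s - d*v and variance d), and made-up parameter values.

    # Numerically maximize U = a*z(s') + Int_y p(y|s) log(p(y|s')) dy over the claim s'.
    import numpy as np
    from scipy.optimize import minimize_scalar

    a, v, d, s = 0.3, 1.0, 2.0, 5.0  # attention weight, prior rate, noise variance, signal
    z = s - d * v                    # honest posterior-mode estimate

    def payoff(s_prime):
        z_prime = s_prime - d * v    # estimate implied by the claimed signal s'
        # Expected log score of the announced N(z', d) under the honest N(z, d):
        # Int_y p(y|s) log p(y|s') dy = -log(2*pi*d)/2 - ((z - z')^2 + d) / (2*d)
        log_score = -0.5 * np.log(2 * np.pi * d) - ((z - z_prime) ** 2 + d) / (2 * d)
        return a * z_prime + log_score

    best = minimize_scalar(lambda sp: -payoff(sp), bounds=(s - 10, s + 10), method="bounded")
    print((best.x - d * v) - z)      # bias of the claimed estimate over z: ~0.6 = a*d

With these numbers the claimed estimate comes out about a*d above the honest one, matching the bias b = ad given elsewhere in the thread.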

John, I took Eliezer's argument to be that there was a low probability effect that wasn't due to an attention payoff at all.

Yes, other models might give different results; this was the first one I tried. Yes, an obvious easy generalization is a general quadratic U1, which can express the diminishing returns Eliezer suggests.

Anyone want to co-author a write-up of this? :)


(Incidentally, what happens if agents anticipate the attempt to discount their claims, and pick the claim such that, after being discounted, it maximizes their reward function, etc...)
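
One toy observation on that, under the linear model at least, and assuming the log-score term is still applied to the literal announced claim while only the attention payoff runs through the discounted value: since the optimal inflation is a constant shift b that doesn't depend on the claimer's actual estimate, an audience that discounts by the known b doesn't trigger an escalation. A sketch, with invented numbers:

    a, d = 0.3, 2.0   # made-up attention weight and noise variance
    b = a * d         # equilibrium bias of the linear model
    z = 3.0           # claimer's honest estimate
    claim = z + b     # best response: a constant discount doesn't move the argmax
    heard = claim - b # what the audience infers after discounting
    print(heard == z) # True: the correction is self-consistent rather than escalating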


A couple of comments: First, I think you meant to say MAP, not MLE. The MLE is (of course) z = s. I checked your figures on the MAP. But could you explicitly spell out your equation for the agent's combined payoff as a function of fixed s and variable claim?

Second, I think that this fails to capture the real problem of "extraordinary claims". Mathematics has to capture the structure of the environment before solving the equation actually helps us, and we all know the danger of treating humans as Bayesians.

One, "extraordinary claims" tend to be those that violate *qualitative* generalizations. When dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions. While building dams decreases the frequency of floods, damage per flood is so much greater afterward that the average yearly damage increases. [Burton, I., Kates, R. and White, G. 1978. Environment as Hazard. New York: Oxford University Press.] People don't model the quantitative power-law distribution in their heads - they regard the prospect of a large flood as violating the qualitative generalization, "We've never had a flood that large!" or "We haven't had a flood that large since we built the dam."

Two, social payoffs (such as attention) don't increase automatically as log-scope; there comes a point of diminishing returns, when your claim simply causes incredulity. (An environmentalist researcher announces that the greenhouse effect is more severe than expected, and is going to destroy the entire galaxy.) There is a claim-of-scope with some maximum social payoff, rather than an infinite increase. We may end up with a bimodal distribution, one mode centered around the rational estimate but biased in the direction of increased reward, and one mode centered around the maximum social reward but biased in the direction of the rational estimate. And note that I have said nothing about which mode is greater or less than the other.
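
To make that two-mode structure concrete, here is a small sketch; the functional forms and numbers are all my own invention: a quadratic accuracy payoff peaked at the rational estimate z, plus a saturating (Gaussian-bump) social payoff peaked at some maximally rewarded claim m.

    import numpy as np

    z, m = 0.0, 6.0          # rational estimate; claim with maximum social payoff
    r, a, w = 0.1, 2.2, 1.5  # accuracy weight, social weight, width of the social bump

    claims = np.linspace(-4.0, 10.0, 14001)
    U = -r * (claims - z) ** 2 / 2 + a * np.exp(-((claims - m) ** 2) / (2 * w ** 2))

    # Interior local maxima of U: one just above z (biased toward m),
    # one just below m (biased back toward z).
    is_peak = (U[1:-1] > U[:-2]) & (U[1:-1] > U[2:])
    print(claims[1:-1][is_peak])

Which mode is the global optimum depends on the relative weights r and a, so a population with varying weights splits into the two categories.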

On a model that yielded that sort of shape, people would either be "rational" or "play to the public", each mode perhaps biased in the direction of the other reward, but tending to fall into one category or the other.


The standard deviation c of this distribution depends on his rationality r and the strength of his payoffs U. For a claimer with a positive attention payoff, his z' (or s') is also normally distributed with the same standard deviation c, but the mean of this distribution is biased toward larger disasters, with a bias b that is proportional to a.

c = d/r and b = ad, I believe, assuming that my brief sketch is correct.

However, you have set up the model here yourself. It's easy enough to change your attention payoff or other model features so that the result does depend on how unlikely a priori the claim is. Some of these alterations will leave it no longer a nice normal distribution, though. Others preserve normality but still change the model. One easy example is to suppose that the attention payoff is proportional to the square of the magnitude, so that U1 = a*y^2. In that case, the attention payoff does not affect the mean of the resulting normal, and introduces no bias term b, but it does affect the standard deviation of the distribution.
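
A quick symbolic check of that last claim, as a sketch only: reading the quadratic payoff on the claimed estimate (U1 = a*z'^2, per the correction upthread) and reusing the Gaussian expected-log-score term, the optimum rescales z instead of shifting it by a constant, a change of spread rather than of location.

    import sympy as sp

    a, d, z = sp.symbols("a d z", positive=True)
    zp = sp.symbols("zp")  # the claimed estimate z'
    # Combined payoff: quadratic attention term plus expected log score.
    U = a * zp ** 2 - ((z - zp) ** 2 + d) / (2 * d)
    best = sp.solve(sp.Eq(sp.diff(U, zp), 0), zp)
    print(best[0])  # z/(1 - 2*a*d): a rescaling of z, not a constant shift
                    # (this stationary point is a maximum only when 2*a*d < 1)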

We may also imagine an attention payoff that, rather than depending on the magnitude alone, also depends on the probability of the unlikely event, perhaps incorporating just v or the entire probability. In that case, yes, the prediction would depend on the probability of the event.

It seems to me that you've excluded the probability of the event from the attention payoff, and then shown that, under that model, the probability of the event doesn't affect the prediction. I'm sorry, but I don't see how that's very useful as an argument.


His maximum likelihood estimate (MLE) of y should rationally be z = s - d*v,

Technically, no. This is the posterior mode, or maximum a posteriori (MAP) estimate, the value that maximizes the posterior distribution, which you have correctly calculated. The MLE, as its name suggests, maximizes the likelihood alone and would, in this case, simply be equal to s. The MLE and the MAP estimate are equal when the prior is uniform. (Also, of course, we're assuming that d is known a priori for this calculation to make sense.)
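
A tiny numeric illustration of the distinction, under my assumed setup (exponential prior with rate v, Gaussian noise with variance d, invented numbers): the likelihood alone peaks at y = s, while the posterior peaks at y = s - d*v.

    import numpy as np
    from scipy.stats import norm

    v, d, s = 1.0, 2.0, 5.0
    ys = np.linspace(0.0, 10.0, 100001)
    log_lik = norm.logpdf(s, ys, np.sqrt(d))  # log-likelihood of the signal s at each y
    log_post = log_lik - v * ys               # add the exponential log-prior (up to a constant)
    print(ys[np.argmax(log_lik)], ys[np.argmax(log_post)])  # -> 5.0 and ~3.0 (= s - d*v)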
