20 Comments

The key point here is that "not any kind of objective fact about the world" isn't a coherent category for deciding whether probability estimates will or will not converge. Your objection applies just as easily to "What was the total number of people who visited Canterbury, defined as entering the boundary between midnight and midnight, on July 7, 1832."

That's clearly an objective fact; it's just uncertain. And obviously actual humans will not converge on an answer, but Aumann agreement shows very clearly that rational Bayesians must do so.

Whether they "should" converge is not exactly relevant if they don't in fact converge. (Why aren't British bettors bothered by the different odds on American prediction markets?)

But, yes, what I'm saying is a sort of challenge to Bayesianism. Probability estimates should converge only if there is in fact a unique probability attached to the event in question. This is a condition that (I claim) is only sometimes approximated: when we say there is risk, rather than when we call it uncertainty. I am rejecting Bayesian probability where there is no coherent objective probability involved. (See "Epistemological implications of a reduction of theoretical implausibility to cognitive dissonance" - http://juridicalcoherence.b... )

An example. Let's say we have a prediction market on the result of the roll of a die. Will "1" come up? However, no one is told, and no one can find out, how many sides the die has. To avoid problems, let's say the die toss is a simulation of randomness, and the die might have any number of sides, from 1 to a trillion.

We set up two distinct and separate prediction markets for this event. Would the two markets converge? No reason they should. (I'd guess that random events in the betting history would determine the end result.) With complete uncertainty there is no convergence. With large uncertainty there is little convergence.
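For what it's worth, a subjective Bayesian would reply that even this setup yields one definite price: under a prior over the number of sides, P(roll = 1) is just the prior-weighted average of 1/n. A minimal sketch (the uniform prior over 1..N sides is my own illustrative assumption, not something the example specifies):

```python
from fractions import Fraction

def p_one(max_sides):
    """P(roll == 1) under a uniform prior over dice with 1..max_sides sides,
    where a fair n-sided die shows 1 with probability 1/n."""
    return sum(Fraction(1, n) for n in range(1, max_sides + 1)) / max_sides

print(p_one(6))  # 49/120, about 0.408
```

For large N this is roughly ln(N)/N, so with N up to a trillion the "Bayesian price" would be tiny - but it is still a single number, which is exactly what the comment above disputes the meaningfulness of.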

It is arbitrage, carried out by people who can trade in both markets, that produces consistency.
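A toy sketch of that mechanism (the linear price-impact model, the 0.1 impact factor, and the starting prices are my own illustrative assumptions):

```python
def arbitrage_rounds(p_a, p_b, impact=0.1, rounds=60):
    """Each round an arbitrageur buys the contract in the cheaper market
    (pushing its price up) and sells it in the dearer one (pushing its
    price down); the gap shrinks by a factor of (1 - 2*impact) per round."""
    for _ in range(rounds):
        gap = p_b - p_a
        p_a += impact * gap
        p_b -= impact * gap
    return p_a, p_b

pa, pb = arbitrage_rounds(0.30, 0.60)
print(pa, pb)  # both prices end up essentially at the midpoint, 0.45
```

Note that cross-market trading only equalizes the two prices; it says nothing about whether the common price they settle on is right, which is the point the reply below presses.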

Misses my point. If without trading between markets, two otherwise identical markets produce sufficiently discrepant results (despite both having sufficiently many bettors), then there is no good reason to take either estimate or their average as having a privileged status. They're each clearly wrong, and there's no point in averaging them if you have no idea what makes them wrong.

This is basic convergent validation of any measure.

As to whether virtual reality changes the reality of torture. Perhaps it's unclear whether a qualitatively higher level of torture (practically eternal with the ability to run an emulation at high speed) changes anything. But torture is much easier if it can be performed without any interaction with the victim. (See Collins's recent book on violence.)

I'm a big fan of your work - very gratified to receive a response. Thank you.

Thank you for your well-thought-out comment on my comment.

Yes, part of the reason disciplines don't work out their disagreements is that the function of academia is different from what academics usually say it is.

+1

I think the live-and-let-live analogy is the best one here. Staying with that theme: even if compelling evidence were produced in an academic paper proving one school of thought wrong, I don't think it would change anyone's incentives. If you think of a school of thought as a business, ideas can find audiences across long distances and across time. Even the wildest conspiracy theories still find audiences, however small. And proof can always be contested, ignored, or undermined in various ways.

Also - big professional services firms can do the same thing, for the same set of reasons. One client wants advice from an urban planner/bureaucrat, another client wants advice from an economist/financial advisor, and they can offer conflicting advice. Crazy but true.

That is said far better than my poor attempt to make that point. Thank you!

I'm guessing Roger meant that expecting different communities of experts to strive to resolve inconsistencies with each other (as part of optimizing for discovering truth) is like expecting schools to optimize for education, hospitals to optimize for healing people, etc., as opposed to optimizing for the unstated goals you talk about.

So, ironically, if most practitioners of some given academic discipline are disinclined to take hidden motives seriously, despite the fact that hidden motives are taken seriously in other disciplines, that would itself constitute evidence that those very people are driven by hidden motives. And that seems to be the subtext of your post, now that I think about it.

Maybe it's good to make that point explicitly, because leaving it as subtext kind of made your post read to me like an attempt by you to claim higher status for yourself and your field relative to other academic fields. (Were you conscious of that? I honestly can't tell. Man, this shit is insidious!)

I got a master's degree in public policy at the Kennedy School of Government 20 years ago. (I wasn't impressed with most ed policy analysis then, and haven't been since.)

Question - what are the implications of Elephant for such academic programs? Banish them? Change them somehow?

I certainly agree with your points about the failures to achieve synthesis across disciplines, but I think the claim that there is a real difference and a real failure is something of a weak-man argument.

For example, it is clearly standard policy-analysis practice to favor revealed preferences over stated values in policy design - and that can implicitly account for some hidden motives even if analysts don't clearly notice or mention them. Of course, I agree it would be valuable to call the hidden motives to their attention more explicitly (as you have been doing).

HOWEVER, the fact that you don't see many policy analysts making these points publicly isn't necessarily because they don't appreciate them; it can be (and I would say at least partially is) because they can't admit that they are targeting hidden motives without compromising their ability to publicly advocate for a policy. I make that claim because I know political scientists and psychologists working in policy who have said things to that effect about particular policies. But again, I'd agree that further clarifying the hidden motives and their effects on policy decision-making is valuable, even if it's not discussed publicly.

> A reason, if a rather abstract one, not to expect convergence is that "the probability that (e)" where e is an event (like Trump getting impeached) is not any kind of objective fact about the world.

No. Aumann's agreement theorem (given a subjective Bayesian approach to probability) shows that people should converge in their probability estimates of future events.
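The convergence process can be made concrete with the Geanakoplos-Polemarchakis "we can't disagree forever" dialogue: two agents with a common prior alternately announce their posteriors, and each announcement is public information that refines what both know. A small sketch (the nine-state example, partitions, and event are my own illustrative choices):

```python
from fractions import Fraction

def posterior(event, info):
    """P(event | info) under a uniform prior, as an exact fraction."""
    return Fraction(len(event & info), len(info))

def communicate(part_a, part_b, event, omega, max_rounds=20):
    """Agents alternately announce their posterior for `event`; each
    announcement rules out every cell of the speaker's partition that is
    inconsistent with it, shrinking the commonly known set of states S."""
    S = set().union(*part_a)          # states consistent with announcements so far
    history = []
    for t in range(max_rounds):
        part = (part_a, part_b)[t % 2]
        cell = next(c for c in part if omega in c)   # speaker's private info
        p = posterior(event, cell & S)
        history.append(p)
        S &= set().union(*(c for c in part
                           if (c & S) and posterior(event, c & S) == p))
        if len(history) >= 2 and history[-1] == history[-2]:
            break                     # posteriors now agree
    return history

# 9 equally likely states, true state is 1, event E = {3, 4}
A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
print(communicate(A, B, event={3, 4}, omega=1))
# [Fraction(1, 3), Fraction(1, 2), Fraction(1, 3), Fraction(1, 3)]
```

The agents start at 1/3 vs 1/2 and end agreeing on 1/3: exactly the kind of convergence the theorem guarantees for rational Bayesians with a common prior, whatever their private information.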

I'm not following you.

I'd say policy analysts aren't today cleverly trying to give people what they say they want while taking into account that they really want other things. They are just blindly assuming that people want what they say they want. Thus there is in fact a real disagreement.

Isn’t the last paragraph an example of the very phenomenon you are describing? Apologies if that is a point you were making that was meant to be subtle, and I am just explicitly stating something already expressed sufficiently for others to intuit.

I will argue that this is in part based on a different dispute, not a lack of consensus. Policy people in fact believe that there is a normative good in many of the stated goals, and (clumsily) attempt to achieve those goals even at the cost of the unstated hidden motives. You point out why this won't work, but that doesn't eliminate the fundamental disagreement about whether we should attempt to build policies that help people achieve their hidden motive / revealed preference goals, or their stated but ignored goals.

I'll go further, and veer into the deeply murky and frequently useless waters of philosophy. If we care about the conscious-observer portion of people, we *should* aim to achieve their stated goals, post-hoc rationalizations though they may be, despite the fact that people have other "true" motivations. That's because we don't necessarily care that the non-conscious portions of people's brains have different goals. If, on the other hand, we think of people as consistent/coherent agents, we want to help them fulfill their unstated true goals.
