A common complaint about futarchy (see also):
Say you’re thinking about firing Elon Musk. … My objection is more basic: It doesn’t work. You can’t use conditional prediction markets to make decisions like this, because conditional prediction markets reveal probabilistic relationships, not causal relationships. There are solutions—ways to force markets to give you causal relationships. But those solutions are painful and I get the shakes when I see everyone acting like you can use prediction markets to conjure causal relationships from thin air, almost for free. (More)
Not true. If those who decide to trade in decision markets apply the same decision theory as those whose decisions the markets advise, then both should use the same probability concept, probably a causal one. Let me explain.
Most everyone agrees that decision theory recommends that decision maker d take the action A that maximizes expected utility E[U_d|A] = Sum_i U_d(O_i) p(O_i if A), where O_i is an outcome, and U_d(O) is the utility to d of that outcome. Many disagree, however, on what sort of chances should play the role of p(O_i if A) here. It is widely said that evidential decision theory recommends conditional chances P(O_i | A), while causal decision theory recommends including everything d knows on the causal structure of the decision context to estimate P(O_i caused by A).
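To make that formula concrete, here is a minimal sketch in Python of the expected-utility rule above; the actions, outcomes, utilities, and chances are made-up placeholders of mine, not anything claimed in this post.

```python
# Minimal sketch of E[U_d|A] = Sum_i U_d(O_i) p(O_i if A), with assumed numbers.

def expected_utility(utilities, chances_given_action):
    """Sum_i U_d(O_i) * p(O_i if A) for one action A."""
    return sum(utilities[o] * p for o, p in chances_given_action.items())

def best_action(utilities, chances_by_action):
    """Pick the action A that maximizes E[U_d|A]."""
    return max(chances_by_action,
               key=lambda a: expected_utility(utilities, chances_by_action[a]))

# Hypothetical example: keep or fire a CEO; outcomes are stock up or down.
utilities = {"stock_up": 1.0, "stock_down": 0.0}
chances_by_action = {
    "keep": {"stock_up": 0.55, "stock_down": 0.45},
    "fire": {"stock_up": 0.40, "stock_down": 0.60},
}
print(best_action(utilities, chances_by_action))  # -> "keep"
```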
Statistical analyses often take datasets that include various A and O and infer estimates of P(O_i | A). Such analyses typically assume either no causal structure or a maximally simple one, and ignore what we know about the relevant causal structures, in order to “let the data speak for themselves”. Regarding such models, we are often warned to distinguish the correlations that they estimate from the causal chances that we actually want to use when making decisions. Correlation does not imply causation.
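Here is a small simulation, with assumed numbers of my own, of that warning: a hidden trait both makes action A more likely and makes the good outcome less likely, so the estimated P(O|A) differs from the causal chance even though A has no effect at all.

```python
import random
random.seed(0)

def simulate(n=200_000):
    data = []  # (action_taken, good_outcome)
    for _ in range(n):
        hidden = random.random() < 0.3                      # hidden trait
        act = random.random() < (0.8 if hidden else 0.2)    # trait makes A likelier
        good = random.random() < (0.4 if hidden else 0.7)   # trait hurts outcome; A itself has zero effect
        data.append((act, good))
    return data

data = simulate()
p_o_given_a = sum(g for a, g in data if a) / sum(1 for a, g in data if a)
p_o_given_not_a = sum(g for a, g in data if not a) / sum(1 for a, g in data if not a)
print(round(p_o_given_a, 2), round(p_o_given_not_a, 2))  # roughly 0.51 vs 0.67
# The naive conditional estimate makes A look harmful, though by construction it
# causes nothing; that gap is what "correlation is not causation" warns about.
```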
However, contrary to the three writers linked above, speculative market prices are not generally equivalent to estimates from simple statistical analyses! Perhaps they are trying to credit markets with being “scientific” thinkers, and see such simple stat analysis as the proper scientific approach. But market prices are instead far subtler and more complex things.
Sure, some traders may make and use oversimplified stat models to inform their trades, but speculative markets are typically full of many kinds of naive traders whose biases are not reflected in market prices, because other traders counter and correct for those biases. Market prices are better thought of as combining trader info into estimates of asset value, according to decision theory calculations of that value.
Imagine that market traders all had exactly the same info, the same as the info of decision maker d. Further imagine that they all use the same kind of decision theory, be it evidential, causal, or something else. Given these assumptions, they would all agree on their estimates E[U_d|A], as well as on other estimates E[X|A] = Sum_i X_i p(X_i if A), because they would agree on and use the same conditional chances p(X_i if A). Traders here would use their beliefs about the causal structure relating d’s action A to other events X, and if they have the same info they should have the same beliefs about that causal structure.
Thus for any asset that pays in proportion to X, traders would all estimate the risk-neutral financial value of trading that asset, conditional on d choosing A, via the same E[X|A], so that common E[X|A] should set the asset’s risk-neutral price in conditional asset markets. Market prices here would thus give exactly the sort of estimates that the decision maker wants for advice, be they evidential or causal. Though in this case the info isn’t actually useful, as the decision maker already has it.
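A standard way to implement such conditional assets is to call off (refund) the trade unless d actually chooses A. Here is a minimal sketch, with assumed numbers, of why risk-neutral trading then pushes the price to the common E[X|A]:

```python
def conditional_trade_value(price, p_choose_a, e_x_given_a):
    """Expected profit from buying an asset that pays X if A is chosen,
    with the trade refunded (zero profit) if A is not chosen."""
    return p_choose_a * (e_x_given_a - price)

e_x_given_a = 0.62   # the estimate all traders share in this scenario (assumed)
p_choose_a = 0.5     # chance that d actually picks A (assumed)

for price in (0.55, 0.62, 0.70):
    print(price, round(conditional_trade_value(price, p_choose_a, e_x_given_a), 3))
# Buying is profitable below 0.62 and unprofitable above it, so trading pushes the
# price to E[X|A], no matter how likely A is to be chosen.
```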
Now assume instead that market traders all have the same info, which is strictly more than the decision maker’s info. Now the market prices would be set by trader E[X|A], which embodies more info than the decision maker holds. By observing the market prices, the decision maker can become better informed about their decision, simply by accepting the market price estimates E[X|A] as their personal estimates. And if there happens to be a market in an asset that pays in proportion to U_d, the decision maker can directly accept market estimates of E[U_d|A], and just pick the option A that maximizes this value. Here decision markets directly aid decision makers.
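In this case the decision maker’s job is trivial, as this toy sketch with made-up prices shows: adopt the conditional prices of a U_d-paying asset as E[U_d|A] and pick the option priced highest.

```python
# Hypothetical prices of assets that pay in proportion to U_d, each called off
# unless its option is chosen; the decision maker reads them as E[U_d|A].
market_prices = {"keep_ceo": 0.62, "fire_ceo": 0.48}   # assumed numbers
advised_action = max(market_prices, key=market_prices.get)
print(advised_action)  # -> "keep_ceo"
```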
If we instead assume that the decision maker has strictly more info than market traders, we face the potential problem of a decision selection bias, as I’ve discussed. A robust solution for that is to make the decision time clear, and allow decision makers or their associates to trade in the markets. Given these conditions, prices just before the decision should reflect full info E[X|A], not distorted by decision selection biases.
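To show the problem being solved, here is a small simulation, with assumed effect sizes, of decision selection bias: the decision maker fires only when they privately see bad news, so the naive conditional estimate makes firing look harmful even though its causal effect is positive.

```python
import random
random.seed(1)

def run(n=200_000):
    fired, kept = [], []
    for _ in range(n):
        private_bad_news = random.random() < 0.2   # seen only by the decision maker
        fire = private_bad_news                    # they fire only on bad news
        # Assumed causal effects: firing adds +0.1 to the chance of a good outcome,
        # while the bad news itself subtracts 0.3.
        p_good = 0.5 + (0.1 if fire else 0.0) - (0.3 if private_bad_news else 0.0)
        (fired if fire else kept).append(random.random() < p_good)
    return sum(fired) / len(fired), sum(kept) / len(kept)

p_good_if_fired, p_good_if_kept = run()
print(round(p_good_if_fired, 2), round(p_good_if_kept, 2))  # roughly 0.30 vs 0.50
# Firing looks bad despite its +0.1 causal effect, because "fired" selects the cases
# with hidden bad news. Letting informed insiders trade just before a clearly
# announced decision time lets prices absorb that private info and remove the distortion.
```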
What if different traders, and decision makers, use different concepts of conditional chances P(O_i | A) to estimate the value of their trades? In that case the most accurate concept will tend to win out in trading, and come to dominate the population of traders. And that winning concept seems to be the correct decision theory concept, which decision makers are also well advised to use. And so conditional prediction markets would then offer good advice to decision makers re E[X|A].
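As a toy illustration of that winnowing, with numbers that are purely my assumptions: two traders repeatedly bet at a price halfway between their beliefs, and the trader whose chance concept matches the true chance gains wealth at the other’s expense.

```python
import random
random.seed(2)

true_chance = 0.70       # the chance that actually generates outcomes
belief_accurate = 0.70   # e.g., a trader using the right (causal) chance concept
belief_naive = 0.50      # e.g., a trader using a confounded conditional estimate

wealth_accurate = wealth_naive = 100.0
for _ in range(1_000):
    price = (belief_accurate + belief_naive) / 2   # they split the difference
    payoff = 1.0 if random.random() < true_chance else 0.0
    wealth_accurate += payoff - price   # the accurate trader buys a $1 claim
    wealth_naive += price - payoff      # the naive trader sells it

print(round(wealth_accurate), round(wealth_naive))
# The accurate trader's wealth grows and the naive trader's shrinks, so over time
# the more accurate concept comes to dominate the prices.
```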
My bet on the best conditional chance concept P(X if A) is inspired by this source. If one first collects one’s decision-relevant info and then makes a final decision choice in a mechanical way using those inputs, the outcome of that mechanical process just can’t offer any more evidence than was embodied in its inputs. For that last step, evidential and causal chances are the same.
So it makes sense to first naively collect decision-relevant info, using everything one knows about causality; second, reflect on how that info might embody evidence of hidden traits; third, update that info to reflect such evidence; and fourth, use one’s mechanical decision process on those updated inputs. Here causal and evidential decision theory should give the same answers.
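A structural sketch of that four-step recipe, in purely hypothetical terms (the updated chances below are invented, and the update function is a stand-in for whatever reflection on hidden traits one does): the final step is a deterministic function of the updated inputs, so conditioning on the choice it produces adds nothing beyond those inputs.

```python
def decide(utilities, chances_by_action, update_for_hidden_traits):
    # 1. Collect decision-relevant info, using everything known about causality.
    chances = chances_by_action
    # 2-3. Reflect on what that info may reveal about hidden traits, and update it.
    chances = update_for_hidden_traits(chances)
    # 4. Apply a purely mechanical rule to the updated inputs; this step adds no evidence.
    def expected_utility(action):
        return sum(utilities[o] * p for o, p in chances[action].items())
    return max(chances, key=expected_utility)

# Hypothetical usage: the update shades the "fire" chances to reflect what
# choosing to fire would reveal about hidden bad news.
utilities = {"stock_up": 1.0, "stock_down": 0.0}
raw = {"keep": {"stock_up": 0.55, "stock_down": 0.45},
       "fire": {"stock_up": 0.60, "stock_down": 0.40}}
shade = lambda c: {**c, "fire": {"stock_up": 0.45, "stock_down": 0.55}}
print(decide(utilities, raw, shade))  # -> "keep"
```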
I think you've conflated different sorts of "do" here. Your argument about correlation vs causation is strong when the people making the predictions have the power to do or not do. It's weaker when the bettor has little to no control over the person(s) who might do. It holds virtually no water if the "do" in question is itself a statistic covering millions of people (e.g. a market conditional on the outcome of an election).
This post does not address the criticism in the mentioned article. Musk may be bad for Tesla, but firing him may indicate something even worse happened that hurts the company even more. For example, his archenemy manages to get him fired and is now out to destroy Tesla. Suppose: firing Musk is observable, but his archenemy prevailing is not (so, one cannot bet on the joint event: Musk gets fired and the archenemy prevails). The conditional prediction market suggests that keeping Musk is good for Tesla, not because Musk himself is good for the company (he is, in fact, bad) but because Musk keeping his job indicates that his archenemy failed to prevail in his attempt to hurt both Musk and Tesla.