I started to write on prediction markets, which I then called “idea futures”, about 34 years ago. A bit over 25 years ago, in the May/June issue of IEEE Intelligent Systems, I published on Decision Markets, with my main example being how they could answer the question of the effect of concealed firearm carry laws on murder rates (an issue where the empirical literature still seems unclear). A year later I posted the first version of my paper Shall We Vote on Values, But Bet on Beliefs?, later published in the Journal of Political Philosophy, which presented basically the same concept in the more prestigious context of running an entire government.
Since then I have written many things on the decision markets concept, and tried to pitch it to anyone who’d talk to me about prediction markets. In 2003, Hal Varian wrote a NYT article on decision markets, and they were mentioned in the NYT again in 2008, when the NYT also quite prematurely listed it as an official buzzword of that year. Also in 2008, Peter McCluskey did a trial re that year’s US presidential race.
But alas, I haven’t otherwise been able to get people to try the idea. Until this last year, that is! The MetaDAO has been trying it for key governance decisions for about a year now, and has signed up other DAOs to also use it. And a shy foreign government health organization that hasn’t authorized me to tell you more has also been trying it for about a year.
This all inspires me to go back to the concept and think about it more. Here are some thoughts.
If you recall, the basic idea is to define an ex-post-measurable outcome metric, and ask speculative markets to estimate this metric conditional on adopting, and on not adopting, a particular proposal. (Trades are called off if their condition is not met.) Approve a proposal if the metric estimate given the proposal is higher than the estimate given no proposal. For example, decide to fire a CEO via two conditional stock markets, one estimating the stock price if the CEO leaves by quarter end, the other if he doesn’t.
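The basic decision rule above can be sketched in a few lines; the prices here are hypothetical, and real conditional markets would of course also handle the calling-off of trades whose condition fails.

```python
# Minimal sketch of the basic futarchy decision rule, with made-up numbers.
# Two conditional markets estimate an outcome metric (e.g. quarter-end
# stock price) under "adopt" and "reject"; trades in a market are called
# off (refunded) if its condition does not end up holding.

def decide(price_if_adopt: float, price_if_reject: float) -> str:
    """Approve the proposal iff the market's conditional estimate of the
    outcome metric is higher given adoption than given rejection."""
    return "adopt" if price_if_adopt > price_if_reject else "reject"

# Example: the fire-the-CEO markets, with hypothetical prices.
print(decide(price_if_adopt=92.0, price_if_reject=87.5))  # prints "adopt"
```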
But there are many details to consider. For example, there is the problem that if speculators don’t know when the price comparison will be made, they will guess what info might be revealed between now and decision time, which is a problem if now is actually the decision time. This problem is avoided by making the decision time very clear, and having markets make the decision, not just advise it.
Related doubts about whether conditional estimates are really causal estimates are clearly solved by having trades also be conditional on randomly picking the proposal, say one percent of the time. But there’d be a real cost here, as those random decisions are far from optimal.
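One way to picture this randomization fix is as a small wrapper around the basic decision rule; the branching structure and the one-percent parameter follow the text, while the exact settlement details are left abstract.

```python
import random

def decide_with_randomization(price_if_adopt: float,
                              price_if_reject: float,
                              p_random: float = 0.01,
                              rng=random.random):
    """Sketch of the randomization fix: with small probability p_random,
    ignore prices and adopt or reject at random; only trades conditioned
    on the branch actually taken settle, the rest are called off.  The
    random branch makes conditional prices causal estimates, at the cost
    of occasional far-from-optimal random decisions."""
    if rng() < p_random:
        choice = "adopt" if rng() < 0.5 else "reject"
        return choice, "random"
    market = "adopt" if price_if_adopt > price_if_reject else "reject"
    return market, "market"
```

Passing `rng` explicitly just makes the sketch testable; in use one would rely on a trusted public source of randomness.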
Some academics have focused on a related problem: traders might want to bias the market toward the outcome about which they have the most info, to profit the more from trades on the outcome measure in that scenario. Which would indeed be a problem if there were only one trader, or a coordinating cabal of traders. But when there are many competing traders, I find it hard to take this problem seriously.
Another problem is how to decide if the price difference is big enough to conclude it isn’t just noise. I suggest framing this via statistics. Make a statistical model of the two prices over/near the decision period, a model in which observed prices are due to a combination of the two real prices over time, and noise. Fit the actual price data to this model, and ask how confident this best-fit model is that the period-end price difference isn’t due to chance. Only approve the proposal if confident enough. Choose among possible stat models in the usual stat way.
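As one simple instance of such a statistical model, suppose each conditional price series is a constant true price plus independent noise; then the check reduces to asking whether the mean price difference is large relative to its estimated noise, roughly a Welch-style z test. Richer models could allow the true prices to vary over time. All numbers here are illustrative.

```python
import math
import statistics

def confident_difference(adopt_prices, reject_prices, z_crit=1.96):
    """Approve only if the adopt-conditional price is confidently above
    the reject-conditional price, under a constant-price-plus-noise model.
    Estimate each true price by its sample mean, and compare the mean
    difference to its standard error (a Welch-style z test)."""
    ma = statistics.mean(adopt_prices)
    mr = statistics.mean(reject_prices)
    va = statistics.variance(adopt_prices) / len(adopt_prices)
    vr = statistics.variance(reject_prices) / len(reject_prices)
    z = (ma - mr) / math.sqrt(va + vr)
    # Approve only if confidently positive, not merely noisily higher.
    return z > z_crit

adopt = [92.1, 91.8, 92.4, 92.0, 91.9]   # made-up conditional prices
reject = [87.6, 87.3, 87.9, 87.5, 87.7]
print(confident_difference(adopt, reject))  # prints "True"
```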
Another issue is agenda control: who gets to make proposals to be evaluated? If too many proposals are made too frequently, speculators won’t be able to attend to them all, and bad ones might slip through. If the outcome measure can be calibrated in cash terms, I prefer to hold an auction for regular proposal slots, such as once a week or once a day. Those who paid for approved proposals should then be paid a fraction (maybe a third) of the market estimate of the proposal’s value. Except, at the very start of the process I’d start that fraction out low, and gradually increase it, to avoid paying more than necessary for easy obvious proposals.
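The proposer-payout schedule above might look something like the following; the start fraction, ramp length, and linear ramp shape are all my hypothetical parameterization, with only the roughly-one-third target coming from the text.

```python
def proposer_payout(market_value_estimate: float,
                    round_number: int,
                    start_fraction: float = 0.05,
                    target_fraction: float = 1 / 3,
                    ramp_rounds: int = 100) -> float:
    """Pay an approved proposer a fraction of the market's estimate of
    the proposal's value.  Start the fraction low and ramp it linearly
    up to roughly a third over ramp_rounds proposal rounds, so that
    early easy, obvious proposals are not overpaid.  All parameter
    values here are illustrative."""
    fraction = min(target_fraction,
                   start_fraction
                   + (target_fraction - start_fraction)
                   * round_number / ramp_rounds)
    return fraction * market_value_estimate
```

So a proposal the market values at 300 would pay its proposer 15 in the very first round, rising to about 100 once the ramp completes.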
A related issue is the incentives for trading in these conditional markets. The possibility of markets influencing decisions should ensure liquidity, as some traders try to influence the outcomes, and other traders join to profit by trading against them. But then cutting the scope for selfish influence might cut liquidity too much. Better to just subsidize these markets. For example, one might set net subsidies to roughly a third of the value that proposals add on average to the system.
The hardest issue I know of, where I’m still not sure what to do, is: redistribution. Imagine a futarchy-run org whose outcome metric is the capital invested in it over the next twenty years, with $100 invested so far. Imagine someone proposes to invest $1 more in the org, on the condition that 60% of firm ownership is transferred to them. If adopting this proposal had no effect on other future investments, we should expect it to increase total capital investment in the org, and thus to be approved by speculators. But that might encourage way too much effort to go into such redistribution proposals. (I’m grateful to @azsantosk for pointing out these issues.)
Yes, if we expect many proposals like this to be adopted, it could be hard to say who will end up owning the org, and that might discourage investment. And on this basis market speculators might not approve such proposals. But I don’t have great confidence in this prediction. Yes, one might pick a better outcome metric than amount invested, but I’m not confident redistribution can’t also happen with even the best metrics.
As far as I can tell, what real orgs do today to solve this problem is to have laws and social norms that limit redistribution proposals, though such redistributions clearly do happen even so at times. We might also rely on such laws and norms in a futarchy org as well; but can we do better?
Is there a principled way to limit or prevent proposals mainly designed to redistribute among owners, rather than to increase the size of their total pie? The principle that occurs to me is: commitment. A key question in futarchy design is whether approved proposals can restrict future proposals. If yes, then proposals could be adopted early that prohibit certain defined redistribution proposals later.
Today, governments usually allow bills to cancel previous bills freely, and CEOs can overturn prior CEO instructions, though there can be large costs for breaching contracts with outsiders. These changes are only limited by constitutional constraints, which can be changed by a separate and more difficult constitutional change process. An analogous approach in futarchy would be to have two levels, a deeper level in which proposals are less frequent and harder to approve, and another level where it is easier to approve changes, except those changes are constrained by rules set at the deeper level.
A different approach would be to just have one level, but let approved proposals limit what future proposals can do. I’m not sure what is the best structure here, though in some sense they are all the same abstract structure of initial policies that limit future changes. So I’m not sure what are good ways to commit to policies to avoid redistribution and other potential problems.
Added 13Aug: The price difference conditional on accepting vs rejecting a proposal might be small if speculators expect it to be pretty surely adopted soon, even if it is rejected now. And that small difference might then prevent it from being adopted. A solution is to first propose to commit to not adopting this proposal for a long time if it isn’t adopted in this next proposal round. That commit proposal would show a big price difference, and then, given that commitment, the proposal itself would also show a big difference.
Also, it makes sense to have an extra strong liquidity subsidy just before the period when one measures the price difference. That should induce info to be revealed then, making for less info revealed during the price measurement period, simplifying the task of inferring speculator estimates from noisy prices.
We use futarchy (we call it prognootling) for family decisions sometimes. I gave examples in my talk at Manifest which will supposedly be on YouTube soon.
On a more theoretical note, I've been mulling this thoughtful warning about limits on the application of decision markets: https://dynomight.net/prediction-market-causation/
It points out how a conditional market like "If we take action A, will outcome B happen?" might tell you there's a strong _correlation_ between A and B without telling you that A will cause B. For example, maybe a conditional market says that revenue will nosedive if we fire the CEO. Does that mean we shouldn't fire the CEO? Not necessarily! If we did fire the CEO then probably we're in a universe where the company is imploding. The causation could run the opposite way.
I know you've thought about that issue -- how much it matters and how to mitigate it -- but it would be great to see your direct response to Dynomight.
I am selecting quantified self experiments based on the effect size predicted by a couple of markets on Manifold, more detail at https://niplav.site/platforms. One experiment has finished already, the second is being run.