As futarchy interest and activity are way up lately, this seems a good time to elaborate on one of its most technical issues, one that @metaproph3t also discussed recently: decision selection bias.
> So I suggest that a futarchy system for considering and adopting proposals randomly reject say 5% of the changes that it would otherwise have accepted. This should ensure good estimates conditional on not adopting proposals
Wouldn't the market need to be made conditional on the decision being made randomly for this to mitigate decision selection bias?
Randomly not making a change does indeed sound better than randomly making a change. This is related to the idea that change is generally bad once optimization has gone on for a while.
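A rough Monte Carlo sketch of why the random vetoes help, using made-up value distributions (none of these numbers come from the post): deliberately rejected proposals are a selected sample, while randomly vetoed ones match the adopted pool.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical proposal values: value if adopted vs. if rejected,
# drawn independently here purely to illustrate the selection effect.
reject_val = rng.normal(100, 5, n)
adopt_val = rng.normal(100, 5, n)

would_adopt = adopt_val > reject_val           # decision maker's choice
vetoed = would_adopt & (rng.random(n) < 0.05)  # 5% random overrides
adopted = would_adopt & ~vetoed

# Deliberate rejections over-sample high reject-values; the randomly
# vetoed ones are an unbiased sample of the adopted pool.
print(reject_val[~would_adopt].mean())  # noticeably above 100
print(reject_val[vetoed].mean())        # matches the adopted pool's mean
print(reject_val[adopted].mean())
```

So at least in this toy setup, the randomized subset alone yields unbiased "conditional on not adopting" estimates; whether the market prices would actually track that subset is the question raised above.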
> Assume that market speculators know only that the truth is uniformly distributed within the oval shown, but the actual decision to dump or keep will be made according to the actual point in this space.
This argument interestingly has the structure of a proof by contradiction - if we were to assume that Futarchy always made the "right" decision, then it would be reasonable to assume that the speculators' conditional probability given the selection would be the linearly-separated subset of the oval, but this implies that the decision could be "wrong". So this argument proves futarchy has a flaw, but doesn't say much about the scale of the flaw - it could be only a minor one. I'd be interested in seeing more constructive reasoning about how this would actually manifest in terms of probabilities of different outcomes for concrete specifications of the speculators.
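One way to make this concrete is a quick simulation of the oval model. The post's figure isn't reproduced here, so the oval below is invented: values uniform in an ellipse centered at (keep=100, dump=101), with "keep" outcomes more spread out, and a decision maker who sees the true point and picks the larger value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Sample uniformly inside a hypothetical ellipse (made-up parameters).
theta = rng.uniform(0, 2 * np.pi, n)
r = np.sqrt(rng.uniform(0, 1, n))      # sqrt gives uniform density over the disk
keep = 100 + 10 * r * np.cos(theta)    # wide semi-axis for "keep"
dump = 101 + 2 * r * np.sin(theta)     # narrow semi-axis for "dump"

# The decision maker knows the true point and picks the larger value.
chose_keep = keep > dump

print("unconditional means:", keep.mean(), dump.mean())   # center favors dump
print("E[keep | keep chosen]:", keep[chose_keep].mean())  # well above 101
print("E[dump | dump chosen]:", dump[~chose_keep].mean()) # barely above 101
```

Under these assumed numbers the conditional prices make "keep" look clearly better even though the oval's center lies in the dump region - so the bias is not minor here, though its size obviously depends on the shape and tilt of the oval.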
> Here are many other correlations, showing that this problem is rare overall:
Does this assume decisions/proposals are constructed non-adversarially?
It's remarkable how poorly understood this stuff is. Metaculus has struggled for years to understand what should be a relatively simple point. I know for a fact that some people who worked at Metaculus understood it perfectly well but... the organization just didn't do anything with the knowledge, they continued to push "conditional forecasting" as a decision aid in a naive way.
The first chart seems to clearly argue that the CEO should be kept, with the “keep” outcomes corresponding to higher prices?
No, the average of the oval is in the dump region.
Copy. Took a while to get intuition to align. Still a bit counterintuitive that keeping the CEO has a wider range of outcomes than dumping.
Thanks for the mind bender!
Good post. More like this please.
> The two dimensions here are of the value of a company’s stock if the CEO is dumped, and if the CEO is kept. Assume that market speculators know only that the truth is uniformly distributed within the oval shown, but the actual decision to dump or keep will be made according to the actual point in this space.
I don't understand this. How could *anyone* know the actual point in the space? No one ever directly observes a (keep price, dump price) pair; if they keep, they observe only a keep price, and if they dump, they observe only a dump price. And before they keep or dump they have neither.
I think what you're saying is that the decision maker somehow has omniscient information about the outcome of the decision? Or in practice not omniscient but a lot more than the speculators have, so that the decision maker doesn't know the actual point but does know a small region the point will lie in.
It is a simple model whose virtue is that it allows us to make calculations based on it. Yes of course it assumes more knowledge than is plausible.
I guess an MWE is: if you believe choice B > choice A, but that A is heavily underpriced while B is correctly priced, you are incentivized to bet A up so that it gets chosen, and you profit from this information.
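Toy arithmetic for that incentive, with invented numbers (true values and prices are all hypothetical):

```python
# You believe B is actually the better choice...
true_A, true_B = 98.0, 100.0
# ...but A's conditional market is heavily underpriced.
price_A, price_B = 90.0, 100.0

# Buy A-conditional shares, pushing the price from 90 to just above 100
# so that A gets chosen; assume you pay roughly the average on the way up.
avg_cost = (price_A + price_B) / 2
profit_per_share = true_A - avg_cost
print(profit_per_share)  # 3.0 per share, even though B was the better choice
```

This ignores other traders selling A back down above 98, which is the usual counterargument, but it shows the raw incentive the comment describes.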
What's the advantage of futarchy over just auctioning "the right to decide and collect reward" to the highest bidder -- or better, selling voting shares?
Prediction markets let thousands of traders contribute to each decision. Few could afford to buy an entire decision, and many would fear what they might do with such powers.