8 Comments
James Hudson:

You are assuming a clear distinction between facts (about what would happen under various circumstances) and values; but arguably the *goals* you list are mere *means* to what is ultimately valued. For example: *Stop global warming* is valued only because global warming is supposed to have various bad consequences. But would it really have those consequences (while not having unexpected good consequences that outweigh the bad)? That is the sort of question that should be put to the prediction markets.

Robin Hanson:

More abstract goals would work better, if they were measurable and close to what we really want. But more concrete goals seem to inspire more people to unite around them.

Ivan Vendrov:

The surprising conclusion for me is that futarchy is in some sense centralizing: because the markets for large-scale decisions will be more liquid, the quality of advice available to large orgs will be better than that available to small orgs, so on net we will entrust more decisions to large orgs.

I'm curious how this interplays with your concerns about reduced cultural variation; it seems like futarchy will by default both decrease the number of orgs and hurt every org's ability to maintain a unique culture, because they will pay for that culture in reduced advice quality.

Robin Hanson:

Even with only a few large futarchy orgs, or a single one, their speculators might instruct them to induce great cultural variety internally, if they estimate that would lead to better outcomes.

Krishangh Arjun:

Wouldn't Goodhart's law come into play here? If you try to optimize for anything except the most legible and easily measurable goals, you end up in a scenario where less measurable values are cast aside. Hell, this already happens often enough without adding in a massive incentive like the one in your model; look at education and standardized testing. How do you propose creating reward systems that are immune to exploitation, reasonably approximate the actual goals (which are most likely fuzzy and immeasurable), and also avoid unanticipated negative externalities? I am told that safely aligning a powerful optimization system is difficult.

cobey.williamson:

There are already goals from a world government of sorts: the UN's Sustainable Development Goals (SDGs). How is futarchy doing with those?

Berder:

You should start some prediction markets about the outcomes of futarchy itself: the probability it will be implemented to varying degrees, and GDP or other positive metrics conditional on the degree of implementation.
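[Editor's note: the conditional markets proposed here work by trading contracts that pay off only when the condition occurs, and dividing prices. A minimal sketch of reading off such a market, assuming two hypothetical binary contracts priced as probabilities; all names and prices below are illustrative, not real markets.]

```python
# Two hypothetical binary markets, each priced as a probability:
#   p_adopt          -- pays $1 if futarchy is implemented at scale
#   p_adopt_and_gdp  -- pays $1 if implemented AND GDP exceeds trend
# The implied conditional probability is the ratio of the two prices:
#   P(gdp | adopt) = P(gdp and adopt) / P(adopt)

def implied_conditional(p_joint: float, p_condition: float) -> float:
    """Conditional probability implied by two market prices."""
    if not (0.0 < p_condition <= 1.0 and 0.0 <= p_joint <= p_condition):
        raise ValueError("need 0 <= p_joint <= p_condition <= 1, p_condition > 0")
    return p_joint / p_condition

# Made-up example prices:
p_adopt = 0.05          # market gives a 5% chance of adoption
p_adopt_and_gdp = 0.04  # 4% chance of adoption AND above-trend GDP

print(implied_conditional(p_adopt_and_gdp, p_adopt))  # -> 0.8
```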

Robin Hanson:

Someone would have to pay for such markets. And being long term, they'd be more expensive.
