I have often taken on the role of econ futurist, analyzing the social consequences of particular future techs. For example, see my book The Age of Em.
I am also the inventor of a new form of governance, futarchy, which is now finally seeing some substantial trials, 25 years after its invention. Which is making me realize that it’s high time I analyzed the social consequences of this as a futuristic tech. That is, how might the world change if this tech ends up being as promising as I’ve long thought?
Futarchy recruits speculative market traders to advise decisions. So its value is largest for high value decisions made by actors centralized enough to coordinate to pay for advice, and where some speculators are able to collect relevant info. The obvious top candidate here is: org, not personal, decisions.
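To make that mechanism concrete, here is a minimal sketch (in Python, with hypothetical names and numbers) of the basic futarchy decision rule: run outcome-metric markets conditional on each choice, take the choice whose conditional price is higher, and call off trades in markets conditioned on choices not taken. This is just an illustration under those assumptions, not any particular implementation.

```python
# Minimal futarchy sketch: all names and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class ConditionalMarket:
    """Market estimating an outcome metric, conditional on one choice.

    Trades are voided (refunded) if that choice is not taken, so the
    price reflects the expected metric given the choice is made.
    """
    choice: str    # e.g. "fire CEO" or "keep CEO"
    price: float   # current market estimate of the outcome metric

def futarchy_decision(option_a: ConditionalMarket,
                      option_b: ConditionalMarket,
                      margin: float = 0.0) -> str:
    """Take the choice whose conditional market expects the better outcome.

    `margin` is a hypothetical status-quo bias: only switch away from
    option_b (the default) if the expected gain clears it.
    """
    if option_a.price > option_b.price + margin:
        return option_a.choice
    return option_b.choice

# Example: markets estimate a firm's stock price conditional on firing its CEO.
fire = ConditionalMarket("fire CEO", price=107.5)
keep = ConditionalMarket("keep CEO", price=102.0)
print(futarchy_decision(fire, keep, margin=1.0))  # -> "fire CEO"
```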
My default scenario re any new tech is a mostly gradual spread of adoption, across industries, org types, and decision topic areas. Gradual because people don’t believe a new thing might work for them until they see it applied to something pretty close to their application area. And because typically each new area needs somewhat different implementation details, which take concrete experiments to work out.
First, let’s try to identify some key dimensions of uncertainty that make this all hard to predict.
One key variable is how many messy details we will have to invent or discover before futarchy can be effective across a wide scope of applications. The fewer such details, the faster futarchy might spread, and the bigger the jumps we might see from some application areas to others.
Another key variable is how futarchy will be framed politically, leading some people to oppose it just to support their political side, regardless of its effectiveness. With different framings it gets different allies and detractors, and so is adopted faster among allies than among detractors. This may also influence the metrics by which it is judged.
A third key variable is how large a moat futarchy-supplying firms will be able to generate. The larger the moat, the more concentrated this industry will be. I don’t see big moats yet, but we can expect firms to be creative in trying to generate them.
Okay, second, let’s try to predict the early applications of futarchy.
One predictor of where futarchy will be applied first is the value of advice on particular decisions. This is correlated with the size of an org, and the size of particular decisions at issue in that org. Futarchy is better at stopping a few big bad lumpy changes than at preventing a slow gradual decline due to many small decisions. And better at pushing a few big lumpy radical improvements than at promoting gradual gains. Futarchy works best outside the Overton window.
Decision value is also correlated with the dysfunction of the org’s existing processes. The more that pride or politics distorts decisions today, the better. By this measure government or charity orgs seem especially promising.
Another predictor of early applications is potential for learning. Early applications are not just where big value could be realized, but also where there are many similar decisions for which key outcomes are quickly revealed. That is where the most experimentation can take place, to work out key institutional details and prove effectiveness to skeptical audiences. Maybe new hire decisions, and project deadline decisions.
A third application predictor is powerful owners. The more that an org’s owners are in control, eager to efficiently achieve their goals, and able to understand the power of speculative markets, the more quickly they will adopt futarchy. This points most strongly to privately held for-profit orgs.
A fourth application predictor is where relevant outcome measures are simpler, harder to manipulate, and more agreed on. Such as with for-profit orgs with a public stock price, or crypto orgs with a public coin price. The thicker the markets in these value metrics, the better, as that adds subsidy to trading and makes it easier for market price differences to show smaller value differences.
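As a toy illustration of that thickness point, with entirely made-up numbers: a gap between conditional prices is only informative once it is large relative to market noise, and the bid-ask spread is used below as a crude proxy for that noise.

```python
# Toy illustration (made-up numbers): thicker, better-subsidized markets have
# tighter spreads, so smaller conditional-price gaps stand out from the noise.

def gap_is_informative(price_adopt: float, price_reject: float,
                       bid_ask_spread: float) -> bool:
    """Treat a conditional price gap as a real signal only if it exceeds
    the spread, used here as a crude stand-in for market noise."""
    return abs(price_adopt - price_reject) > bid_ask_spread

# Thin market: a 2-point spread hides a 1-point expected value difference.
print(gap_is_informative(101.0, 100.0, bid_ask_spread=2.0))  # False
# Thick market: a 0.2-point spread reveals the same 1-point gap.
print(gap_is_informative(101.0, 100.0, bid_ask_spread=0.2))  # True
```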
A fifth application predictor is the number of people who could plausibly learn about a decision and trade on it without revealing their identity, to avoid org retaliation. So small secretive orgs where only a few can access relevant info are not good candidates.
A sixth predictor is how powerful, entrenched, and proud existing deciders are. The more that the people who now make these decisions feel that making or influencing such decisions is a big part of their power, prestige, and identity, the less willing they will be to let futarchy decide instead. This is why org funding seems a promising application; bosses have never been in simple control of their funding.
Two decades ago, prediction markets had the most success replacing focus groups and picking new innovation projects, both areas where existing managers are not used to expressing opinions, nor much inclined to do so. (Such markets typically predicted end-of-market prices, not actual choice consequences.)
Okay, we’ve sought key variables of uncertainty, and tried to identify early application areas. Now let’s try to foresee longer term consequences. Imagine a world where futarchy has spread far and wide, to be as ubiquitous as cost-accounting, statistics, or randomized trials are today. How is that world different?
In such a world orgs should be larger, as their more effective governance reduces the scale diseconomies that limit org sizes today. Governments and nonprofits may also encompass more social activity, if they can learn to adopt simple robust futarchy outcome measures. This plausibly cuts their disadvantages relative to for-profit orgs today. We might well get bigger national alliances, or even a world government.
In this world, the social status of arguing over facts and causal effects should decline relative to that of arguing over values, especially values expressed as measurable outcomes. While fact disputes will be settled in the speculative markets, value disputes will still be aired in public conversations that influence collective choices over futarchy outcome measures. Instead of arguing over what our situation is, or over which levers have what effects, prestigious cultural conversation will focus more on what outcomes we want.
Finally, let’s imagine the most dramatic scenario: a world government, or at least a government encompassing a large fraction of the world, adopts futarchy tied to some widely shared sacred goals, expressed concretely as some long term outcome metrics. If these goals were inconsistent with a collapsing civilization, that might be enough to prevent such a collapse.
What sort of long term goals might gain widespread support? Here are priorities, relative to a max of 100, from two sets of polls, that got 2714 (left column) and 7788 (right column) responses.