Fortunately ‘liberal’ as a common label seems to have been replaced by ‘progressive’ (a viewpoint which is decidedly illiberal). But on the left, ‘liberal’ (and ‘democratic’) now effectively mean: anything promoting the growth and interests of the administrative state. If elections must be cancelled to achieve this, then cancelling them is democratic. If millions of poor, nonwhite students must be consigned to dangerous and dysfunctional public schools to achieve this, then that is liberal. If state and media collaborate to monitor and punish dissent in order to achieve this, then THAT is liberal.
It’s almost as much of a mess as the label ‘conservative’ under Trump. But not quite…
https://jmpolemic.substack.com/p/leviathan
What kind of "sacred goal" would your civilization be based on? The maximization of economic growth or profit? Creating widgets? Building pyramids for the pharaohs? It seems to me that any civilization based on pursuing some kind of sacred cow like that is doomed to failure. We've tried spreading Christianity, Islam, democracy and communism as goals to motivate individuals to participate in maintenance of empires in past. That hasn't stopped the decline of empires in the past, and it won't in the future. That's because a not insubstantial proportion of the population are intelligent enough to realize that at the outset that such "goals" are meaningless or because they learn through direct participation that those goals are meaningless for themselves and the ones they care about.
You state here, in a potentially extreme example of what a liberal state might do to better the conditions of SOME of its people:
"But are we allowed to kill people, or prevent them from existing, to achieve this?"
If your goals are "sacred" wouldn't that almost automatically justify killing (sacrificing) people if it was believed necessary to attain that goal? It was certainly "necessary" in the past for some sacred goals.
You go on to suggest that:
"Others say “liberal” means “rights are preserved”. This suggests minimizing a weighted rate of rights violations, with different kinds of violations getting different weights. But the obvious way to max this is to have zero people doing nothing. "
Why would the preservation of rights suggest that different rights, or violations of rights, be weighted differently? I agree it shouldn't be so. But I suspect I would take issue with how you think the weighting occurs. In most Western cultures you need money to afford legal representation to protect your rights. Most people in this country can't afford decent legal representation, certainly not if they are opposed by the government or, more importantly, by corporate entities. The last sentence in your paragraph is likely an attempt at absurdist humor.
As for "easy exit" so let's say I live in Chicago and I really hate the job I have because my boss is a complete jerk so I decide to find another job in Piscataway, New York. But under your system maybe I don't have enough in the way of personal assets to sell off in order to "pay off my debt to the local governance unit" and because my boss is this complete jerk he decides he wants to also lower my social reputation, so he sends a letter to whatever local tax authority measures all of your individual metrics.
Now I have a choice. I can make the move to Piscataway, but I will have to indenture a portion of my salary to Chicago to pay off my social debt to the city (your instituted form of slavery), or I could just stay in Chicago and work with the boss who now knows I hate his friggin' guts.
Great system you're thinking up there... for someone, though I'm not sure who.
With either of your preferred goals, I imagine the result to be a WW2-style system (as implemented by the English-speaking allies), but instead of trying to build 100 aircraft carriers and 20,000 aircraft and tanks to invade Japan, we'd be building space stations or something instead.
Perhaps the futarchy system would, in its effort to avoid pissing too many people off and causing a revolution, dial the effort back a bit to something more like the US Apollo program. Is that your Utopia?
If it prevents civ collapse, I'm willing to sacrifice many things for that goal.
Futarchy is so wildly different from anything that has ever succeeded as a government in the past. And our current government is really not so bad in the grand scheme of things. If we ditch the current pretty decent system for futarchy, the range of outcomes spans “a little bit better” to “terrible catastrophe”. Arnold’s post today captures the essence of my concern. https://open.substack.com/pub/arnoldkling/p/humilitism?r=f9q2l&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
The idea is to try it on smaller scales, and only apply it at larger scales after success at smaller ones. Are you still worried that there are problems with futarchy that would only appear after a long time period? Problems worse than what democracy is showing after a long time period?
As a “humilitist” I am skeptical of grand theory, knowing there are so many unforeseen ways a beautiful idea can go wrong when it meets the real world. Successful real-world trials would definitely help.
One general concern - it sounds like a finance-based governance system (edited: might be seen as elites manipulating the system to represent elite interests, whether fair or not). You could say the current system does too in practice, but one person one vote gives the appearance of equal representation, and allows for the occasional populist uprising to maintain some balance. If political influence is transparently based on your ability to bet/invest, how would you prevent class resentment from building to a boiling point among the less wealthy who feel left out of the system?
What comes through strongly is the tension between measurability and adaptiveness. Price signals offer a way to aggregate plural ends without prescribing them, yet they also encode time preferences that quietly shape what futures remain legible. Any metric chosen ends up expressing a theory of what is allowed to matter and when. Good post!
How does one cultivate sacred values?
Perhaps it would be possible to create an institutional structure that cultivated peaceful competition between sacred values? This having just occurred to me, I have no idea what metric would be appropriate, if any.
In the past, convergence was to some degree illusory, encouraged by mechanisms that discouraged and marginalized criticism. Should we be imagining something that achieves genuine convergence, or a new improved propaganda/gatekeeping mechanism?
Liberalism is not: welfare maximization, rights-violation minimization, neutrality across identities, or asset liquidity per se.
Those are proxies, not the thing.
What liberal orders actually optimize for—when they work—is: the preservation of adaptive agency under disagreement. That cannot be captured by a single static metric like “total asset value” without pathological side effects.
A futarchy aligned with liberalism should optimize negative entropy in coordination, not wealth, rights counts, or exit value directly.
How exactly do you measure total "negative entropy in coordination"?
We’re “just” getting to that ephemeralization space. How does a place-rooted AI “shade” fight entropy without burning out?
Picture it as a metabolic engine, sort of like Existence’s embedded allies, minimizing disorder in community actions while tethered to human-scale energy flows.
It reduces conditional joint-action entropy, H(X|S). Think townsfolk aligning on, say, resource sharing, using local signals (exoselves, sensors) processed via edge compute (<5W chips by 2025). Subtract the overhead (minimal, thanks to ephemeralized tech) and discount for true buy-in (no coerced "harmony"), and you get net negentropy: real order, not theater.
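A minimal sketch of that bookkeeping, assuming the "shade" can estimate action distributions from logged community decisions; the function names and toy numbers below are purely illustrative:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c > 0)

def net_negentropy(actions, signals, overhead_bits, buy_in):
    """
    Toy negentropy score for a batch of logged (signal, action) pairs:
      H(X)   = entropy of actions with no shared signal
      H(X|S) = entropy of actions given the shared local signal
      gain   = H(X) - H(X|S), the coordination the signal actually buys
    Then subtract an overhead term and scale by a 0-1 buy-in factor
    (how voluntary the alignment was), per the comment above.
    """
    h_x = entropy(Counter(actions))
    by_signal = {}
    for s, x in zip(signals, actions):
        by_signal.setdefault(s, Counter())[x] += 1
    n = len(actions)
    h_x_given_s = sum((sum(c.values()) / n) * entropy(c) for c in by_signal.values())
    return buy_in * (h_x - h_x_given_s) - overhead_bits

# Example: townsfolk choosing among resource-sharing actions, where a shared
# local signal ("drought" vs "plenty") narrows the spread of choices.
actions = ["share", "hoard", "trade", "share", "share", "trade", "hoard", "share"]
signals = ["drought", "drought", "plenty", "drought", "drought", "plenty", "plenty", "drought"]
print(net_negentropy(actions, signals, overhead_bits=0.1, buy_in=0.9))
```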
This can’t be a cloud-guzzling Skynet; it’s a solar-fed, LOCAL grid-tied system, optimizing trust-speed governance as reciprocal accountability: the sousveillance IDEAL.
Liquid holocracy vets custodians, ensuring AI evolves minds without feudal drift.
Thermodynamically, it’s sustainable: 2025’s federated learning and 6G meshes keep energy low, countering decay like an Enlightenment uplift engine. Feasible? Singapore’s smart grids and Estonia’s e-gov are halfway there.
Human-memory-based civics are only as good as their collective memory… and we’re horrible at demanding receipts.
Root the system AND all open/volunteer/edge data “soulbound” in a PLACE… guarantee transparent custody… auto-log the receipts, complete all the math, show anomalies to local custodians voted/rotated through liquid holocracy…
How’d you tweak all this to keep AI’s soul ‘grounded’?
Settling orbit, the moon, and Mars seems like the way to go. It’s an ancestral instinct to explore and settle new land, and there are already people ready to jump on this sacred mission. The quest for immortality, on the other hand, is hard to make sacred, because it’s hard to usefully contribute to it if you aren’t some sort of scientist or engineer or adjacent high performer.
Why not setting up a futarchy aiming for, say, Cosmism? We are in the AI era anyway.
What outcome metric would that use?
To build a Cosmist super-AI that can resurrect the dead you need, essentially, complete control over the universe, not just the Milky Way galaxy. So you’re setting a goal that is far, far in the future and that still requires basic steps to completion:
1) Humans need to survive and thrive to continue the work. It’s no good if some other species builds up a Cosmist AI with no interest in humans. Good for them, not for us.
2) Humans need to colonize this and other galaxies to marshal the huge resources needed. It’s no good if we just stay on Earth enjoying virtual universes and letting robotic probes have a look at distant objects.
Basically, you’re setting very simple, easy-to-understand desired outcomes for your futarchy: the survival and cosmic expansion of the human race.
The self-assessed value of human life is the only fair metric.
As that varies, do you want to target its average, its median, or its sum total across actual people?
The average, proxied by a randomly selected sample, controlled for age and gender when measuring changes from the previous such sample.
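A rough sketch of that control, assuming each sample is a list of (age bucket, gender, self-assessed value) records; the reweighting scheme below is just one illustrative way to do it:

```python
from collections import Counter

def controlled_average(prev_sample, new_sample):
    """
    Average self-assessed value in new_sample, reweighted so its age/gender
    mix matches prev_sample's, so changes in the metric aren't driven by
    changes in who happened to get sampled.
    Each sample is a list of (age_bucket, gender, self_assessed_value).
    """
    prev_mix = Counter((a, g) for a, g, _ in prev_sample)
    new_mix = Counter((a, g) for a, g, _ in new_sample)
    n_prev, n_new = len(prev_sample), len(new_sample)

    total, weight_sum = 0.0, 0.0
    for a, g, v in new_sample:
        # Weight each respondent by how over/under-represented their cell is
        # relative to the previous sample.
        w = (prev_mix[(a, g)] / n_prev) / (new_mix[(a, g)] / n_new)
        total += w * v
        weight_sum += w
    return total / weight_sum

prev = [("30s", "F", 7.2), ("30s", "M", 6.8), ("60s", "F", 8.1), ("60s", "M", 7.5)]
new = [("30s", "F", 7.0), ("30s", "M", 6.9), ("30s", "M", 7.1), ("60s", "F", 8.3)]
print(controlled_average(prev, new))
```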
> ... set a “liberal” futarchy outcome metric to be a sum total of all assets owned by people within the governance unit ...
> However, my main reservation here is about adaptiveness. Market prices discount future returns at market rates of return, which quickly make future generations unimportant. So a region run by this sort of liberal futarchy would likely not much resist a civ decline, if its cultures tended in that direction.
> Which is why I’d instead prefer a futarchy that puts a big weight on achieving as soon as possible a sacred goal in conflict with civ decline over the next few centuries. Like say medical immortality or a million people living in space.
What about setting the "futarchy outcome metric to be a sum total of all assets owned by people within the governance unit" three hundred years from now, or some other future date? (For the purpose of avoiding "civ decline over the next few centuries.") Efficacy? Feasibility? Pros and cons?
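For a sense of scale on feasibility: a quick sketch of how heavily a 300-year-out payoff gets discounted, assuming traders use something like an ordinary market rate of return (the 5% figure is only an illustrative assumption):

```python
def present_value(amount, annual_rate, years):
    """Present value today of a payoff received `years` from now."""
    return amount / (1 + annual_rate) ** years

# A $1,000,000 contract payoff 300 years out, at an assumed 5% market rate:
pv = present_value(1_000_000, 0.05, 300)
print(f"${pv:.2f}")  # roughly $0.44 -- today's traders have almost no stake in it
```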
Yes, if adopted and retained that would work. I just worry that people wouldn't feel emotionally attached enough to it to retain it.
The only reason prediction markets work is because they don't really influence outcomes.
Futarchy seems like a pay-for-policy simplification of lobbying?
No, you don't understand yet; read more.
Helpful nudge. Reread. Still don't see the difference.
E.g., the democratically elected target metric is poverty (minimization). Prediction markets on the corporate tax rate's impact on poverty will see significant participation by corporates and investors.
The off-prediction-market value of the policy >> prediction market return. That's the core problem.
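A back-of-the-envelope sketch of that asymmetry, with purely illustrative numbers (nothing here is from the post):

```python
def manipulator_budget(policy_value_to_firm, prob_shift_from_manipulation):
    """
    Max expected trading loss a firm can rationally accept on the policy
    market: the off-market value it gains times the probability shift its
    manipulation buys.
    """
    return policy_value_to_firm * prob_shift_from_manipulation

# Illustrative: a corporate tax cut worth $500M to a firm, where distorting the
# market raises the chance of adoption by 10 percentage points.
budget = manipulator_budget(500e6, 0.10)
print(f"Worth losing up to ${budget:,.0f} on the market")  # $50,000,000
```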
You could argue that policy betting markets could still be a cleaner and more transparent platform for pay-for-policy regulatory capture. Doubt it would lead to better social outcomes though.
Maybe a step-back and formal re-examination of the robustness and value of futarchy (liberal or otherwise) would help readers like me see where you're coming from?
https://www.overcomingbias.com/p/rah-price-manipulatorshtml
You seem to assume functioning markets, and you model manipulators as small (or big) noise traders.
It is intuitive how manipulator participation can improve market price discovery if manipulators are noisy, independent, and uncorrelated, or form a minority of trade volume. But outcomes with large off-market value will draw in manipulator whales who are highly correlated and who wield significantly more market power (as a group) than the predictors.
Standard and established mode of market failure.
The math models cited above have only one whale of a manipulator.
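A deliberately crude sketch of that correlated-whale worry, treating the market price as a capital-weighted average of trader beliefs; real market dynamics are much richer, and the cited models argue informed traders can profit by correcting exactly this kind of distortion:

```python
import random

def market_price(beliefs_and_capital):
    """Toy price: capital-weighted average of traders' beliefs about the outcome."""
    total_capital = sum(c for _, c in beliefs_and_capital)
    return sum(b * c for b, c in beliefs_and_capital) / total_capital

random.seed(0)
true_value = 0.30  # "true" probability the policy improves the target metric

# Honest predictors: noisy but unbiased beliefs, modest capital each.
predictors = [(min(max(random.gauss(true_value, 0.05), 0), 1), 1.0) for _ in range(100)]

# Correlated manipulator whales: all push the same direction, with far more capital.
whales = [(0.90, 50.0) for _ in range(5)]

print(f"predictors only:        {market_price(predictors):.2f}")           # near 0.30
print(f"with correlated whales: {market_price(predictors + whales):.2f}")  # pulled toward 0.90
```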