All of the international bodies that might be considered precursors to a world government, like the UN, WTO, NATO, and the EU, are getting weaker over time. Technology and improved communications have eliminated much of these organizations' reasons for being. Take the EMU: When the world ran on cash, the Euro did much to lower trade friction within Europe. But what now in a world of electronic money?
A similar trend is happening in the United States. Political decision making at the federal level is getting more rancorous and less competent over time. Fiscal discipline is harder and harder to impose. At a certain point people will ask, "why does that thing need to exist, anyway?"
In short I think a world government is one thing Gene Roddenberry clearly got wrong.
"In such a world orgs should be larger, as their more effective governance reduces the scale diseconomies that limit org sizes today. Governments and nonprofits may also encompass more social activity, if they can learn to adopt simple robust futarchy outcome measures. This plausibly cuts their disadvantages relative to for profit orgs today. We might well even get bigger national alliances, or even a world government."
If you were attempting to write an argument against adopting it, those are some good ones. An even larger government, further intruding into the social sphere, in cartel with even more nations, potentially all of them, reads like the intro to a dystopian nightmare indeed.
I'd bet against a one world government.
1. It appears that we need to define ourselves against an enemy as much as as part of a team. And most politics is teams, unfortunately. Once a single government has been formed, factions will develop from within.
2. The motivation for one world government would be "stability" or similar. In practice, that'll end up as stagnation, decay and extraction by those at the top. Plenty of precedent for that. That inevitably breeds resentment and eventually revolution (as there would be no alternative).
3. The other side of that: China vs. Europe in the 15th century. Europe had competition internally and became dominant; China stagnated. Obviously Jared Diamond said it best. Maybe this explains point 1 in a roundabout way.
4. The 20th century was about scale; business talked only about economies of scale. In the last 20 years I've heard more about diseconomies of scale, and seen them in practice. I'm leaning towards the temporary-alliances-and-networks camp for the mid-term future.
That said, I'd never heard of Robin's futarchy idea before. Looking forward to considering it properly.
That might end up as a proper post at some point, so thanks!
Where is religion? Race? The neurotic demi-god, bent on the narcissistic and nihilistic?
It strikes me that there is a tension between the apparent endorsement here of the plausibility of extinction risk from a misaligned world futarchy and your typical nonchalance regarding the possibility of extinction risk from misaligned ASI. But isn't it likely that ASI would adopt highly effective governance structures? And if the misalignment of such governance structures is a plausible extinction risk, then the similar misalignment of the ASIs who set up those structures is also a plausible risk. That is, unless there is some reason why ASIs would be much less likely to be subject to maladaptive cultural drift (but what is that reason?)
(Aside: should we call maladaptive cultural drift "cultural meltdown" in analogy with the existing term "mutational meltdown"?)
I distinguish general risks that all our descendants pose from risks that are particular to, or especially big regarding, AI. There are many in the former category, not so many in the latter.
"Given our existing forms of government, however, these seem far from immediate concerns...". Man, you got that right. Lol.
Yup. And if we look at large-scale value aggregation so far, through UN frameworks, SDGs, population planning, and other globally endorsed goal sets, we already see the kind of shared long-term “sacred” objectives such systems converge toward. Stability, comfort, and risk aversion would almost certainly dominate if futarchy spread that far.
Futarchy would pursue those aims efficiently: a competent optimizer for mediocrity, not a path to the stars. Aggregation still compresses variance, narrowing the range of future adaptations and pulling toward the resilience tradeoffs that come with tightly optimized productive efficiency.
Curious whether you see any structural path for futarchy to maintain heterogeneity of goals over time, rather than needing to reprice it after the variance has already eroded. Any efficient scalar target eventually decays toward over-optimization, and once that diversity collapses, even fast correction can only explore a thinner space of futures, since lost variance cannot be re-generated by repricing alone. The timing of that adaptation seems critical.
I think saying that humans "seem so eager to collectively decide their future" falls prey to the fallacy of composition. What individuals (the only entities that actually have wants) want is to decide their own future and the future of everyone else. And they hate the lack of choice associated with monopoly--even one won by superior performance in the market. The result is that you don't get voluntary world government--look at the difficulty that even the anodyne WTO is having. It's possible you might get world government by non-voluntary means, but that would be tough to keep together.
Yup. A world government has been at the top of my list of worries for decades. Only rivalry and competition provides evolutionary pressure.
If there were a world government, especially one that prevented independent colonization of other worlds, there would be no place to emigrate to escape tyranny.