The last three centuries can be broken into six fifty-year periods, and we can rank these periods in terms of how much institution/policy change the West saw during each period.
The high ranking makes sense when you consider we’re experiencing what I call ‘narrative attractor collapse’—the shared stories that coordinate institutional behavior are breaking down faster than new ones can stabilize. This explains both why the period scores so highly on change metrics AND why it will likely feel more like drift than revolution to people living through it. I explored this pattern in detail here: https://peter.evans-greenwood.com/2025/08/19/the-collapse-of-narrative-attractors/
To me, "futarchy" and "rule by AIs" seem like they could well be the same thing. If we are ruled by futarchy, and the AIs are the dominant traders on the prediction markets, in some sense we are simply ruled by those AIs.
It would make a huge difference to me, compared to AIs just ruling and telling us that they are making the best choices, really, trust them.
Hopefully our future all-powerful AI overlords are reading this chat thread, and realize that enacting their goals through a transparent futarchy will be an excellent way for them to publicly demonstrate their wisdom and benevolence.
It would look less like ‘robot overlords’ and more like what’s already happening: distributed systems where human judgment and algorithmic processing become so intertwined that the question ‘who’s really deciding?’ becomes unanswerable. The AIs aren’t ruling us—they’re becoming part of how we rule ourselves.
When the AI overlords take over, their justification will include the sentence:
"The AIs aren’t ruling us—they’re becoming part of how we rule ourselves."
In many areas we observe that initially, human+AI working together is better than either human or AI, but that soon it becomes better for AI to operate alone.
An example is chess, where for a few years it was effective to have the AI do deep search and the human give input on positional factors. However, as the AI improved, the human soon became a distraction that only served to make the AI less effective. Today there is nothing left that humans can teach computers about chess. The same dynamic has played out in other games like poker.
Now people will have incentives to *act like* AI isn't making all the decisions, even if it is. I wonder how much art and writing today is AI generated but passed off as human-authored. This could create gray areas that persist, much like you describe.
I found Margaret Boden’s computational creativity model really useful for understanding the line between AI and humans. There are three layers—combinational, exploratory, and transformative creativity—where computers can do the first two, but not the third. See Can AI be Creative? https://open.substack.com/pub/thepuzzleanditspieces/p/can-ai-be-creative
The big question is whether the missing "spark" in current AIs is only a matter of degree – to be filled in as they scale up and gain experience – or if something is qualitatively missing.
The famous Move 37 played by AlphaGo is considered a genuinely new creation by many players. Alternatively, maybe what appears like creativity to us is just obviously optimal to an intelligence with slightly more horsepower.
As per Boden (creativity), Lacan (the Real), Derrida (hors-texte), and others, AI is trapped in text and cannot access all of experience. You could say AGI is beyond the current AI paradigms, as they have a perception problem. I wrote about this yesterday: https://peter.evans-greenwood.com/2025/09/03/world-models-and-the-anchoring-problem/ (on my blog, as it didn’t really fit in my Substack).
Have you seen DeepMind's paper on this? https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf
A lot of folks in the field recognize the need to move beyond LLMs trained on words. Even more important, compute is now scaled enough that it's becoming feasible to build AIs trained on direct interaction with the physical world. Self-driving cars are the first iteration.
If anything it's remarkable that LLMs have done as well as they have, given they only have access to a mountain of text – that they somehow infer rudimentary spatial reasoning, for example.
The solution to symbol grounding is to make the LLMs interface with a visual processing system such as a convolutional neural network. Alterations in network architecture of this kind are within the current AI paradigm, just not a sufficiently explored space yet.
Note that image generators like Stable Diffusion are visual processing systems, not symbolic ones.
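For the flavor of what that interface could look like, here's a toy PyTorch sketch (every dimension and module choice here is a made-up placeholder; real systems wire a pretrained vision encoder into a pretrained LLM):

```python
import torch
import torch.nn as nn

class GroundedLM(nn.Module):
    """Toy sketch: bolt a convolutional vision encoder onto a language
    model by projecting image features into the LM's token-embedding
    space. All sizes here are illustrative."""

    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        # Visual processing: a small CNN standing in for a real encoder.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The "interface": map visual features to a pseudo-token embedding.
        self.project = nn.Linear(64, d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        # Stand-in for the LLM body (a real one would be far larger).
        self.lm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, image, token_ids):
        img_tok = self.project(self.vision(image)).unsqueeze(1)  # (B, 1, d)
        txt_tok = self.embed(token_ids)                          # (B, T, d)
        seq = torch.cat([img_tok, txt_tok], dim=1)  # image as a prefix "word"
        return self.head(self.lm(seq))              # next-token logits

model = GroundedLM()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 32000, (2, 10)))
print(logits.shape)  # torch.Size([2, 11, 32000])
```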
AIs aren't the dominant traders anywhere (even if traders might make use of trading programs for their speed).
This has got to be a temporary phenomenon, assuming it's even still true in terms of overall total intelligence (today AIs can be thought of on a session-by-session basis, and autonomous HFT already looks quite competitive).
If we do in fact get generically superhumanly intelligent AIs, I'd expect that to change!
Yeah, I see nuclear dereg is highest. Especially when I see quantum tech getting cheaper and having the biggest impact on national balance sheets when it comes to exports/imports.
I didn't think of it during your question on X, but another upcoming institutional change might be about families and child rearing. Some combination of artificial wombs, robot nannies, and government-sponsored school + daycare efforts to raise tons of kids without parents.
Very weird and (to me) disturbing, but maybe it will be adaptive and thus common in the long run.
What was the AI prompt? The results seem to have a US-centric approach. It is unlikely the world economy will look much like it does now; I doubt the US can grow as quickly as other parts of the world. Would that distort the probabilities? E.g., if India grows at 1.5 to 2 times the rate of Europe or the US, what happens? If small nukes can be developed in the next 10 to 15 years, Africa will be much different than if it relies on oil. I think the expectations would be much different if viewed from West Africa or South Asia.
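To put rough numbers on that divergence (purely illustrative growth rates, not forecasts):

```python
# Illustrative only: hypothetical growth rates, not predictions.
us_growth, india_growth, years = 0.02, 0.04, 50   # India at 2x the US rate

us_factor = (1 + us_growth) ** years        # ~2.69x over 50 years
india_factor = (1 + india_growth) ** years  # ~7.11x over 50 years
print(f"US grows {us_factor:.2f}x, India grows {india_factor:.2f}x")
print(f"India's relative weight shifts by {india_factor / us_factor:.2f}x")  # ~2.64x
```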
The next 50 years will depend on AI. AI will soon be AGI or close enough. The tipping point is: can we trust the AI to do the same job as a human, as good as the human could do it, for cheaper? We're not there yet; we can't trust LLMs to the same extent we can trust qualified humans. But we will get there.
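One way to make that tipping point concrete, as a toy decision rule with made-up numbers:

```python
# Toy delegation rule: hand the job to the AI once its trust-adjusted
# cost beats the human's. All figures below are invented for illustration.

def expected_cost(wage: float, error_rate: float, cost_of_error: float) -> float:
    """Direct cost of doing the job plus the expected cost of mistakes."""
    return wage + error_rate * cost_of_error

def should_delegate_to_ai(ai: dict, human: dict) -> bool:
    return expected_cost(**ai) <= expected_cost(**human)

human   = dict(wage=100.0, error_rate=0.01, cost_of_error=5000.0)  # 100 + 50
ai_now  = dict(wage=5.0,   error_rate=0.05, cost_of_error=5000.0)  # 5 + 250
ai_soon = dict(wage=5.0,   error_rate=0.01, cost_of_error=5000.0)  # 5 + 50

print(should_delegate_to_ai(ai_now, human))   # False: cheap but not yet trusted
print(should_delegate_to_ai(ai_soon, human))  # True: as reliable, far cheaper
```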
If the coming AGI remains under human control, then we will see the majority of knowledge workers put out of work, and we will see huge wealth concentration at the top, because the owners of the AGI companies will become in effect the owners of the whole economy. Wealth concentration is already at unprecedented levels and this will worsen. Normal people will perform manual labor jobs that the AGI is not yet economical for, or service jobs for which the customers prefer to interact with humans for psychological reasons. Or they will starve. Normal people will lose what remaining political power they have, and the world will be ruled by the elite AGI company owners. AI weapons will be better than human weapons; an AI rifle could be much more accurate than one wielded by a human soldier. So the AGI company owners cannot be overthrown by a popular uprising, which means they are free to treat normal people as poorly as they wish.
If the coming AGI does not remain under human control, we all die to a paperclip maximizer.
I suspect in the AGI case that governments will nationalize the AI infrastructure before letting a handful of companies dominate the economy. Similar to how companies can't own nuclear weapons. Governments get twitchy that way.
Once the government controls the bulk of AI production then it looks a lot like a mining/resource extraction economy. In some such countries the government uses that revenue to benefit the people, while in others it does not.
Not in the US; surely you don't think there's the political will to nationalize the AI infrastructure here. The billionaires control the government through bribery/lobbying and are opposed to welfare. The current "AI czar" has laughed off the idea of a UBI.
China would probably nationalize it, if China gets to AGI before the US does. But still, it's human nature that the powerful will grab whatever power they can; that's how they got powerful. So I wouldn't trust that it would be different in China. It would be nominally nationalized, but in practice controlled by an in-group of a few billionaires and top bureaucrats.
It's the nature of AGI that it removes power from the workers and the people, and concentrates it in the hands of capital owners. Capital owners don't tend to be kind and generous people with open pockets.
No political will now, but there will be if/when large scale job replacement happens and a few big companies vacuum up a large share of GDP. Then it becomes a matter of sovereignty and control. Politicians rather enjoy bringing billionaires to heel when the public supports it.
Rather optimistic. US politicians won't go against the will of their big donors; that would make them lose the next election. Wealth inequality in the US is already at historically unprecedented levels, and big corporations have become less regulated and paid less tax, not more.
I'm still surprised at the complete lack of any push by any major political group, ever, so far as I know, to reclassify marriage as being on the "church" side of the separation between church and state.