32 Comments
Kevin:

Another way to look at this is just the markets. OpenAI is worth $300B; the rest of the private AI companies are worth maybe $200B combined. Plus some of the Google, Nvidia, etc. market caps can be ascribed to AI. This all adds up to: AI is expected to be a big deal, but not "industrial revolution packed into the next 10 years" big.

Jack:

The valuations IMHO are heavily discounted by the fact that nobody knows what sustainable advantage could look like in AI. OpenAI, Google, DeepSeek, Meta, xAI – they're all scrambling for "model of the week" status at this point.

In the dotcom era it was easier to predict what sustainable advantage would look like because it was building on existing network economies with low marginal cost, like software and telecom. In AI the marginal cost is high and owning standards doesn't really seem to apply.

John Wittle:

every time I try to think about how I might bet on "industrial revolution, packed into the next 10 years" outcomes, it's hard to tease that outcome apart from "history of life from prokaryote to evolution of generally intelligent protein-brains, packed into the next ten years"

any scenario that looks like the first might suddenly turn out to be the second, at least when we're talking about such low-probability high-variance events like this in the first place

and that makes it hard to tell exactly what rational market behavior would look like in the event that either of those scenarios is, in some sense, the 'correct' prediction

i'm not sure that ~$0.5T is incompatible with those possibilities... in my mind, it's like, all of the really strange high-variance outcomes just kinda blend together, and the fact that some of them obviate the idea of human markets at all makes it so hard to reason about!

Robin Hanson:

Surely such an event would influence a great many market prices. The odds that prices appropriate for "nothing happens" are also appropriate for crazy change seem very low.

David J Higgs:

Edit: forgot to mention something really important: scaling up compute to continue the current rate of AI progress is *really* expensive. Many tens of billions of dollars are priced in as a near-certainty, and compute/training clusters costing >$100B are quite likely within 3-4 years. Company valuations are based on profits that require exceptionally high revenue on top of these capital costs. And ongoing inference/electricity costs will be substantial even for serving finished models.

Higher risk and longer time horizons before the majority of growth both lower prices relative to literal expected-value calculations. Betting on which companies will use which approaches to build AGI-like systems, on what exact timescale, is clearly very high-risk. It's definitely a good argument against the super-bullish AI predictions (like 2-3 years till transformative impact) being all that likely to be correct, though.

I think that a model along these lines is reasonable: "investors expect that somewhere between (inclusive) GPT-5 and GPT-6 levels of compute (~2e27-2e29 FLOPs), which are priced in to be developed by default, we will see impressive results but with very high expected variance. They might not deliver enough value to justify current prices (a.k.a. we are in somewhat of an AI bubble), but they might also deliver quite a bit more as progress further accelerates towards AGI. A few/many current AI/compute companies will likely, but are not guaranteed to, capture significant amounts of the resulting value, and it could easily be something like a government-private partnership in the US or China that pushes things over the edge."

Or another way of putting it is that there's a significant probability of a bubble, with true value being substantial but not enough to justify current numbers, and another significant probability of current paradigms scaling to AGI by the early 2030s, with much higher upside than current prices suggest. The discrepancy is largely a result of risk intolerance, but also a result of the EMH being narrower than a crystal ball in regards to relatively rare and unprecedented events. And of course there are other risk factors than direct AI progress, like the substantial likelihood of imminent recession, a Taiwan embargo/invasion, the Trump administration in general, etc.

Jack:

With most major innovations we tend to overestimate the short-term impact and underestimate the long-term impact.

There is the long, slow work of incorporating a new technology into how people (and businesses) function. One common example is the interstate highway system, which was created in the 50s and 60s but didn't have its full effect on logistics until the 80s and 90s. At some point AI might be smart enough to do some of that process re-engineering itself, but that's a long way off.

Peter McCluskey:

It sounds like you're mostly using the right theories.

But I don't see those theories as justifying strong claims about rates of innovation.

I'm forecasting faster change than you, apparently because I'm more focused on empirical measures of trends that seem relevant. My personal experience with recent AIs suggests that they're not quite transformative yet, but are improving at a rate that feels at least 5 times faster than internet technology was improving in the 90s.

Robin Hanson:

But rates of change in your personal experience of them just aren't the relevant rates. If you were using them to change big parts of our economy, that would be more relevant.

Amit:

The title successfully confused me; if that was on purpose, good job.

Robin Hanson:

Not on purpose.

Jack:

The title made me wonder for the first time if OpenAI intentionally chose GPT (= generative pre-trained transformer) to allude to GPT = general purpose technology. We need someone in OpenAI marketing to weigh in.

ZS:

Thanks Robin. Comparing artificial intelligence to the internet is misleading. The internet was a new way to communicate; AI by the end of this decade is likely to be a cheap substitute for human brains. That's going to be much more disruptive than any historic innovation. You do need to look at the details here; comparing one innovation to another doesn't always work.

Robin Hanson:

Each of the prior booms of excitement about automation ALSO promised cheap substitutes for human brains.

ZS:

Hmm, I can't agree with that. Nobody said that the steam engine or the internet would replace human brains. The steam engine replaced human muscles, the internet replaced some human communication. Artificial Intelligence by definition is literally a replacement for human brains.

Tim Tyler:

Yes, though computers are set to replace human brains. Computers have been around for 200 years. Machine intelligence - maybe for 70 years. These technologies have indeed taken quite a while to become transformative.

warty dog:

typo: "r keynote" (should be "or")

surely "smart-human level AGI + cheap robot bodies" is a GPT that would have quick economic impact. it's coming any time now

Robin Hanson:

Not at all sure.

GenXSimp:

A couple of things. Impacts are coming faster. The web took a while, but the app store did not. Uber, then Uber for everything. Within 5 years everyone had a phone with apps, and apps were changing our lives. I think AI is moving faster than apps did. Slower than most forecasts, but there is a general acceleration. I think voice is the UI for GPTs, and most folks haven't tried the voice version, but they will soon, and the impact will be felt.

Robin Hanson:

Economic growth isn't higher on average now, so I'm skeptical that overall rates of innovative change are faster.

GenXSimp:

That is a noisy measure! From 2010 to 2020, US population grew 7%, but real GDP grew 21%. So, lots of stuff going on, but thinking like Solow, most growth comes from higher productivity, i.e. tech. GDP staying on trend therefore implies growth from tech. So maybe AI will just keep us on trend, while the economy would otherwise be shrinking.
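As a back-of-envelope check on the figures quoted above (a sketch of the growth-accounting intuition, not a full Solow decomposition with capital and labor shares):

```python
# Rough growth accounting using the figures quoted above
# (2010-2020: US population +7%, real GDP +21%). Growth not
# explained by population is attributed here to productivity/tech.
pop_growth = 0.07   # population growth over the decade
gdp_growth = 0.21   # real GDP growth over the decade

# Per-capita real GDP growth: ratio of the growth factors, minus one
per_capita_growth = (1 + gdp_growth) / (1 + pop_growth) - 1
print(f"Per-capita real GDP growth 2010-2020: {per_capita_growth:.1%}")
# Roughly 13% over the decade is left to attribute to productivity
# rather than population, on this crude accounting.
```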

Jeremiah England:

> Yes, we will likely eventually have transformative AI, though we might not get there before an innovation pause due to population fall.

Have you updated much on how likely an innovation pause is since you were writing about it last year?

Charlie:

Trying to read the ai-2027.com forecast incorporating this post. It seems that even granting they are right about compute & takeoff & superintelligence, incorporating AI into national security, factories, etc. is just slower than they model, even for a superintelligent AI. Their forecast doesn't incorporate enough standard econ and history on deploying new tech.

Of course their other assumptions might be off too. Other posts of yours seem to disagree with their forecast of takeoff timelines and how much an LLM can improve itself.

Alvin Ånestrand:

How large and fast an impact would superintelligence bring, in your view?

Also, some changes may be much faster than applying AIs throughout the economy, like geopolitical circumstances when AIs get good enough for military applications like cyberwarfare, but maybe that's beside your point.

I hope that you're right about self-improvement risks being small, though I don't understand why. Do you think this won't happen, or that it will be done responsibly?

Phil Getts:

Other than AI, tech change, measured as a fraction (the fraction of everyday tech that's new and radically different), may be slower in the 21st century than it was in Europe in any of the past ten centuries. If things change over the next 10 years even as much as they did from 1920 to 1930, it will shock people.

ZS:

Robin maybe you'd like to offer a wager about the unemployment rate or the economic impact of AI at the end of this decade.

Robin Hanson:

I'd bet that they won't go far out of the usual ranges.

Christopher F. Hansen:

Good post.

About 15 years ago you had the "AI foom debate" with EY. You described the major locus of that debate as being whether classic econ models or new bespoke models, often predicting new and wild changes, would better describe the future course of AI. In my opinion, the course of AI since then has been better described by classic econ models.

Of course, there could soon be an "inflection point" where classic econ models are suddenly obsoleted, and many continue to predict this. 15 years later (and 60+ years since people began to loudly worry about the prospect of superhuman AI), there hasn't been much progress on determining how likely this is.

Robin Hanson:

Classic econ models are quite capable of modeling big dramatic changes to the economy. So there are two questions: will such a big change happen, and then will standard econ describe it well.

Dave92F1:

AI can do a lot of the (very necessary) adaptation and reorg work itself, so I expect it to go faster than the other GPTs you mention. But probably not as fast as the hype (that's almost always so). Moravec's Mind Children book is still relevant.😄

Robin Hanson:

No, AIs aren't remotely up to the task of reorg work, and won't be anytime soon.

Dave92f1:

I'm not sure if I prefer you to be right about that or not. But thanks for making a clear prediction.

Leo Guinan:

I agree that previous GPTs had slow rollout, but there's a huge difference here.

It's that this technology allows us to re-define what is valuable.

I outlined it in my research here: https://www.buildinpublicuniversity.com/the-memetic-foundation-of-human-value-a-new-economic-paradigm/

We are seeing weirdness because of memetic patterns spreading at different speeds and causing destructive interference in our ability to understand the information we are currently bombarded with.

Fix the incentives, and the technology spread will optimize itself because the risk will be properly accounted for.
