Back in the dot-com boom of the late 90s, many said a new economy loomed that invalidated all the old rules, and would soon cause big fast change. Economists or business experts who said otherwise just “didn’t get it”.
Just before the year 2000 crash, economists Carl Shapiro and Hal Varian published an excellent book, Information Rules, wherein they showed how old econ principles explained much of the new economy. Other economists pointed out that we had seen many prior “general purpose technologies” (GPTs) and understood how they took decades to have large wide impact. Extracting value from GPTs like steam, electricity, and PCs had required reorganizing processes and creating complementary capital.
These old school economists were basically right. While the internet did eventually have big impacts, it took a lot longer than enthusiasts said, and mostly took the forms that those old school experts suggested.
Since 1900, the world has now seen four roughly synched cycles of boom then bust of both the S&P500 and enthusiasm for automation, stock booms ending roughly ’29, ’73, ’00, and soon. I left grad school in ’84 to join the second to last boom, as an AI researcher til ’93, gullibly swallowing the hype then. Now I’m older and wiser and watching this current boom from the side. But I feel pretty confident that this one will play out like the others, and that its GPT will take decades to have its big impact. While I’m not personally tracking the tech details as much as I did last time, that isn’t actually needed to see the big picture here.
The big picture is that it will take decades for AI to have big econ impact. Yes, even today millions are willing to pay roughly $10/mo, and many personal hours, to use LLMs. The question here is about vastly larger impact, for example enough to see noticeable effects on key GDP growth or workforce participation stats.
As GPTs are general, it is hard to predict details of their future applications. But you shouldn’t be trying to use those details; look instead for robust predictions and policies that depend less on such details.
For example, if you don’t have special expert insight, make flexible robust choices, not tied to specific scenarios. If there’s a risk of many losing their jobs all at once, set up robust insurance against that risk. If there’s a (small) risk of damage from self-improving AI agents, add extra legal liability in cases closer to that scenario. And to watch for AI maybe having huge impacts, track the key stats mentioned above, and factors that predict automation.
Now there are some today who know about GPT econ, yet still claim AI will have big fast impact. For example, Andrew McAfee:
Previous general-purpose technologies like the steam engine and electrification have brought their changes over decades. However, we anticipate that generative AI’s effects will be felt more quickly due to its ability to diffuse quickly via the internet and its ease of use owing to its natural language interface.
But successfully adding econ value via AI requires lots more than using language and being available on the internet! Many have reported finding this task to be quite hard.
My view here is just common sense, using standard econ. But you won’t hear it much in MSM, social media, or keynote speeches. Those rewards go more to those who make more dramatic forecasts. As they have for centuries.
Yes, we will likely eventually have transformative AI, though we might not get there before an innovation pause due to population fall. We might want to think about that in advance, like I did in my book Age of Em. Though using a sacred-AGI concept to do so is not realistic. Realize that AIs will eventually be our mind children, and for a best shot at fixing cultural drift they should be free to face Malthusian wages and evolve their own cultures.
Another way to look at this is just the markets. OpenAI is worth $300B, and the rest of the private AI companies are worth maybe $200B combined. Plus some of the Google, Nvidia, etc. market caps can be ascribed to AI. This all adds up to: AI is expected to be a big deal, but not “industrial revolution packed into 10 years” big.
With most major innovations we tend to overestimate the short-term impact and underestimate the long-term impact.
There is the long, slow work of incorporating a new technology into how people (and businesses) function. One common example is the interstate highway system, which was created in the 50s and 60s but didn't have its full effect on logistics until the 80s and 90s. At some point AI might be smart enough to do some of that process re-engineering itself, but that's a long way off.