I just added to the post.
>Which is about now if that consensus happened 3 years ago. Or in about 3 years if that happened one year ago.
The level of coding ability AI demonstrates right now is dramatically different from where it was three *months* ago, let alone three years. A year ago it was still possible to believe (as many software engineers did) that AI would not greatly change the software development process soon. Now that's mostly no longer a defensible position to hold. Can you clarify what you're seeing as a consensus?
I'm not claiming I know what is the current consensus. That's why I made conditional claims.
That's fair.
2nd this. December 2025 is roughly when it crossed the threshold for “this changes coding forever”. So we are 4 months into this era, not 3 years.
It is not dramatically different than 3 months ago. People have been effectively using AI for code for years. That continues to be the case.
These types of statements are massively overblown in my opinion. What's changed in the last 3 months is mostly the collective awareness of AI's coding ability, which has increased steadily over the past few years.
The increase in fundamental coding ability has been fairly steady, yes. The awareness of the increase and current state, less so. This is true.
The impact of that increase on people's workflows, the size of the set of actual tasks that can be reliably delegated, the level of human expertise needed to make effective use of AI capabilities - these have not been nearly as smooth. There are thresholds across which small jumps in absolute ability (or ability per unit cost) create large jumps in usefulness.
The first was probably something like "It takes less time to ask AI than to do it myself," which I would agree started, for some tasks, in 2023, if you knew enough about AI at the time to know which tasks those were.
The last threshold, which we obviously have not crossed, could be something like the combination of "There is no task at which a human can do a better job than AI" and "There is no task for which a human can do an acceptably good job cheaper than AI."
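To make the threshold point concrete: delegation pays off only once AI time plus human review time drops below doing the task yourself, so a small ability gain near that crossover flips the decision entirely. A toy calculation (all numbers hypothetical):

```python
# Toy model: delegation pays off only when AI time + review time
# beats doing the task yourself. All numbers are hypothetical.

def worth_delegating(human_minutes, ai_minutes, review_minutes):
    """True if asking the AI is faster than doing the task yourself."""
    return ai_minutes + review_minutes < human_minutes

# A 30-minute task with 12 minutes of human review overhead.
# A modest improvement (20 -> 17 AI-minutes) crosses the threshold:
print(worth_delegating(30, 20, 12))  # False (32 > 30: not worth it)
print(worth_delegating(30, 17, 12))  # True  (29 < 30: worth it)
```

A 15% speedup in the AI produces a 0%-to-100% jump in usefulness for this task, which is the discontinuity described above.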
Has there been overblown hype? Absolutely. But I read most of it as people trying to communicate across a massive gap in awareness and understanding, just like a lot of other hype and a lot of other bad reporting.
Fair point! Adoption of AI tools for coding is probably not a linear function of their coding ability.
AI could also increase software costs for many firms. If it outcompetes other use cases for hardware, it might become more expensive to run any given piece of software. And if AI makes cyberattacks much easier, firms have to invest more in security.
Pushing up the price of hardware would be a sign of huge value being achieved in other hardware applications, which should go along with big growth in the total scale of software applications.
In another comment of yours in this thread you distinguish between "speculative spending and investment on AI" and "actually adding value to customers". It seems to me that the same applies here: Hardware prices might get pushed up by speculative spending that does not add value to society.
Why not just look at revenues from AI services? That seems more direct than comparing some second-order effect like impact on the total software industry.
Anthropic and OpenAI both have revenues rising very quickly, although they also have extremely high valuations, so it isn't obvious to me whether they will or won't pay off. Google's revenues are tied into search, so it's a lot less clear how to separate out how much of their revenue is AI-driven, but you could estimate a %.
We've seen a lot of speculative spending and investment on AI. I want to distinguish that from actually adding value to customers.
The EMH suggests that whatever investments are being made in publicly-listed companies are, on average, correct. Though the welfare economics of whether customers are actually benefiting would be more complicated than just looking at sales.
Robin, the software scoreboard is testable and worth tracking — that’s the strongest part. But the frame is too narrow.
The dividing line isn’t knowledge work vs. other work. It’s screen-mediated life vs. physical life. Someone spending eight hours in documents, email, and browsers is already being reshaped by current AI. Someone driving, building, cooking, performing, or sitting in a concert hall is barely touched except at the paperwork edges.
Your cost-reduction frame misses the bigger signal: demand creation. I spent 25 years not writing — not because it was expensive, but because the interface between thinking and output was broken. ChatGPT didn’t make my writing cheaper. It made it possible. That doesn’t show up on any software spending chart.
The uncomfortable part: the people most exposed to displacement are the ones producing the commentary about exposure. An economist writing about whether AI justifies its investment is writing about whether AI justifies him. The observer is inside the blast radius, describing it as a breeze.
> many expect a crash soon
The Efficient Markets Hypothesis says crashes in speculative markets can't be predicted in advance. I know you've deviated from it on the basis of personally knowing about AI, but my recollection is that your bets against it did not pay off https://x.com/TeaGeeGeePea/status/1908229825422110733
For the purpose of making use of AI, most existing firms can't manage it. It will take new firms to realize the promise, and that kind of capital allocation and firm formation and growth takes time.
A big impact on software engineering within 3 years sounds plausible to me. However, we need metrics. Otherwise we won't know if it has happened or not.
Total software spending or employment.
That's a reference to the rise of agentic workflows. The shift from a chatbot interface to one where you give agents tasks and they read your project files, write code, run tests and iterate has been pretty big - and it is quite a recent development.
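The loop described above can be sketched as follows. This is a hypothetical skeleton, not any product's actual API: `propose_patch` stands in for a model call, and `run_tests` for a real test harness.

```python
# Hypothetical skeleton of an agentic coding loop: the agent reads the
# project files, proposes an edit, runs the tests, and iterates until
# they pass. `propose_patch` is a stand-in for an LLM call.

def run_tests(files):
    # Placeholder test harness: pretend tests pass once the bug is gone.
    return "BUG" not in files.get("app.py", "")

def propose_patch(files, last_failure):
    # Placeholder for a model call; here it just removes the bug marker.
    return {"app.py": files["app.py"].replace("BUG", "")}

def agent_loop(files, max_iters=5):
    for _ in range(max_iters):
        if run_tests(files):
            return files  # done: tests pass
        files = propose_patch(files, last_failure="tests failed")
    raise RuntimeError("gave up after max_iters")

project = {"app.py": "def add(a, b): return a + b  # BUG"}
fixed = agent_loop(project)
```

The structural shift from chat is the inner loop: the human sets the goal once, and the read-edit-test cycle runs without a human turn in between.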
> So we should expect an even faster growth in software spending if AI is in fact causing a big increase in the rate at which costs fall.
The big 3 AI bottlenecks right now -- chips, power, datacenters -- are likely to drive up the cost of AI over the next few years (if adoption rates continue). Also, with algorithmic improvements and perhaps continual learning on the way (add 10X compute, conservatively), it seems likely that coming AI will be capable of far more than simple software development, the same way 1 high-IQ worker can deliver far more than 100 average-IQ workers. So I tend to think companies will ration expensive AI for the most difficult and challenging problems, rather than for just any software development, at least until the bottlenecks are solved.
In that scenario, we would see software spending holding or dropping but AI ability dramatically improving.
I disagree. In that scenario we should see huge increases in software spending.
The historical analogy here would be GPUs and crypto miners. GPUs were supposed to be used for gaming but got scarce and high-priced in that area because they could do more valuable work mining bitcoin. Perhaps today we expect AI to be used ideally for software design, but if instead it becomes far more valuable at AI research, chip design, drug discovery, or other major engineering problems, AI for software becomes scarce and high-priced. So companies go back to paying humans to do software.
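The rationing logic in the GPU analogy is just opportunity cost: if a unit of scarce compute earns more in another application, it gets bid away from software work. A toy allocation, with made-up values per GPU-hour:

```python
# Toy opportunity-cost allocation of scarce compute across uses.
# Dollar values per GPU-hour are hypothetical, for illustration only.

value_per_gpu_hour = {
    "software development": 40,
    "AI research": 120,
    "drug discovery": 90,
    "chip design": 75,
}
capacity = 2  # scarce units of compute; each is fully absorbed by one use

# Capacity goes to the highest-value uses first; software is crowded out.
allocated = sorted(value_per_gpu_hour,
                   key=value_per_gpu_hour.get,
                   reverse=True)[:capacity]
print(allocated)  # ['AI research', 'drug discovery']
```

Under this (hypothetical) ranking, software development only gets compute once capacity expands or the higher-value uses are saturated, which is the "go back to paying humans" scenario.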
I have trouble understanding the macro implications of productivity changes in a lot of places. You have the supply-side effect of AI on code/writing making it easier to write Substack posts, but also less demand, due to substitution: people talk to AI more, or use AI for a task rather than using software.