
As said below, I don't think the doubling can continue forever, and just because it's been steady for the past decades doesn't mean it's going to be steady for 20 or 30 more years. If the innovations you mention below fall through on delivering the expected effectiveness, we will see a sudden slowdown in computational growth.

I also don't think it's clear that, above some threshold, the value of a calculation per second can't fall by a factor of two every year. Here, a lot depends on how much computation can make up for physical or other hard limits in the economy. If better and better ways to use material and energy are unlockable by faster computation, then the value of computation may not decrease. But if it only forces people, corporations, and nations into information-theoretic zero-sum games without solving the harder limits on the economy, then beyond that threshold the value may indeed fall by more than a factor of two per year, even if that is counterintuitive.

As for the Fukushima robots you mention below, they could also be handled by humans with remote control, but I'm sure there are many applications where faster algorithms are a marginal improvement. I just don't think they will necessarily lead to a singularity-type event without diminishing returns setting in before we reach a point where we no longer recognize the human condition, etc.

As for the historical observation that the economic value of human brains has increased: yes, but that may not be projectable onto the future. For example, over that period we developed ways to use uranium and fossil fuels that our ancestors didn't have. But those are limited physical resources, and there is no certainty that we can repeat this many times into the future. (We also used up other natural capital.)

And ultimately, of course, there's the question of desirability. Let's say ems are invented. Then we are suddenly back to a low per capita GDP, because Malthusianism applies again. The same goes for any equivalent "innovation" in reproduction rates. And given the zero-sum nature of politics and various ideologies, it can be argued that no amount of wealth will ever make people truly happy and content. We could all be billionaires in a material sense, and the world could still be very shitty, without good ways of resolving these conflicts and the general misery of the human condition. :-)


"Do we have any good reason to think it will continue 20 to 30 more years? Most of the gains from miniaturization have already been made."

Well, it's been pretty steady for the past ~70 years. Also, we haven't really developed the third dimension in chips. And neuromorphic (massively parallel, like the human brain) chips are just being developed.

Neuromorphic chips

"Of course, even if a "brain equiv" chip costs more than a brain, it might be programmed better to do the range of things humans are bored to do."Or things that humans are justifiably afraid to do. Imagine how much money could have been saved if robots had swarmed onto the Fukushima Diaiichi plant, and provided seawater such that none of the explosions would have occurred.


"Perhaps the marginal value of calculations per second decreases with supply?"

Yes, perhaps it does. If it did so rapidly enough, the rate of decrease in marginal value would be more powerful than the rate of increase in the number of calculations per second, and the world economy wouldn't grow dramatically, as my crude model predicts.

However, we have some indirect experience about whether the marginal value of calculations per second decreases. Assuming that the calculations per second of human brains have been pretty constant over the last 200 years, we have seen the marginal value of humans *increase,* rather than decrease. That is, GDP per capita has risen, rather than fallen.

Also, the total number of calculations per second performed by computers worldwide appears to be more than doubling every year (because speed per dollar is roughly doubling, and the dollars spent are staying equal or increasing). It just seems intuitive to me that the value of a calculation per second can't fall by a factor of two every year.
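A toy sketch of that race, with purely hypothetical decay rates: if supply doubles yearly, total compute value keeps growing unless the value per calculation falls by a full factor of two per year.

```python
# Toy illustration: worldwide compute supply doubles each year while
# the value of a calculation per second decays by some factor.
# Both rates here are hypothetical, chosen only to show the race.

def total_value(years, supply_growth=2.0, value_decay=0.9):
    """Relative total value of world compute after `years` years."""
    supply, value_per_calc = 1.0, 1.0
    for _ in range(years):
        supply *= supply_growth
        value_per_calc *= value_decay
    return supply * value_per_calc

print(total_value(10, value_decay=0.9))  # value falls 10%/yr -> ~357x total
print(total_value(10, value_decay=0.5))  # value halves yearly -> flat (1.0)
```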


Imabsa is right: it can't continue forever. The Bekenstein bound will make sure Moore's Law ends before too long.
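For scale, a back-of-envelope sketch of that bound; the 1 kg mass and 10 cm radius of the hypothetical computer are my illustrative assumptions.

```python
import math

# Bekenstein bound on the information in a sphere,
# I <= 2*pi*R*E / (hbar*c*ln 2), for an assumed 1 kg,
# 10 cm-radius computer with E = m*c^2.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m, R = 1.0, 0.1          # assumed mass (kg) and radius (m)

bits = 2 * math.pi * R * (m * c**2) / (hbar * c * math.log(2))
print(f"{bits:.1e} bits")  # ~2.6e42 bits: vastly beyond today's chips,
                           # but finite, so the doubling must end someday
```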

The question is where it will end. Do we have any good reason to think it will continue 20 to 30 more years? Most of the gains from miniaturization have already been made.

Of course, even if a "brain equiv" chip costs more than a brain, it might be programmed better to do the range of things humans are bored to do.


Perhaps the marginal value of calculations per second decreases with supply? The thing about human brains is that they are also consumers and therefore create demand for services and products. But if the number of humans stays steady and we just get swamped in very fast, very clever algorithms that do useful things, we will be limited by the raw material available, the light speed barrier, and the attention span we can give to the new goodies. Perhaps we will then decide to make more human-like minds, perhaps simulate them or invent ectogenesis pots to mass-produce new consumers. But why would we even want that?

In effect, we'll have a world where everybody has a lot of raw smartness available, but with physical limits and really no use for it other than "make shinier games for me".


"The trend can't just keep continuing,..."

Well, then William Nordhaus really should have focused on that in his paper. He showed in Figure 1 a trend that was pretty darn steady from 1940 to 2010. If he thought it wasn't going to continue for at least another decade or two, he should have said why the trend of the past ~70 years was likely to stop.

"...there may very well be fundamental physical limits before we have brain-like chips for $1 a piece,..."

A few comments:

1) William Nordhaus estimates the human brain at 1 exaflop. Ray Kurzweil has estimated it at 20 petaflops (0.02 exaflops), a factor of 50 less. Hans Moravec (in 1997) estimated it at 500 teraflops (0.0005 exaflops), a factor of 2,000 less. So there is a fairly wide range of estimates of the calculations per second that constitute "brain-like" (see the sketch after this list).

2) Moravec included memory in his assessment of what is "brain-like." So he not only had a speed of 500 teraflops, but also a memory of 1 billion megabytes (1 petabyte):

See figure labeled "All things great and small"

So even if we had 500 teraflops speed for $1, we'd also need the 1 petabyte of memory (per Moravec). Right now, I think a petabyte of solid state memory would run around $200,000.

3) We'd also need software, of course.
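To see what that spread of estimates means in practice, here's a quick sketch of when Nordhaus's 53%/yr cost decline would put each estimate at the $1000 price point. The flops-per-dollar baseline is my assumed round number for current hardware, not a sourced figure, and the sketch ignores Moravec's memory requirement.

```python
import math

# When does $1000 buy each "brain-like" speed, if flops per dollar
# keep growing at Nordhaus's rate (53%/yr cost decline)?
FLOPS_PER_DOLLAR_NOW = 5e9   # assumed baseline, a round number
GROWTH = 1 / 0.47            # 53%/yr cost decline ~= 2.13x flops/$ per year

estimates = {
    "Moravec (5e14 flops)": 5e14,
    "Kurzweil (2e16 flops)": 2e16,
    "Nordhaus (1e18 flops)": 1e18,
}
for name, flops in estimates.items():
    needed = flops / 1_000   # flops per dollar needed for a $1000 brain
    years = math.log(needed / FLOPS_PER_DOLLAR_NOW) / math.log(GROWTH)
    print(f"{name}: ~{years:.0f} years")   # roughly 6, 11, and 16 years
```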

"I think the solution to it is that Nordhaus doesn't include the information on computing progress in his model either."Yes, I don't understand that. His model really seems to be divorced from progress in computers. In contrast, my model, while extremely crude (i.e., doesn't include memory, and the value of the computations per second is almost pulled out of thin air), at least clearly ties concrete predictions of future computer progress to concrete predictions of future economic growth.


"He also estimates a human brain at 10^18 calculations per second (1 exaflop) and states, "At this rate of increase, supercomputers will reach the upper level of 10^18 flops by 2017."

"But he seems to regard these merely as interesting tidbits, and makes no attempt to extrapolate these trends even 20-30 years into the future. For example, at what point does he think $1000 of computer will get to 10^18 calculations per second? Less than 20 years from now, right? And how about if the trend continues for 30 years...we're talking about a ~$1 computer doing 10^18 calculations per second, right?"

The trend can't just keep continuing; there may very well be fundamental physical limits before we have brain-like chips for $1 apiece. But a $1000 version may very well be possible, and in any case Nordhaus' "model" is a simple extrapolation that doesn't include information on physical limits, so we seem to have a conundrum. I think the solution to it is that Nordhaus doesn't include the information on computing progress in his model either.


I guess you're right about automation; I was thinking of technological progress in general, and how there will be a next penicillin or transistor but also periods of slow change.


That sure isn't how automation influences have played out in the past.


"The two sign predictions that match the data suggest it would take a century or more before growth rates exceed 20% per year. Nordhaus says, 'The conclusion is therefore that the growth Singularity is not near.'”

William Nordhaus presents Figure 1 showing the cost of computing over time, and states, "The costs of a standard computation have declined at an average annual rate of 53% per year over the period 1940-2012."

He also estimates a human brain at 10^18 calculations per second (1 exaflop) and states, "At this rate of increase, supercomputers will reach the upper level of 10^18 flops by 2017."

But he seems to regard these merely as interesting tidbits, and makes no attempt to extrapolate these trends even 20-30 years into the future. For example, at what point does he think $1000 of computer will get to 10^18 calculations per second? Less than 20 years from now, right? And how about if the trend continues for 30 years...we're talking about a ~$1 computer doing 10^18 calculations per second, right?
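To make that extrapolation concrete, here's a minimal sketch. The ~$200 million price for an exaflop machine today is my round-number assumption, not a figure from the paper.

```python
import math

# Apply Nordhaus's 53%/yr cost decline to an assumed ~$200M,
# 10^18-calc/s supercomputer, and ask when it hits $1000, then $1.
COST_NOW = 2e8    # assumed current price of an exaflop machine, $
DECAY = 0.47      # cost multiplier per year (53% annual decline)

def years_until(target_cost):
    return math.log(target_cost / COST_NOW) / math.log(DECAY)

print(f"$1000 exaflop: ~{years_until(1_000):.0f} years")  # ~16 years
print(f"$1 exaflop: ~{years_until(1):.0f} years")         # ~25 years
```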

So why doesn't that have any effect on his model...and why doesn't that tell him his model is probably wrong?

"But it sets a good standard for future efforts. Can anyone find a concrete simple economic model of singularity that better fits the data?"Everything is in the weeds. If one had a pond in which the lilies were doubling in number every year, and the pond was one percent covered, there would be any number of models that would appear to fit the data up to that point. But when the pond was completely covered ~7 years later, it would be clear what model should have been used to predict future coverage.Here's a concrete economic model:1) The average human contributes about $13,000 annually to world GDP.2) A human brain (i.e., without the body) is worth, say, half of that. 3) The 10^18 calculations per second is worth, say, 1/10th of that. 4) So every 10^18th calculations per second is worth...divide by 2, then divide by 10 = call it $1000 per year of GDP.5) If in 20 years, $1000 buys 10^18 calculations per second, and the world is spends $1 trillion to producing those calculations per second, then the world adds $1 trillion to world GDP.6) And if 10 years later, $1 buys 10^18 calculations per second, and the world still spends $1 trillion to produce those calculations per second, then the world adds $1 quadrillion to world GDP. 6) Conclusion: The model says world GDP will be increasing by more than 20% per year within <30 years.I guess I should check on flights to Stockholm... :-)


Imho a model in this case should be based on occasional shocks rather than a continuous rate of change. So we could have huge changes in the near future, but I'm not sure that fits most people's definition of a singularity since the huge changes only last a very short time, don't max out the laws of nature, and in between them are long periods of slow change.
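A toy contrast of the two shapes, with purely illustrative numbers: both paths can deliver similar century-scale growth, but the shock path matches the description above of brief huge changes separated by long stretches of slow change.

```python
# Steady exponential growth vs. occasional shocks with near-stagnation
# in between. The rates, shock timing, and shock size are all
# illustrative assumptions, not estimates.

steady = shocky = 1.0
SHOCK_YEARS = {20, 55, 90}   # assumed timing of three big shocks
for year in range(100):
    steady *= 1.04           # smooth 4%/yr growth
    shocky *= 1.005          # slow change between shocks
    if year in SHOCK_YEARS:
        shocky *= 3.0        # each shock: a one-time tripling

print(f"steady: {steady:.0f}x, shocky: {shocky:.0f}x")  # ~50x vs ~44x
```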
