The intuition for intelligence explosion is that, at any one point in time, a self-modifying AI is smarter than the AI that designed it, and therefore can improve its design. But that intuition doesn't prove that self-modification doesn't converge asymptotically. The question is how the complexity of an artifact that a brain can design scales with the complexity of the brain.

One approach to this would be to find out how the number of steps it takes to prove or disprove statements in propositional logic of length n, given axioms of length n, scales with n.
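To make the scaling question concrete, here is a minimal sketch: the most naive decision procedure for propositional logic, a brute-force truth-table check, already does work exponential in the number of variables. The nested-tuple formula encoding and the function names below are just illustrative choices, not anything from the comment.

```python
# Brute-force truth-table check: 2^k evaluations for a formula over k variables.
# The ('var'/'not'/'and'/'or'/'implies', ...) encoding is a hypothetical convenience.
from itertools import product

def evaluate(formula, assignment):
    """Evaluate ('var', name), ('not', f), ('and', f, g), ('or', f, g), ('implies', f, g)."""
    op = formula[0]
    if op == 'var':
        return assignment[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], assignment)
    if op == 'and':
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == 'or':
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == 'implies':
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    """Collect the variable names appearing in a formula."""
    if formula[0] == 'var':
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def is_tautology(formula):
    """Check all 2^k truth assignments over the formula's k variables."""
    names = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

# Modus ponens, (p and (p -> q)) -> q, verified by checking 2^2 = 4 assignments.
mp = ('implies',
      ('and', ('var', 'p'), ('implies', ('var', 'p'), ('var', 'q'))),
      ('var', 'q'))
print(is_tautology(mp))  # True
```

Smarter proof systems can sometimes do far better than this; whether some system always admits short proofs is exactly the kind of proof-complexity question being asked here.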

Well, the new experimental techniques lead to new scientific discoveries; that closes the loop.

I skimmed the paper and it doesn't seem to me that it supports that claim.

It seems to be an economic model in which one of the inputs to production is "knowledge capital", but no assumption is made about how this "knowledge capital" changes over time, other than that it is monotonically non-decreasing.

In particular there is no claim about increasing returns.

"Another key exception is at the largest scale of aggregation — the net effect of on average improving all the little things in the world is usually to make it easier for the world as a whole to improve all those little things. For humans this effect seems to have been remarkably robust."

This seems like more than just another exception. The others are examples of rare explosions, which ultimately show diminishing returns; this would seem to be an outright failure of the law of diminishing returns.

Does this "robust" effect have a name? I don't grasp what phenomenon it refers to. Seemingly, if you improve all the little things in the world, you will find it harder to improve them further, having gathered the low-lying fruit with respect to improvements. A case has even been made that this has actually occurred with respect to technology.

Could someone provide further explanation of what's meant by this robust effect, an example, a name, or a link?

It's possible you're right, but the plausibility of that argument is precisely what Hanson and Yudkowsky disagree about. Falling back on "the point should be clear" qualifies as begging the question (though to avoid confusion we should say "assuming the conclusion").

"Selective breeding, bionics and genetic manipulation CAN produce superminds inside human skulls one day."

The existence of something doesn't prove it can be engineered. Why assume we can duplicate flukes?

Selective breeding, bionics, and genetic manipulation CAN produce superminds inside human skulls one day. There already exist rare individuals who are extremely creative and good at problem solving (Gauss, Euler, Newton, Einstein), or who can remember what they had for breakfast 20 years ago, or calculate 3645.24 / 841.3 in under a second. So we know these abilities are possible even in naturally born human brains, and unless we radically reform our societies they will one day be something billionaires can buy and use to enslave the average-IQ peons.

Whether it's the stuff I described here, EMs or bottom-up AI, the future is very bleak for us peons unless we change things while we still can.

"As for the second point: obviously yes, there will be diminishing returns and an asymptote - eventually. Simply saying that an asymptote exists doesn't tell you *where* it exists, or whether it's at a high enough level to be dangerous. For this point to carry you'd need an argument for why the fundamental limits of intelligence are close enough to human as to not be a concern."

It's correct, I think, that it's implausible that the fundamental limits of intelligence are close to the human level. For one thing, humans were once probably more intelligent than they are now. The relevant contention applies specifically to machine intelligence and involves rejecting the assumption that its limit comes anywhere near actual human cross-domain intelligence.

The asymptote argument undercuts the main reason it's assumed that super-intelligence will (eventually) exist.

The argument that's undercut is this: the physicalist premise that the mind consists solely of the brain's information processing entails the conclusion that constructing (or copying) a mind at least as intelligent as those the smartest humans possess is inevitable.

If we lack rudimentary knowledge of where technical progress in AI will asymptote, we have no grounds for assigning a substantial probability to the asymptote's approaching the level of human (cross-domain) intelligence. This doesn't rule out the possibility that someone will make (or even has made) a specific argument showing that AI will asymptote late. (This mere possibility does justify a nonzero probability that it will.)

The asymptote argument reverses the accustomed burden of proof in transhumanist discussions, where it is held that one is entitled to the foregone conclusion that continued technical progress plus the metaphysical possibility of a machine mind entails eventual super AI, or at the least human-level cross-domain AI.

(Thus there are arguments, as between Hanson and Yudkowsky, about whether human-level machine intelligence will first take the form of copying or of constructing, with the common assumption that it will exist eventually.)

Not only do humans not have access to their own source code (so they can't make themselves much smarter), they also have to rely on chance to produce equally, or more, intelligent offspring, have to wait years to find out whether that offspring is smart, then have to spend years teaching it, and are then still limited by the human lifespan and the limits of natural human biology.

An AI could literally expand its own "brain" for millennia.

It seems to me that it is often not intelligence that is lacking but rather data and machines. E.g., we know how animals convert the chemical energy in sugar into electricity at 98% efficiency, but we cannot match that efficiency because we do not have the ability to work at the atomic level. The point being that even great intelligence has its limits. Some of the people who thought the world was flat were plenty intelligent.

I don't know, this seems like a fairly superficial criticism of the intelligence explosion hypothesis to me. In regard to the first point, the obvious difference between high-IQ individuals and AIs is that humans don't have access to their own source code. At least in terms of intelligence amplification, that makes all the difference. As for the second point: obviously yes, there will be diminishing returns and an asymptote - eventually. Simply saying that an asymptote exists doesn't tell you *where* it exists, or whether it's at a high enough level to be dangerous. For this point to carry you'd need an argument for why the fundamental limits of intelligence are close enough to human as to not be a concern.

As for why value explosions are so rare, that seems thermodynamic at heart. Disordered systems tend to be less valuable, and most processes increase disorder, so a typical explosion will decrease value. Exceptions will therefore be due to processes that create order - look closely at any value explosion and I suspect you'll find at the source some kind of negentropy pump (i.e., evolution or intelligence). Of course, that's not a very predictive model (plenty of intelligent processes don't lead to value explosions), but I think it gets to the heart of why Eliezer considers the intelligence explosion unique and is so concerned about it.

"How could you know that the second independent clause is true?"

It's possible and highly likely given the course of human history. Even small differences in intelligence or technology have led to devastating results over and over again. There's always the possibility of the future being different, but I wouldn't bet on it in Robin's eat-or-be-eaten ultra-capitalist dystopia.

"Moreover, how do you and Robin know that whatever technology would serve to copy brain connections won't hit a wall long before it would enable ems?"

EMs aren't the only way; I talked about upgrading the human brain (research suggests savant-like abilities can be unlocked in every brain and that the human memory system can be improved). And once you're at the point where a computer can run an EM, you're also at the point where you can build an artificial mind from scratch.

It's a figure of speech, but the point should be clear: there are many situations where gaining an advantage can enable one to decimate the competition before they've caught up.

I think you have an inaccurate conception of Nazi Germany and the potency of the earliest atomic arsenals.

"Sure, at some point there can be no further progress, but long before then the superminds will have enslaved or eradicated everyone else."

How could you know that the second independent clause is true?

Moreover, how do you and Robin know that whatever technology would serve to copy brain connections won't hit a wall long before it would enable ems?

[We seem disposed to deny such walls, even when we know what they are. Aren't quantum indeterminacy and probabilism "merely" an absolute wall to the knowledge project? The "interpretations" of quantum mechanics seem to express our need to interpret lack of epistemic access as ontological.]

Cambias' example isn't directly analogous to your house-burning example. Those gunpowder empires didn't completely destroy their non-gunpowder conquests, but most often converted them to some derivative of their civilization. Debates about the merits of colonialism notwithstanding, many of the European conquests saw large rises in living standards associated with the import of European technology and institutions. The guns only destroyed inefficient regimes that were acting as barriers to advancement.

I'll posit a similar model about intelligence explosions. Most matter readily available to humans is "dumb." There are only about 18 billion pounds of truly smart matter on the planet (6 billion people * 3-pound brains). Even semiconductors are mostly dumb, as there are large classes of cognitive problems that they're currently incapable of dealing with. There is no current way to increase the amount of smart matter except very inefficient standard human reproduction.

AGI unlocks vast amounts of matter that can be made smart. Even at very inefficient intelligence densities, turning even a small fraction of all this dumb matter smart increases aggregate intelligence by at least several orders of magnitude. It doesn't represent an equilibrium shift, but a phase transition.
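As a back-of-the-envelope sketch of that arithmetic: only the population and brain-mass figures below come from the comment; the amount of convertible matter and the intelligence-density penalty are arbitrary placeholder assumptions, so the resulting multiplier is illustrative rather than an estimate.

```python
# Back-of-the-envelope sketch of the "phase transition" arithmetic.
# From the comment: 6 billion people x 3-pound brains of "smart matter".
# The other two parameters are arbitrary assumptions, for illustration only.
LB_TO_KG = 0.4536

people = 6e9
brain_mass_kg = 3 * LB_TO_KG                 # ~1.4 kg per brain
smart_matter_kg = people * brain_mass_kg     # ~8e9 kg, the comment's "18 billion pounds"

convertible_matter_kg = 1e16                 # assumption: matter an AGI could eventually repurpose
relative_intelligence_density = 1e-3         # assumption: converted matter is 1000x less efficient per kg

multiplier = (convertible_matter_kg * relative_intelligence_density) / smart_matter_kg
print(f"Smart matter today: {smart_matter_kg:.1e} kg")
print(f"Aggregate intelligence multiplier under these assumptions: ~{multiplier:.0f}x")
```

The point is not the particular number but that, for almost any non-trivial choice of those two assumed parameters, the result looks like a change of regime rather than a marginal improvement.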
