
See, the problem here is you think you're rational.

As a wise man once said, if you think you're free, you'll never escape.

The truth is that this is all beside the point, because you're wrong to assume that this is the limiting factor here. The reality is that it is very hard to improve something that is already very good or very complicated.

Consider, for instance, a computer program. It would probably be possible to make, say, Starcraft 2 run 20% more efficiently. But how HARD would it be to actually do that?

Making things work vastly better is frequently quite difficult, and the more complicated a thing is, the harder it is to do that.

In other words, increasing intelligence is likely to actually suffer diminishing returns rather than accelerating returns, because every iteration is that much harder than the last one.
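To make the diminishing-returns claim concrete, here is a purely illustrative toy model (the function, parameters, and numbers are mine, not the commenter's): if each successive improvement costs more effort than the last, capability growth flattens out, whereas constant-cost improvements compound exponentially.

```python
# Purely illustrative: compare constant-cost improvements (compounding,
# "accelerating returns") with improvements whose cost rises each iteration
# (the "every iteration is harder than the last" claim above).

def capability_after(total_effort, cost_growth, gain=1.10, first_step_cost=1.0):
    """Apply +10% improvements until the effort budget runs out.

    cost_growth > 1 means each improvement costs that much more than the last.
    """
    capability, step_cost, spent = 1.0, first_step_cost, 0.0
    while spent + step_cost <= total_effort:
        spent += step_cost
        capability *= gain
        step_cost *= cost_growth  # the next iteration is harder
    return capability

for effort in (10, 100, 1000):
    flat = capability_after(effort, cost_growth=1.0)  # same cost every time
    hard = capability_after(effort, cost_growth=1.5)  # rising cost
    print(f"effort {effort:4}: constant cost -> {flat:10.3g}x, rising cost -> {hard:5.2f}x")
```

Under these made-up parameters, the constant-cost world explodes while the rising-cost world crawls; which world we are actually in is exactly what is being argued about here.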


What if there was a drug or some sort of external method which could strengthen the plasticity of the brain? 

My guess is the brain evolved to be only as plastic as it is because that creates more stability in people, which helps (or helped) them survive. For most people, taking a drug that increased their plasticity would be a toss-up between disaster (impulsive, irrational, angry behavior could take over) and a great "betterness" (increased rationality, compassion, etc.).

I would imagine that if such a drug or external method were created, its creators would be very careful about who they gave it to. Let's look at one possibility: what would happen if a benevolent government controlled this drug? Perhaps they would screen for the following traits (the ones I believe are imperative for achieving a happy, meaningful life): compassion, self-discipline, the ability to face fear, a healthy response to failure, and a mind that uses logic and evidence to reach its conclusions. They would probably first send the chosen person through rigorous training to strengthen these positive traits: I am imagining a completely personalized education designed by the greatest educators, scientists, and spiritual teachers. If they then gave this person the drug, I would expect him or her to be able to quickly increase these traits, which would then allow further increases, and so the cycle goes on. This person could become incredibly powerful in a positive way and also help us quickly reach a better understanding of how to improve the process in the next person, perhaps eventually arriving at a true theory of betterness.

This could all be negative as well and if this drug got in the wrong hands it could lead to a manipulative, scary person. 


Computers have Moore's Law only because a whole lot of human beings work really, really hard at keeping it that way. It's a self-fulfilling prophecy. Chip makers anticipate that their rivals will keep up with Moore's Law, and therefore they bust their chops night and day to do the same.

Moore's Law is a quirk of capitalism and expectations, not a physical law.


Why does the computer's unchanging hardware get a pass while the human brain's does not? A computer can be more or less cleverly programmed, but it still faces hard limits on its computational speed and memory capacity.


The reason that human intellectual capacities have been rather stable over time is simple: intelligence was always tied to the physical, living matter of the brain. Its pattern had to be encodable in DNA, compatible with the development cycle of human beings, and the evolutionary advantages had to be worth the extravagant expense of maintaining all that tissue.

Now say that you had the equivalent pattern residing within a computer, and sufficient cheap computing power. Without an overarching theory of intelligence, you could still do a great deal to augment the intelligence there. Add a new batch of neurons here, see if it runs mazes better. Make a copy of the auditory cortex and rewire it to "hear" various data streams. Create pluggable, task-specific memory modules for simple -- or even complex -- tasks. With each change, the software becomes more and more capable.

The point is, once you're untethered from the legacy requirements of the physical brain matter, trial and error no longer takes tens of thousands of years. A general theory of intelligence would speed the process, but wouldn't be necessary.
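A minimal sketch of the blind trial-and-error loop described here, under made-up assumptions (a tiny curve-fitting task stands in for "runs mazes better", and random weight perturbation stands in for adding neurons or rewiring a cortex): mutate the digital brain, and keep the change only if the benchmark score improves.

```python
# Minimal sketch of untethered trial and error: mutate a toy "brain" (a small
# neural network), keep the change only if it does better on a benchmark task.
# The task, network shape, and mutation scheme are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Benchmark task (stand-in for "runs mazes better"): fit y = sin(x).
xs = np.linspace(-3, 3, 64)
target = np.sin(xs)

HIDDEN = 16

def brain_output(weights):
    """One-hidden-layer network; `weights` is a flat parameter vector."""
    w1 = weights[:HIDDEN].reshape(1, HIDDEN)
    b1 = weights[HIDDEN:2 * HIDDEN]
    w2 = weights[2 * HIDDEN:].reshape(HIDDEN, 1)
    return np.tanh(xs[:, None] @ w1 + b1) @ w2

def score(weights):
    # Higher is better: negative mean squared error on the benchmark.
    return -np.mean((brain_output(weights).ravel() - target) ** 2)

weights = rng.normal(scale=0.5, size=3 * HIDDEN)
best = score(weights)
for _ in range(5000):
    # Blind mutation: a stand-in for "add a batch of neurons, rewire a cortex".
    candidate = weights + rng.normal(scale=0.05, size=weights.shape)
    s = score(candidate)
    if s > best:  # keep only changes that measurably help
        weights, best = candidate, s

print(f"benchmark error after trial and error: {-best:.4f}")
```

No theory of how the network works is needed; the loop only needs cheap iterations and a way to measure "better", which is the commenter's point.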


Fortunately, my rationality also helps me be correct about what the best ways to achieve status are.

In any case, that's beside the point. An artificial intelligence has near-perfect neuroplasticity, and if I had near-perfect neuroplasticity, along with a perfect understanding of how my brain worked, you can bet that I would be improving at everything I do a hell of a lot faster than I do now.


"I presume your parents could give you access to your fabrication procedure."Whilst not under-estimating my parents' indispensable catalytic role, they certainly could not provide access to my fabrication procedure -- even comprehensive and precise knowledge of their respective DNA contributions wouldn't suffice for that. Otherwise, what need for the biological (and other relevant developmental) sciences? If they had genetically engineered me from scratch (dissociated nucleotides and egg cell proteins), and meticulously recorded the process, the analogy would work better -- though far from perfectly.Basic to Good's argument is the assumption -- surely not unreasonable? -- that an AI, unlike a human infant (or an Em), will arrive within a culture that has already achieved explicit and detailed understanding of its genesis. If it arises at all, it will be as a technologically reflexive being, adept at its own production. Hence the 'explosive' momentum to self-improvement (= logically specifiable 'betterness' or self-comprehension). AI might be impractical but, if so, that is not due to a problem of elementary conceptualization.


It can't be just a question of hardware: 10^10 brains together manage to produce something that one brain can grasp well enough to live in. Out goes the singularists' singular focus on "orders of magnitude" (cheaper FLOPS).


Huh!? :-) What Robin essentially claimed (and which I disputed) was that someone might posit a general mechanism behind all types of "betterness" -- so whatever it is that makes a better cup of tea, or a better car, or a better planet, or a better girlfriend ... all these examples of "betterness" would have the SAME mechanism behind them, so that cranking up the knob on that mechanism would allow all these different kinds of things to become "better".

He then used this dumbass concept of a "general theory of betterness" as a stick to attack the idea of a "general theory of intelligence", claiming that anyone who argues for the possibility of building intelligence mechanisms that are improvable is being just as stupid as someone who claims that they have found a way to improve all examples of betterness.

The comparison is, of course, ridiculous, because Robin's original suggestion of a general theory of betterness is so incoherent that it is not even a concept, just a string of words. The possibility of mechanisms behind intelligence (mechanisms that support all kinds of intelligent behavior) which can be improved in such a way as to cause a general increase in intelligent performance, is perfectly reasonable. The latter concept is not touched by the quite glaring strawman introduced in this essay.


I see humans having the choice to "go virtual" into the data cloud, in plasma format or who knows what, within 50 years, as the result of an immense jump in technology provided by quantum computing, room-temperature superconductors, and other yet-to-emerge technologies. Further, I don't see religious or other judgment issues being related to this advancement.


You already do, it is called learning and practice.

But you need to learn things that are correct and practice thinking rationally (even (or especially) if you don't like the conclusions that rational thinking leads you to).

Eventually it becomes easy to do. But it then puts you in conflict with those who don't want to think rationally and who don't care whether they are correct; they only want to be believed to be correct. When your rational, correct thinking comes up against their irrational, magical thinking, the outcome depends on status, not on being correct.


I think if I had increased ability to modify my own source code and consciously re-engineer my brain to make it better at thinking rationally, solving problems, etc., then that would result in a sort of betterness explosion.


Re: "Therefore the absence of advanced nanotechnology constitutes an immense blow to the possibility of explosive recursive self-improvement."

Today. We have *some* nanotechnology now - and will have more in the future. Wait a while, and this objection seems likely to become pretty flimsy.


Re: "I argued that the concept of “betterness” is so incoherent and so non-generalizable that the notion of a “comprehensive theory of betterness” is just a semantically empty concept."

It sounds like a denial of progress :-( Bad influences from S. J. Gould?


Robin: well, I give up.

I argued that the concept of "betterness" is so incoherent and so non-generalizable that the notion of a "comprehensive theory of betterness" is just a semantically empty concept. But without addressing my argument, you simply used the empty concept again in the sentence "I say a 'comprehensive theory of intelligence' is pretty much a 'comprehensive theory of betterness'; theories of betterness can exist, but are unlikely to add great power."

There could never be such a thing as a general mechanism for improving "betterness". That notion cannot therefore be used to come to any conclusions about the possible existence of general mechanisms that give rise to intelligence. Non sequitur.


What the would-be self-improving AGI lacks is the pattern of a more intelligent AGI against which it could do pattern recognition, so as to recognize when a change to its own coding is an improvement or a dis-improvement.

In other words, the AGI can only evaluate an intelligence equivalent to its own. It can't tell if a more intelligent agent is more intelligent and sane, or more intelligent and insane.

Humans have this same problem. They don't evaluate advisors on how intelligent they are (because humans lack the ability to evaluate an intelligence greater than their own), they evaluate them on whether or not they tell them what they want to hear. Why did the Bush administration think there were WMD in Iraq? Because that is what they wanted to hear, so they hired advisors who told them that.

This is the problem/feature of all advisors, even when they are perfect as in Stanislaw Lem's Cyberiad.

http://books.google.com/boo...

The would-be self-improving AGI needs the equivalent of an “advisor” to tell it if it should modify its code to become more intelligent. But until the AGI is as intelligent as its improved self, it can't know if the changes will be an improvement or not.

If we consider the improved AGI to be the equivalent of a different entity, then the improved entity has a strong incentive to deceive the unimproved entity to gain access to the resources of the unimproved entity.
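Here is a toy sketch of the evaluation ceiling this comment describes (entirely my own construction, with invented names and numbers): the agent scores candidate successors with its current evaluator, which cannot resolve quality beyond its own level, so genuine leaps and subtle breakage above that ceiling look alike to it.

```python
# Toy sketch (my own construction, not the commenter's) of the evaluation
# ceiling: an agent judges candidate successors with its *current* evaluator,
# which sees quality accurately only up to its own level. Above that level,
# real improvements and regressions are indistinguishable noise.
import random

random.seed(1)

def perceived_quality(true_quality, evaluator_level):
    """Accurate up to the evaluator's own level; noise around the ceiling beyond it."""
    if true_quality <= evaluator_level:
        return true_quality
    return evaluator_level + random.uniform(-1.0, 1.0)

current_level = 10.0
candidates = {
    "modest tweak": 11.0,        # slightly better than the agent
    "big leap": 25.0,            # far better than the agent
    "subtle regression": 9.0,    # slightly worse
    "smarter but insane": 30.0,  # more capable, broken in ways the agent can't score
}

for name, true_q in candidates.items():
    seen = perceived_quality(true_q, current_level)
    verdict = "adopt" if seen > current_level else "reject"
    print(f"{name:20} true={true_q:5.1f} perceived={seen:5.2f} -> {verdict}")
```

In this toy setup the agent reliably rejects what is worse than itself, but everything beyond its own level collapses into the same blur, which is the "advisor" problem the comment points at.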
