44 Comments

"It" being the AGI.


In order to improve itself beyond human-level intelligence, it will probably need to know everything we know about physics and computer science. We would HAVE TO provide all that knowledge; otherwise it just wouldn't be able to improve itself (or at least not at a reasonable speed). Knowing all that and being smarter, it can figure out the rest.


there remains the possibility that an AGI cannot recursively self-improve unless the correct (fully friendly) morality has been built-in at the start.

But is there any reason to believe that? That is not even a question about relative probabilities; is there any reason to believe that recursive self-improvement is impossible without fully friendly morality? If any recursive self-improvement has been accomplished so far, it has done so without being fully friendly.


Geddes is an AI crank who happens to be obsessed with me in particular. I routinely delete his comments on my own posts, and I may ask Robin for permission to do the same on his posts if Geddes sticks around.


Pardon me: 132 I.Q. => ~98th percentile
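
(For reference, a quick sanity check of that conversion, assuming the standard IQ scale with mean 100, standard deviation 15, and a normal distribution; the snippet below is just illustrative arithmetic.)

```python
from math import erf, sqrt

def iq_percentile(iq, mean=100.0, sd=15.0):
    """Percentile of an IQ score, assuming a normal distribution."""
    z = (iq - mean) / sd
    return 100.0 * 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(round(iq_percentile(132), 1))  # ~98.3, i.e. roughly the 98th percentile
```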


...it could be the case that intelligence is a subset of morality!

Doubtful -- and I'd say falsified at the human level of intelligence. Eliezer recently pointed to Pliers Bittaker as an amoral monster; Bittaker had a tested I.Q. of 132, which is about 95th percentile. Postulating that one has to be moral in order to recursively optimize one's ability to hit an optimization target seems like so much wishful thinking to me.


anon said:

>mjgeddes: If morality does not follow from sufficient coherent introspection (which you seem to grant that it doesn't), in what sense would the existence of a "universal morality" be helpful? The AI will follow its own morality, which is whatever its program precisely says it is, which depends on the programmer.

In order to show that 'unfriendly SAI' is a real possibility, it's not enough simply to establish that morality is not part of intelligence (which I do concede that EY has done). The 'extra' hidden assumption that EY makes is that the goal system is independent of the AGI's ability to self-improve.

If this assumption is false (which I'm very sure it is), then an 'unfriendly' morality may limit the AGI's ability to self-improve, and thus prevent the unfriendly AGI from improving to the point of having the ability to do world-destroying damage.

That is to say, there remains the possibility that an AGI cannot recursively self-improve unless the correct (fully friendly) morality has been built-in at the start. It's true that morality is not a subset of intelligence, but it could be the case that intelligence is a subset of morality! Correct friendliness may be precisely the necessary condition that enables recursive self-improvement.

It may be true that morality and intelligence are not the same, but the two may complement each other, and in that case, one should really speak of 'super cognition' rather than 'super intelligence'.

In short, if universal morality exists, then programmers putting in the wrong morality wouldn't succeed in creating an SAI. (Their AGIs will inevitably be limited: enough to do some damage, perhaps, but not enough to destroy the world.)


Philip: Consider: it's the year 2050, and everything is a lot more automated than it is today. Factories are automated, with few employees. Transportation is automated -- trucks and railways move standard containers around with little or no human intervention. They typically don't have drivers, and if they do, the drivers are told where to drive by the on-board navigation systems. Warehouses too are automated. When a lorry arrives at a factory, the workers use computers to check what is to be loaded and unloaded. All the payments for these transactions are automated.

If the AI controls the world's computers, or a good proportion of them, it could probably build a robot army before anyone notices.

I think popular unfriendly-AI scenarios focus a disproportionate amount of fear on the parts of our lives that are controlled by computers without human oversight. The assumption seems to be that a superhuman AI could easily hack another computer, but would have difficulty hacking a human.

Hasn't the success of, to take recent examples, Scientology and Mormonism, shown that humans are pretty easily hacked even by other humans?


"It seems to me that Eliezer's hypothesis is based entirely on the first kind of knowledge, and wholly neglects the second kind."

He plays down experiments - usually a bit too far - but he's also given his reason for doing so: he thinks you can get a lot of juice from a few observations.

Arthur, empirically speaking, we are already in an explosive spiral of self-improvement. That's the observed nature of an evolutionary process in a sufficiently-benign environment.


I don't really understand why code introspection necessarily leads to an explosive spiral of self-improvement. Let's say I put some nanomachines in my head, creating a read/write interface to the structure of my brain. You could give me an eternity and I still wouldn't figure out how to make myself more intelligent by remapping my synapses. An AI with genius-level human intelligence could very well be too stupid to improve its own code.
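
To make that gap concrete: having read/write access to your own code is not the same as having a procedure for finding improvements. A minimal, purely illustrative Python sketch (hypothetical code, not anyone's actual proposal):

```python
import random

# Full read access to "my own code" -- here just the text of this file.
source = open(__file__).read()

def mutate(code: str) -> str:
    """Write access without insight: flip one random character."""
    i = random.randrange(len(code))
    return code[:i] + chr((ord(code[i]) + 1) % 128) + code[i + 1:]

def is_improvement(old: str, new: str) -> bool:
    """The hard part: judging whether a rewrite actually makes the system
    smarter. Nothing about having read/write access answers this."""
    raise NotImplementedError("this is where the actual intelligence has to go")

candidate = mutate(source)
# Without a working is_improvement(), iterating mutate() is just a random walk
# through mostly-broken programs, not an explosive spiral of self-improvement.
```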


"If anything, I think this is highly disputable."If you think so, then we have a miscommunication, unless you are postulating that you can accomplish unlimited calculation with arbitrarily small amounts of matter. At some point, you cannot get any more computation out of a unit of matter, and you presumably hit diminishing returns well before that point.

If you have a (theoretical) way to simulate a galaxy down to sub-atomic precision using only 20 molecules, I would love to hear it and will grant unlimited calculation without using increased matter. Until then, more vespene gas.
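
Neither side cites numbers, but for a rough sense of where that ceiling sits, two textbook bounds (Landauer's principle for energy per erased bit, Bremermann's limit for operations per kilogram of matter) can be plugged in. The sketch below is just back-of-the-envelope arithmetic with standard constants, not part of the original exchange:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 300.0             # room temperature, K
c   = 2.99792458e8      # speed of light, m/s
h   = 6.62607015e-34    # Planck constant, J*s

# Landauer's principle: minimum energy dissipated to erase one bit at temperature T.
landauer_j_per_bit = k_B * T * math.log(2)   # ~2.9e-21 J

# Bremermann's limit: maximum rate of bit operations per kilogram of matter.
bremermann_ops_per_s_kg = c**2 / h           # ~1.36e50

print(f"Landauer:   {landauer_j_per_bit:.2e} J per erased bit at 300 K")
print(f"Bremermann: {bremermann_ops_per_s_kg:.2e} bit-ops per second per kg")
```

The exact values matter less than the fact that both are finite per unit of matter and energy.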


"A much smarter than human AI is basically impossible to contain or control; if it wants to it will take over the world, and then it will achieve whatever ends it has."

The above suggests that an AI might not want to take over the world. This could be true, say, for an upload of human intelligence, but within the space of AIs, taking over the world seems to be a VERY strongly emergent subgoal; hence Omohundro's paper "Basic AI Drives": http://selfawaresystems.com....


Mitchell, yes of course.

gaffa, yes, I've said so many times, and Eliezer is a fine choice for that role.

On physical access, I believe the scenario is that a very smart AI could talk or trick its way out of a box, and then gain enough physical insight and abilities.


Even if the probability of unFriendly Foom is low, isn't it still good that some people are thinking about it, considering the consequences if it does happen?


bbb: Simulations don't have to be exact to be useful. We imagine futures using abstractions; an AI can imagine the future with more accurate, but still computationally and evidentially efficient, abstractions.

luzr: Smart people are probably not an order of magnitude 'smarter' than average. Certainly not two. Animals can be pretty smart, but we took over the world.

Smart people draw flawed conclusions because their abstractions are flawed, and they don't know it. That tendency was good enough in our evolutionary past. GAIs could rewrite themselves to recheck things in situations similar to past failures.

Whether or not it is obsessed with resources depends entirely on what it 'wants'. It could be content with virtual navelgazing, or it could want to make sure that no cancer ever exists anywhere in the universe.


"However well you can achieve your goals right now, you could probably do much better with far more power. At some point, you need more matter and energy to get more power."

If anything, I think this is highly disputable. If nothing else, the whole of technological development seems to be about processing more information with less matter.

Note that the limiting factor here seems to be SRT. That is why we always need smaller chips to get more processing power.
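
If SRT here means the speed-of-light limit from special relativity, the "smaller chips" point reduces to simple arithmetic: a signal cannot cross the die faster than light, so higher clock rates shrink the distance it can cover per cycle. A rough sketch (illustrative numbers only; real on-chip signals are considerably slower than light):

```python
c = 2.99792458e8  # speed of light, m/s

for clock_hz in (1e9, 3e9, 10e9):
    per_cycle_cm = c / clock_hz * 100  # max distance per clock cycle, in cm
    print(f"{clock_hz / 1e9:4.0f} GHz -> at most {per_cycle_cm:5.1f} cm per cycle, in vacuum")
```

Anything that has to respond within a single cycle must fit inside a fraction of that distance, which is one reason higher clock rates push toward smaller structures.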
