Why would developers so severely misjudge their control of [superintelligent growth]?

Most of today's developers don't worry much because they don't have to - the chance of any one of them creating a superintelligence soon is minuscule. The job of speculating on what might happen 20 years or so in the future is one for philosophers, not coders.

Surely, impact is measured not in how many partisans you started with, but in opinion shifts.

Now, I'm not saying that Robin is stupid. It's just that Eliezer is so amazing.

Zubon: I think we're on the same page. I meant 'that large' in comparison to our advantage in other design spaces. We went from kites and balloons to spaceplanes in under 100 years, and we're even better than that at microprocessor design. We could take this to mean that intelligence is very good at these things, but maybe not so good at improving machine vision algorithms (but still much better than evolution).

@Jeff and @Tiiba: I couldn't disagree with either of you more.

Why would you value debate where both sides are in agreement? Only through disagreement and discussion can true debate take place. The contrasting views of Robin and Eliezer are what make this blog thought-provoking and worth reading.

I'm gonna have to go ahead and disagree with you there. I only come here for Eliezer's posts. Everybody else, even Robin, is a toolbag compared to him. It's almost annoying how consistently he hits the nail on the head.

x2

More on architecture vs. content, and hard takeoff as a specific technical problem. It may be wrong to think of a seed AI as a content-producer that turns out discrete stuff the way economies do, where improvements and technologies can be counted as goods. People project abstractions onto the world around them; when they do something, they usually optimize for an abstraction statically attached to that thing. You make a processor, a thing that satisfies the abstraction of a processor; you perform an operation described by one abstraction on other abstractions. This style of development is itself a specific algorithm of rationality: it is what works for us, what we are capable of doing. Economic analysis of this process is static analysis of that algorithm, a specific system operating by more or less simple rules.

If AI starts to invent algorithms for its own cognition, and it scales not by copying little black boxes and integrating them into the old algorithm of the economy, but by expanding its mind, then you are in trouble. The activity of the AI does not consist of discrete actions that produce stuff; it consists of following whatever cognitive algorithm previous incarnations of that AI came up with. The AI's external activity is as much an operation of its mind as its internal activity, and its mind doesn't run on an economy, it runs on novel algorithms optimized for each specific context. Static analysis of such an algorithm isn't going to yield simple laws, apart maybe from what physics, information theory and computational complexity can say on the topic, and that is orders of orders of magnitude beyond what we have seen.
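
One way to see the distinction numerically (a toy sketch of my own, with made-up growth rates, not anything spelled out above): extrapolations calibrated on a fixed improvement rule break down once the rule itself is being rewritten.

```python
# Toy contrast, illustrative assumptions only: growth under a fixed
# improvement rule vs. growth where the rule itself improves each step.

def fixed_rule(capability=1.0, rate=0.05, steps=30):
    """Economy-style: the same improvement rule applied over and over."""
    for _ in range(steps):
        capability *= 1 + rate
    return capability

def self_rewriting(capability=1.0, rate=0.05, steps=30):
    """Seed-AI-style: each step also improves the improvement rule."""
    for _ in range(steps):
        capability *= 1 + rate
        rate *= 1 + rate  # the optimizer rewrites its own optimizer
    return capability

print(fixed_rule())       # ~4.3x after 30 steps
print(self_rewriting())   # explodes far beyond any fixed exponential
```

Any static law fitted to the first curve says almost nothing about the second, which is roughly the point about economic analysis above.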

Interesting that this discussion bogged down with so little progress – this doesn’t exactly inspire confidence in the ability of would-be rationalists to resolve complex issues through discussion.

For what it’s worth, I think the key questions of fact here mostly revolve around the issue of what a mind with human-level intelligence would look like. Eliezer is apparently of the opinion that an AGI is a complex system of specialized modules, where some modules (like vision) do complex but well-defined processing with O(N) to O(log N) performance, while others are best viewed as polynomial approximations of various NP-complete search problems. In this view it’s obvious that once the AGI becomes competent to write AGI code it can rapidly scale up to use any available hardware, and a lot of his other claims about the capabilities of such an AGI rest on fairly short chains of inference.
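
To make that model concrete, here is a rough sketch (my own illustration, not something Eliezer has spelled out) of how much larger a problem each kind of module could handle if k times more hardware became available, assuming runtime is the only constraint:

```python
import math

# Toy illustration with assumed complexity classes: the problem size each
# kind of module could handle in the same wall-clock time with k x compute.

def scaled_problem_size(n, k, complexity):
    if complexity == "log":          # time ~ log n   ->  n ** k
        return n ** k
    if complexity == "linear":       # time ~ n       ->  k * n
        return k * n
    if complexity == "cubic":        # time ~ n^3 (polynomial approximation)
        return n * k ** (1 / 3)
    if complexity == "exponential":  # time ~ 2^n (exact NP-hard search)
        return n + math.log2(k)
    raise ValueError(f"unknown complexity class: {complexity}")

for c in ("log", "linear", "cubic", "exponential"):
    print(c, scaled_problem_size(100, 8, c))
# log: 1e16, linear: 800, cubic: ~200, exponential: ~103
```

On these assumptions the well-defined modules gain enormously from extra hardware while exact search barely moves, which is the shape of the rapid scale-up claim.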

Robin obviously doesn’t hold this view. From his writings to date I can’t tell if he has a different model in mind, or if he just considers the whole question unanswerable at this point. But this seems to be the most fundamental technical issue in play.

That we don't have AI already seems to be evidence that intelligence might not have that large an advantage over evolution in mind design.

Define "that large"? Intelligence has been on the project for something approaching a century. Evolution has had multicellular life for about a billion years on this planet. Perhaps that is what you mean: intelligence may not be 10,000,000 times as quick. Many of us will be disappointed if intelligence turns out to be 1,000,000 times as quick, leaving us to wait most of a millennium.

It's easy for people to decide that they have a good model of intelligence, and that AI is right around the corner, as soon as they get their code working, or as soon as computers get a bit faster.

Could Eliezer's view be a generalization of this? If we just had a self-modifying AI, then Superhuman AI is right around the corner, and if we had a superhuman AI then SUPERsuperhuman AI is right around the corner, and so on.

Eliezer says (IIRC) people expect AI from this or that approach because they don't understand how hard intelligence is. Eliezer knows that he doesn't know just how hard Super-Superhuman AI is, but he thinks Superhuman AI is enough to get there. This seems inconsistent to me.

That we don't have AI already seems to be evidence that intelligence might not have that large an advantage over evolution in mind design (whereas it has a bigger advantage in designing pumps, flying things, cameras, and projectile launchers).

Emile,

The creation of a "friendly god" already seems to presuppose that it knows what's best for us. Historically, the most violence has come from leaders operating under the pretense of doing some sort of good (either for certain groups or for everyone).

I know Eliezer and others are thinking very hard about how not to create an uber-tyrant, but even if they succeed I have my doubts that the financiers of such a massive AGI project (and it would have to be massive, if they hope to beat any would-be competitors to the punch) would make the best decisions. I suppose the same could be true of AGIs made for other purposes, but it's hard for me to imagine them wielding the same sort of political power.

Of course, if Eliezer is right about the potential of AGI, we'll likely have an arms-race on our hands anyways.

*I* read Robin's posts...

"""Robin,

Why do you have Eliezer on this website? You don't seem to be very impressed about his view of AI. The rest of his postings are on philosophy, and he's really terrible at that, though in Ayn Rand-like fashion he thinks he's quite good at it. This website would be much better if you got rid of him."""

I'm gonna have to go ahead and disagree with you there. I only come here for Eliezer's posts. Everybody else, even Robin, is a toolbag compared to him. It's almost annoying how consistently he hits the nail on the head.

PK, if Eliezer was right about very rapid local AI growth, then we'd need to move on to the issue of why developers would so severely misjudge their control of it. If he was right about that, I'd want to tell potential developers their error as clearly as possible.

Michael, it being easier does not say it is easy.

Virge, the question is not what a super-intelligence can do, but how easy it is to create one.

Cameron, you can't assume the AI has no biases.

Emile, "pretty close" doesn't say much about rates.

I don't know about Grant's reasoning, but I value freedom **much** more than safety.

"You can have peace or you can have freedom, don't ever count on having both at once."

We have grown up in a world where both were the "natural" condition, but this has been a historical exception. The past few decades have seen the increasing growth of suppression. "Friendly" AI will only increase that tendency, possibly too strongly to effectively resist. At least this blog has convinced me of the need to work strongly for IA.

Grant: Though I'm much more afraid of an AGI created to be a friendly god than I am of one created for almost any other purpose.

Really? Would you care to give your reasoning?
