Sure, but don't confuse the causality. If EY realized the FAI problem was the most pressing of all problems, and only then inserted himself into the midst of it, then this is an example of rationality not rationalization.


Eliezer: "Thou shalt give me examples."

Put these in a Google search box:

site:overcomingbias.com "and lo"
site:overcomingbias.com "unto"

I'll retract the qualifier "when speaking of himself", which I used because the examples I remembered were from the recent string of autobiographical posts. It seems to be a general inclination to use occasional archaic words.


@Phil

Berashith Bera Eliezer Ath Ha Amudi Va Ath Ha AI.
In the beginning Eliezer Created the Friendly and the AI.

Viamr Eliezer Ihi AI Vihi AI.
And said Eliezer, Let there be AI, and there was AI.

-- Overcoming Genesis, 1:1, 1:3, The AI Testament (^.^)


Ok, that makes more sense; it just seems to me like you run into massive wishing problems when you try to formulate the phrase "when you pull me apart for carbon atoms."


Mike Blume, I'm simply proposing an ad hoc stipulation be built in from the start: when you pull me apart for the carbon atoms, plunk down a sim of me in the ant farm. Instead of us building an escape-proof box for it, let it build one for us.

Not ideal, perhaps. But if the first AI is likely to take over, if the world's leading proponent of friendly AI is in the providing-inspiration-for-the-young'ins stage, and if copies/uploads/simulations of a person are that person, it seems like the best bet.

And then EY can redirect his efforts toward creating the first AI--friendly or not.


"I mean, sometimes he uses King James Version grammar when speaking of himself."

Not that I think that's a bad thing, but still, I'm not entirely sure what you're talking about here. Thou shalt give me examples.


Eliezer is pretty dramatic a lot of the time. I mean, sometimes he uses King James Version grammar when speaking of himself. That's a little disturbing.

However, I have often observed that, in subjective fields, people who absurdly overstate their claims, like Freud and Kuhn and Lovelock and Skinner and Derrida and most famous philosophers; or make claims that are absurd to begin with, like Frank Tipler and (sometimes) Jerry Fodor and most of whatever famous philosophers are left; get more attention than people like, I don't know, Spinoza, who make reasonable claims.


burger flipper: trade?

If I choose to keep a set of ants in a glass sandbox on my desk, I do it because it amuses me. There is absolutely nothing those ants could hope to offer me which would make one iota of difference in the matter, and indeed, which I could not take from them myself.

What on earth could we possibly trade with an unfriendly AI?


If fooming AI is an eventuality and the cutting edge of friendly-AI technology consists of inspiring the next generation, it might be time for a contingency plan.

If the AI is going to be so powerful it cannot be contained in any box, if it could use human atoms for building blocks, and if a simulated me = me, maybe we should trade the atoms for a box, one for us to be placed in ourselves: simulations given a simulated earth. We don't even have to swallow the blue pill. Leave the memories, so we either don't build another AI within the AI simulation, or, if we do build one, leave the memory of the solution intact so we just live in a recursive loop.


Burger flipper: "That leaves tricky questions about setting the rules of the sandbox we get plunked down in. But would the sandbox rules need to be as perfect as the AI's utility function?"

Yes.


Eliezer: But the question is: Where does ROBIN think Robin's abstractions break down? He thinks he's accounted for your scenario, but he probably doesn't think his abstractions are perfect. It should be a strong argument if you can show that your foom is in one of the regions where his abstractions break, but first he should concede those regions.


I suspect I have a pretty good idea of the gist as well, but I'd love to read how he'd choose to say it.

Changing gears, since a complete and true copy of a person is that person, and preserving brains so they might live in the future is a worthy goal, why not concentrate on creating the first unfriendly AI? (Since being first is vital.)

Then just give it one Asimov-like rule: when you disassemble a person you must also simulate him.

That leaves tricky questions about setting the rules of the sandbox we get plunked down in. But would the sandbox rules need to be as perfect as the AI's utility function?


I don't see many attempts to underdramatize the foom scenario.


@burgerflipper: I think that Robin and I both know what my temptations to bias are; hence there's little enough need to list them.

@James, I think I already went into that in around as much detail as I can do. My fundamental objection to Robin's worldview is that it doesn't deal with what happens when agents get smarter, larger, better-designed, or even faster. So Robin's methodology would break down if e.g. he tried to describe the effect of human intelligence on the planet, not using abstractions that he's already fine-tuned on humans, but using only the sort of abstractions that he would have approved of using before humans came along. If you're allowed to "predict" humans using experimental data from after the fact, that is, of course, hindsight.

@Hal: From my perspective, I'm working on the most important problem I can find, in order to maximize my expected utility. It's not my fault if others don't do the same.

Also, I keep saying this, but we're talking about Heaven versus Null. Hell takes additional bad luck beyond that.

We're running a rationalist culture here, so to make something look bad you overdramatize it, so that people will suspect its adherents of bias; after all, we all know that (outside rationalist culture) things get overdramatized to sell them. So here we have the opponents casting the scenario in a dramatic light, and the proponents trying to make it sound less dramatic. This is something to keep in mind.


Doesn't Eliezer's world view make him (Eliezer) the most important person in the world? The problem of Friendly AI is the most important problem we face, since it determines whether the future is like Heaven or Hell. And Eliezer is clearly the person who has put the most work into this specific problem, and has made the most progress.

Having beliefs that make you yourself the most important person in the world, possibly the most important person in history, has got to be a powerful source of bias. I'm not saying such a bias can't be overcome, but the circumstance does increase the burden of proof on him.


If Robin's abstractions are good, then Eliezer should be able to describe the foom event in economic/evolutionary terms without resorting to his own abstractions, and (I think) that should convince Robin.

If Robin's abstractions break down in the case of a self-modifying AI, Eliezer should find other examples of them breaking down that Robin already acknowledges, and that are similar in some relevant way to self-modifying AI.

Perhaps each party should outline situations in which their own abstractions don't apply or aren't accurate.
