56 Comments

Why does it matter if there's an established slow culture? From your point of view, there will be one soon.


Robin Hanson: . . . a hand-coded AI foom remains possible after ems, but the context would be different in important ways.

Is it a context that makes Friendliness of the hand-coded AI less of a concern? If so, how?


Hal: Rereading some of your arguments, I get the impression that you would favor many copies of you over saving Robin, because at some point the many Hals could do more good than Robin could do. Is that right? This seems different from the idea that we should favor many Hals because they additively have a better quality of life than one Robin.


Hal:"...our present morality is engineered into us by an evolutionary environment which no longer exists. Why should we honor that one?"We don't 'honor' anything. We want what we want. For exactly the same reasons I don't want the future to give rise to a paperclip maximizer, I don't want the future to give rise to societies that commit genocide as common practice.

Like I said before: This scenario already makes me want to look into friendliness, even without a singleton, because it is what I consider an unfriendly outcome. That the people who would exist in this outcome would be ok with it is moot, just as I don't weigh the values of the paperclip maximizer as relevant to what I want.


it is hard to imagine [...] that ems wouldn't consider running 1000x slower as something akin to death

I'm a bit suspicious of statements that begin with "It is hard to imagine" or "I can't see how" when speculating on the non-immediate future. They convey a sense of misplaced confidence in a huge space of potential counter-examples. Whether or not it is probable, it is certainly not hard to imagine, particularly when talking about something as unconstrained as a future em's philosophical intuitions.

Anyway, it is what it is regardless of what the ems consider it to be. It's not total suspension, and it's not information-theoretic death. Put yourself in the role of the evictee, with the options of termination, archival, or massive slowdown. I believe very few would choose archival, and fewer yet termination, especially once there's any established "slow culture" to participate in.


loqi, it is hard to imagine storage demands not being at least 0.1% of running demands, nor that ems wouldn't consider running 1000x slower as something akin to death.

Lightwave, can my scenario really be more fun to argue than the basement AI that suddenly takes over the world?

Hal, even the evolved morality we inherited does not entirely approve of the morality that ems would evolve; the question is how hard we'd be willing to work to change their world/morality to match ours.

frelkins, I'm pretty sure I have no famous ethical axioms.

Virge, bambi is right; I'll assume crude trends continue until I see reasons to think otherwise.

Tyrrell and Roko, a hand-coded AI foom remains possible after ems, but the context would be different in important ways.


1. I am somewhat appalled at how easily everyone discusses the use of ems as tools which can be created and shut down at will. If ems are superior to bio-humans, maybe it's the bio-humans that should be shut down.

2. The whole ems economy scenario strikes me as very unlikely. Eliezer somewhere said to be wary of things that are fun to argue, and I think that's what everyone here is doing.


Thanks for the Repugnant Conclusion link, Hal. On first read, it amazes me to see serious philosophers employing mathematical models that are clearly unstable right at the point where they're drawing their strong conclusions. Any of them working with welfare values on a linear scale that can take both positive and negative values must have noticed the discontinuity in their equations at zero. The tiniest change in the definition of a marginally positive quality of life can make the total welfare go from being the best of the best to being so negative that it isn't worth considering.

It's really not surprising that one can find paradoxes in a welfare function when the mathematics is obviously not modeling what they want it to model over the whole domain of interest. I'll have to think about it a little more. The only paradoxes I'm seeing so far come from unrealistic modeling.
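A minimal sketch of that instability, assuming a purely additive welfare model and made-up numbers:

```python
# Illustrative only: an additive ("total") welfare model, W = N * u.
def total_welfare(population: int, per_person: float) -> float:
    return population * per_person

small_happy   = total_welfare(10**6,  50.0)    # a million people with very good lives
huge_marginal = total_welfare(10**12,  0.001)  # a trillion lives "barely worth living"
huge_shifted  = total_welfare(10**12, -0.001)  # the same lives, zero point nudged slightly

print(small_happy)    # 50000000.0
print(huge_marginal)  # 1000000000.0   -> "better" than the small happy world
print(huge_shifted)   # -1000000000.0  -> far worse, from a tiny redefinition of zero
```

The sign flip comes entirely from where "zero" is drawn, which is the discontinuity described above.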


"Khyre, I'd rather create a real person, even with a limited lifespan, than a zombie/willing-slave without respectable desires."

Well, me too, but my personal preference is irrelevant if my offspring are going to compete with more productive but less human ems.

"But I doubt creating a productive zombie can be done quickly."

("Zombie" has the wrong connotation - think more of an bright, enthusiastic cult member. But you did say "productive zombie", so that's ok)

I'm not so sure about the long time frame. We're not talking about understanding how memory encoding works or reverse-engineering anything "deep" about human intelligence; we're talking about psychological conditioning. If you can stomach it, stretch your imagination ...

Think about removing all ethical limits from experimentation in psychological conditioning, and having the ability to perform perfectly repeatable experiments.

Imagine if you could get hold of a pre-adolescent em (f**k that's a horrible thought) - the extra plasticity might be worth the longer training time.

You hear about new discoveries in neuroanatomy just about every month from fMRI. Imagine what will be known by the time uploading is possible. Even if you don't know exactly where all the neurons go and why, you might be able to engineer gross personality changes. You can experiment as much as you want.

Virtual psychopharmacology.

If I can think of that off the top of my head, think what would be achievable given the enormous commercial pressure to produce a willing slave. Yes, the development might be f**king horrible, but if you're going to assume that ems can be involuntarily KILLED, I don't think you can assume there will be any ethical restrictions on em development.


It seems the accepted scenario here presents an artificial dichotomy between "funded" (living) and "unfunded" (dead) ems. Is this really the case? The primary thing that determines an em's (objective) lifespan is the longevity of its storage, not necessarily the CPU time allocated to it. If processing, not storage, is the bottleneck, then all it takes is a small amount of generosity (one wealthy storage-baron) to "freeze" unfunded ems. If decent compression is applicable to storage of forked ems, this type of coverage could easily be universally practical.

But why go straight from 1 to 0? An em can be slowed down to a near-infinite degree. A 6502 pried out of a NES, given access to sufficient storage, could run an entire civilization, albeit at a tremendous slowdown.

Continual genocide certainly seems possible, but as far as I can tell, you'd need to be confident that storage demands will keep pace with computing demands to put much weight into such a belief.
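A back-of-envelope sketch of that tradeoff, using the 0.1% storage-to-running ratio quoted in this thread and otherwise invented cost numbers:

```python
# Illustrative arithmetic only; the absolute cost figure is invented.
FULL_SPEED_COST = 1000.0   # cost per objective year to run one em at full speed (arbitrary units)
STORAGE_FRACTION = 0.001   # the "at least 0.1%" storage-to-running cost ratio

def cost_per_objective_year(slowdown: float) -> float:
    """Compute cost falls with the slowdown factor; storage is paid regardless."""
    return FULL_SPEED_COST / slowdown + FULL_SPEED_COST * STORAGE_FRACTION

for factor in (1, 10, 1000, 10**6):
    print(factor, cost_per_objective_year(factor))
# 1 1001.0
# 10 101.0
# 1000 2.0
# 1000000 1.001   <- essentially the pure-archival floor
```

On these numbers a 1000x-slowed em costs only about twice as much as pure archival, which is the sense in which jumping straight from 1 to 0 looks like an artificial dichotomy.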


James, the problem is that our present morality is engineered into us by an evolutionary environment which no longer exists. Why should we honor that one? Evolution does encourage us to reproduce, but it does so via the sex drive. An alternative would have made us value reproduction per se, and given us instinctive awareness that sex would lead to reproduction. But presumably that would have been too complex to engineer into our more primitive ancestors. This contingency hardly seems a sound basis for favoring the resulting set of values.

However, I admit that it is hard to come up with arguments to choose one morality over another. Consistency would be desirable at a minimum. You might review the discussion around Parfit's "Repugnant Conclusion", which to me suggests an inconsistency in failing to value new life sufficiently.

Virge, in answer to your question, although I think Robin has more to offer the world than I do, if you were to balance enough copies of me against him dying, then yes, at some point I think it would be moral to favor the copies. Whether five is enough is hard to say. But as I was trying to indicate, these kinds of dilemmas are not specific to the issue of copying. Would you save Robin's life or that of five random people, if you had to choose one or the other? How about two people? How about ten? What if they are old and about to die? You can come up with a million variants. It is always hard for us to balance life against death. And see what you think about the Repugnant Conclusion linked in the previous paragraph.


Richard, do you assign zero value to your autonomy? Do you also assign zero value to your personal enjoyment of the process of achieving your goals?

Virge, here is my reply.


Virge:

> (a) why coded AGI cannot or will not be produced by current human efforts, or
> (b) why a self-improving AGI is necessarily limited or extremely slow to self-improve?

I can't think of a place where Robin has explained these, nor would I expect him to (though it would be interesting). It's a question of burden of proof: if somebody makes up bizarre future scenarios, we expect them to demonstrate their likelihood, not for others to convincingly prove them impossible.

For (a), it partly depends on what you mean by "current". Since half a century of effort has produced squat, it's not unreasonable to project some more squat, unless provided a reason not to. The credulous always latch onto today's handwaving as a "reason" when they really want to, which leads them to consider other people unreasonably skeptical. While there is no provable reason to think that researchers will never understand intelligence well enough to code it, nobody has demonstrated such understanding yet, nor even convincing progress toward a theoretical foundation on which coding could be based.

Again, for (b), it depends what you mean by "extremely slow" -- even if the millions of man-years finally produce a coded AI, how many millions of years should we expect it to take, on its own, to produce a better coded AI? Do you consider being merely millions of times more capable than human beings to be "extremely slow"? That's what would be required for any alarming self-improvement rate to occur. As to whether it is "necessarily limited", well, if you find it more plausible to posit "unlimited" capability for a coded AI, I guess that's up to you.


@Roko

Doesn't Robin have two famous ethical maxims: "try to be better humans" and "actions should be as noble as possible, but not nobler"? Aren't these enough to cover this conversation?


Tyrrell:

"However likely we are to stumble on non-em-based AI, surely we are even more likely to do so once we have an army of ems helping us."

This is a good point. I'd like to hear Robin's response to this.

@Robin: Perhaps you could consider adding a disclaimer as a footer to the bottom of your posts: this would probably save you a lot of time and avoid misunderstandings.

I still think that your analysis would benefit from saying something about ethics, because after all, we are in the prediction business for a reason: namely, to shape the world into desirable outcomes.

You and Carl are debating the different possible ways that a dystopian nightmare could be created, arguing the details of scenarios that we just plain want to avoid. I think that your time would be better spent by first asking "what scenarios do we want to realize" and then thinking about how to get there. Eliezer is adopting this strategy...


Tyrrell: After all, the "artificial" part per se isn't the threat. The threat comes from the "super-intelligent" part...

That nails what I think is the crux of the disagreement between Robin and Eliezer.

Robin seems focused on emulation of humans. Even with easily mass-produced emulations, a foom is unlikely to happen until one can reliably reverse-engineer the emulated human brain and work out how to expand its capabilities. Even then, the highly coupled architecture may present serious limits (e.g. combinatorial explosion) on what modifications can be made. Under the emulation scenario, progress towards SI does look like it would undergo a slow series of improvement steps, with every step constrained by human-scale limitations.

Since Eliezer expects coded SI to come before we can create ems, he's trying to explain why self-enhancing intelligence represents a completely different dynamic from every other change so far in human history. Everything we've seen so far has been limited by human wetware. Even when we mass thousands of humans onto one project, the inter-human communications problems impose limits on our capabilities. If we had a coherent, systematic general intelligence engine, with the ability to self-analyse and self-modify, then it's very difficult to see what could limit its accelerating intelligence. Under this scenario, going foom looks inevitable.
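A toy way to contrast the two dynamics, assuming (purely for illustration, not anything either of them has endorsed) that capability grows as dI/dt = k·I^p, where p captures how strongly each improvement feeds back into the rate of further improvement:

```python
# Toy growth model, illustrative only: dI/dt = k * I**p
#   p <= 1 : each step bottlenecked at roughly fixed, human-scale capability
#   p > 1  : improvement rate itself grows with capability -> finite-time blow-up ("foom")
def simulate(p: float, k: float = 0.1, i0: float = 1.0, dt: float = 0.01, t_max: float = 50.0):
    """Euler-integrate the growth law; stop early if capability diverges."""
    t, i = 0.0, i0
    while t < t_max and i < 1e12:
        i += k * (i ** p) * dt
        t += dt
    return t, i

print(simulate(p=1.0))   # plain exponential: ends near e^5 (~150) at t_max
print(simulate(p=1.5))   # super-exponential: hits the 1e12 cutoff at a finite time, well before t_max
```

On this framing, the disagreement is over whether coded self-modification actually pushes the feedback exponent above the human-bottlenecked regime.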

Can someone point me to a comment or post where Robin argues either (a) why coded AGI cannot or will not be produced by current human efforts, or (b) why a self-improving AGI is necessarily limited or extremely slow to self-improve?
