What, based on the idea that, if you wait long enough, the "some circumstances" will eventually crop up? That seems a dubious premise.


Politics: lots of people with different values produce political outcomes we disagree with, in one form or another.

Socially conservative, anti-free-market robots, anyone?


Why use the "powerful foreign creatures" reference class instead of "creatures we bring into existence", i.e., pets, domestic animals, and descendants? We do spend a lot of effort instilling values in our descendants, and it seems to work relatively well.

The reason education and propaganda don't work well on foreigners is that they already have their own values, and all agents tend to protect their existing values from external modification. But that is not necessarily the case for creatures we bring into existence, so it makes sense that we'd have more success with them.


Alex and ad, where did you get the idea that I was offering guarantees?

If the AIs evolve much faster than we do, then any approach that can fail under some circumstances is going to fail on a timescale that might seem long to the AIs but will be short to us.


Wei, I take an "outside" view and focus on what has so far worked best to keep peace with the most foreign powerful creatures we have known. We have tried education and propaganda to mold their values, and also law, trade, and treaties to gain peaceful interaction. The latter has worked far better than the former. You could personally learn about these institutions and consider how best to improve or adapt them to new problems. I estimate almost no chance you can "solve" values in a way that gives you "everything."


I look to "values" first, because:

1. The "peace" problem seems at least as hard as the "values" problem. The solutions you propose seem very unlikely to succeed, even if you could convince society at large to adopt them. I think AIs would naturally want to live apart from humans for efficiency (their optimal environment is likely very different from that of humans). And AIs are likely to invent their own institutions or methods of cooperation, optimized for their cognitive traits, which we would find very difficult to participate in. (See this post, for example.)

2. I have little idea what I could personally do to encourage "peace". What concrete suggestions do you have for readers of your blog?

3. All is lost if we fail on both problems. Solving "values" alone gives us everything, but solving "peace" alone gives us only a small share of the pie.


"Yes, both can help, but what do you look to first, and for what do you think all is lost if you don't get things the way you want?"

I think using "property rights" to stop different-values AIs from converting us into their preferred variety of paperclips is impossible. There is no consensus answer to your "rhetorical" question.


Wei, don't read too much into the title. I have amply clarified that the dispute is over emphasis. Yes, both can help, but what do you look to first, and for what do you think all is lost if you don't get things the way you want?


I did not say I was dead set against broadcasting; I said we should make that choice together. Creatures do not have to be human for law, trade, and other institutions of peace to function between them.


TGGP, I often have trouble finding my old relevant posts; thanks. Yes, becoming domesticated is far better than extinction.


If Chalmers's position is that AI++ with bad values would surely lead to disaster, in the sense of human extinction rather than in the sense of being left with a very small share of the pie, then that does seem mistaken, due to the possibility of strong property rights.

But why do you say "Seek Peace, Not Values" instead of "Seek Both Peace and Values"? It seems to me that we should seek to instill our values into AIs, since that gives the best possible outcome, but in case that effort fails, also look for ways to live in peace with AIs who do not share our values, so that we're at least left with something rather than nothing.

Your overall position, advocating that we ignore the first-best outcome and work only toward the second-best outcome, is really puzzling.


Vladimir, most land on Earth is legally owned by humans today. As long as future AIs respect our property rights, biological humans can continue to survive. We'll trade or rent some of that land to AIs in exchange for their services, but those who wish to remain biological will retain enough to keep themselves sheltered.


It occurs to me that Prefer Peace is another relevant post.


Roko, the relevant O.B. post for constraints vs. values is Prefer Law to Values.

Robin, as a total (rather than average) utilitarian, you presumably view our domesticated animals as more fortunate than their peers that were never domesticated. Would you say that, as a second-best outcome, we should hope to become the domesticated animals of more powerful creatures, even as a mere food supply or as laboratory test subjects?


Doug S. nailed it. Even assuming that many examples from human history are not good counterarguments to Hanson's "fear of foreigners is unproductive" argument, independent AIs are not human. Never mind the difference in "values"; self-preservation motives will diverge and become incomprehensible. Our interactions with non-human animals have only rarely been even arguably positive for them. Quick, pick a species of wild animal that you would be comfortable waking up tomorrow to find had suddenly leaped ahead just to *human intelligence*. What's strangest is that Hanson is on record as saying that he's dead set against broadcasting our presence to possible aliens. What, AIs evolved under a yellow sun are nicer than the ones from red or white suns?

http://speculative-nonficti...


I'm disagreeing with a particular thing a particular person said in a particular context. Chalmers said any AI++ with bad values would surely lead to disaster, and said it in a way that didn't suggest he thought this was true by definition.
