
I recently trotted out the argument from normality of genocide, which nudged Chip Smith into saying that the event which gave rise to the term "genocide" didn't qualify, and that it is in general of limited utility in understanding the conflicts it is generally applied to. He's already a Holocaust denier though, so I guess it isn't that big a leap.


Robin, you really think that genocide is a "rare event"? There sure have been a lot of rare events in the past 200 years (and longer).

Oddly, you are right about one thing: these extremely common events of one group of people trying to exterminate another are killing a shrinking percentage of the living population. At least, that's something I read online recently...


I assume you say "needlessly expensive" because you believe that the law could be made clearer, and that that would make court cases shorter or less common, or something similar to that. Is that it?

What's your evidence that such improvements are possible? What would better law look like? How would we know that it's better? What's the risk of unintended consequences? Why aren't we using better law now? Isn't this a problem that lots and lots of very intelligent people have been struggling with for a very long time? What insight do you have that makes possible what they have failed to accomplish?


I didn't mean that its having happened before implies that costs are low. I meant that the methods of coordination used in Rwanda have low costs.

But, since you insist, even if it is true that it "doesn't usually happen" (which can be argued against with countless examples), infrequency does not imply that the costs are high.


"We live now in a world where some of us are many times more powerful than others, and yet we still use law to keep the peace, because we fear the consequences of violating that peace. Let’s try to keep it that way."

I know the psychopaths in this world are happy to hide behind the law.


How about Chipmunks with pulse lasers?


On reflection, this post isn't really about uploads - so perhaps this is the wrong spot for a discussion.


An FAI's values are programmed in: facts about the AI's initial conditions in a hopefully-proven-stable self-modifying system. (My job is to figure out exactly how to do that.)

Robin's laws are laws in the human sense, not in the sense of the Three Laws of Robotics; they are imposed from without by threatened sanctions, carried out from outside the AI - presumably by other AIs, who do so under threat of sanctions themselves, and so on. Robin thinks the AIs will not coordinate to change the system for fear of being the victims of their own next coordinated change.


Whether wars and democides (government attacks on its citizens) are uncommon depends entirely on what one compares them to. I am fond of examining them statistically. Looking at the Correlates of War database (http://www.correlatesofwar.org) shows that ~2.16 wars start per year, most lasting days to weeks and having intensities on the order of 10-100 people dead per day. However, there is a well-known power-law frequency-size relationship (the Richardson law) with exponent -1.31, which means that arbitrarily large wars can occur with non-negligible probability. Rummel's _Statistics of Democide_ appears to support another, steeper power law for democides. Compared to wars, democides appear to be deadlier: less common, but the killings are often larger. The total democide death toll of the 20th century could well be on the order of 250 million.
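As a rough illustration of what a Richardson-style power law implies, here is a minimal sketch. The exponent -1.31 is the one quoted above; the minimum war size `x_min` is an illustrative assumption, not a fitted value.

```python
# Sketch: tail probability under a Pareto-type power law for war sizes.
# Exponent taken from the Richardson law as quoted above; x_min is an
# illustrative assumption.

def tail_probability(deaths, x_min=1000.0, alpha=1.31):
    """P(war size >= deaths) under a Pareto tail with the given exponent."""
    if deaths <= x_min:
        return 1.0
    return (deaths / x_min) ** -alpha

# A war 1000x larger than the minimum size is rarer, but far from
# impossible: its tail probability falls only by a factor of 1000^1.31.
print(tail_probability(1e3))   # 1.0
print(tail_probability(1e6))   # ~1.2e-4
```

This is the "arbitrarily large wars with non-negligible probability" point: a thousand-fold increase in size reduces the probability by only about four orders of magnitude, not exponentially.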

However, looking at the full worldwide death toll over the last century, the probability of being killed by democide, war, or violence in general has been a pretty small risk compared to the normal medical causes. I think this strongly supports Robin's claim that mass killing is too costly to be indulged in for weak reasons. Rummel's observation that democracy strongly reduces democide risk also seems to support the idea that societies where people are integrated are less likely to get rid of them.


I too find Robin's repeated claims that exterminations (even ignoring the wide variety of slightly lesser persecutions, enslavements, rent-seekings, etc. in which the robots could engage) are "rare" in human history to be bizarre. There are hundreds of documented wars and genocides per century, probably vastly greater numbers of smaller-scale undocumented cases in prior centuries, and nearly a million murders per year. Furthermore, humans only have to be exterminated once. Robin has the burden of proving, against all historical knowledge, that the probability of such an extermination in any given year, by any of possibly billions of super-intelligences of widely varying values, that furthermore can rapidly evolve over the course of a single year, is extremely low. It defies all plausibility, but I think the problem here is that Robin (and alas, he is not the only one) is a student of economics (which assumes all transactions are voluntary) and not of history (which observes that they quite often are not).

I do, however, find Eliezer Yudkowsky's discussions of "Friendly AI" quite vague, and the distinction between Robin's "law" and Eliezer's "feelings" positions to be meaningless. Eliezer correctly points out that we can't count on analogies to our own evolved psychologies to hold, because AIs will be designed rather than evolved, or at least (per genetic AI techniques) evolved in a very different environment. This being the case, what basis is there to make a strong distinction between a "law" that prevents an act and a "feeling" or "desire" that prevents an act? There's nothing in contemporary computer architecture or design, beyond our anthropomorphizing of machines, to suggest such a strong distinction -- it's all just partial recursive functions, often with interaction and concurrency thrown in. So is there actually any major and meaningful distinction between "program the motivation" and "program the law", a distinction that does not rely on anthropomorphism and applies whether or not the AI is designed or evolved? If so, what is that distinction, in concrete engineering or mathematical terms, and, without anthropomorphizing by assuming machines have "feelings" we can empathize with, why does it lead to different outcomes?
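The point that both positions reduce to the same computation can be sketched as follows; the names and payoff numbers here are illustrative assumptions, not anything drawn from Robin's or Eliezer's actual proposals.

```python
# Sketch: a "built-in value" and an "external law" can both be written
# as penalty terms in the function an agent maximizes. Computationally
# the difference is where the term comes from, not its kind.

def best_action(actions, base_utility, constraints):
    """Pick the action maximizing utility minus all constraint penalties."""
    def score(a):
        return base_utility(a) - sum(penalty(a) for penalty in constraints)
    return max(actions, key=score)

actions = ["cooperate", "defect"]
raw_gain = {"cooperate": 1.0, "defect": 3.0}

# "Programmed motivation": disvaluing defection is part of the agent's goals.
inner_value = lambda a: 5.0 if a == "defect" else 0.0
# "Law": the same disvalue arrives as an expected external sanction.
expected_sanction = lambda a: 5.0 if a == "defect" else 0.0

print(best_action(actions, raw_gain.get, [inner_value]))        # cooperate
print(best_action(actions, raw_gain.get, [expected_sanction]))  # cooperate
print(best_action(actions, raw_gain.get, []))                   # defect
```

In this toy framing the two mechanisms are extensionally identical; any engineering distinction would have to live in how the penalty term is produced and whether it can be removed, which is exactly the question the commenter is pressing.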


No, because the law is needlessly ambiguous and legal process is needlessly expensive.


Because law is substantially zero sum competition?


Assume that sentient robots would be inherently rational, without the flaws that make humans psychotic or irrational. If a robot can replicate itself, then it has the ability to power itself and repair itself. What does it need us for?

Throughout history, genocide has often been seen as rational. Carthago delenda est... and the Romans killed or enslaved everyone, razed the city, and sowed the fields with salt. For the Romans, genocide was a rational response to the conflict with Carthage in that it solved the problem once and for all.

War has been described as diplomacy by other means. What happens when the sentient robots, all communicating at the speed of light, decide to do something that humans oppose? What if they know they're right? In that case, would killing humans be seen as a rational response?


Such groups would hear early about the proposal to eat them, retaliate against the proposers, suggest other groups to eat instead, and in the worst case actively resist plan implementation. Suppose, as seems likely, uploads think and coordinate several thousand times faster than humans running on a wet substrate. By the time wetware humans hear about the plan, it will already be under way.
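A minimal sketch of the arithmetic behind that speedup claim, using an illustrative round 1000x factor (the comment only says "several thousand times"):

```python
# Sketch: mapping wall-clock time to subjective time for uploads
# running at an assumed, illustrative speedup factor.

def subjective_days(wall_clock_days, speedup=1000):
    """Subjective days experienced by an upload during a wall-clock interval."""
    return wall_clock_days * speedup

# One wall-clock week of human deliberation:
print(subjective_days(7))   # 7000 subjective days, roughly 19 years
```

Even at the low end of "several thousand", a week of wetware debate gives uploads decades of subjective time to coordinate, which is the commenter's point.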


"We could similarly eliminate some sick, weak, mentally ill, stupid, or idle rich. But we don’t. Why?"

Among other reasons, because human voters *value* their sick, weak, mentally ill, stupid, etc relatives, and feel Far compassion towards non-relatives.

With respect to the idle rich, extensive progressive taxation is the norm in democracies, the rich can engage in focused lobbying/support of anti-expropriation politics, and voter values (ethnic pride where the rich are of the same group, the American Dream, attitudes around the legitimacy of parents providing for their children) constrain confiscatory taxation.

"Women and men are very different, but so integrated that gendercide is unthinkable."

Women and men have deep biological drives towards *valuing* some members of the opposite sex as mates and family members.


The degree of integration is more important than the degree of homogeneity. Women and men are very different, but so integrated that gendercide is unthinkable.
