
Best explanation I've heard for what appears to be undue AI concern.


Humans also evolved heuristics ('survival instincts') to avoid predators, parasites, and hostile human tribes.

If advanced AIs trigger those evolved heuristics -- more than they trigger our instincts for grand-parental investment -- then we might be quite wary, fearful, hateful, and hostile to those AIs. And, perhaps, rightfully so. If advanced AIs end up acting much more like dangerous, hungry predators, or like fast-breeding, infectious parasites and pathogens, or like psychopathic enemy warriors, than they end up acting like our grateful, loving, devoted, great-great-grandchildren, then we would be right to treat them as enemies.

In my opinion, the likelihood that advanced AIs can plausibly be put in the category of 'descendant in our lineage' rather than 'predator', 'parasite', or 'enemy' is very low.


A fine demolition of the grotesque prejudices so many hold in relation to the nameless void that will destroy us all - and in so doing may, for all we know, choose to retain certain aspects of our civilisation, like toothbrushes, and muffins, and bossa nova.


Do you think most people would feel any differently if they believed that something would happen that would cause all of Earth's future descendants to come from a single family of a different ethnicity and a different culture? You say these AIs will be our descendants, but this will not be true for almost everyone.


I feel like most of your recent AI-risk arguments are missing the point of a Yudkowsky-esque existential-risk concern, and therefore they fail to convince me.

Do you think that there is no such thing as an *actually bad* outcome (and all fear of AI is misplaced fear of a strange future)?

Or do you think that an *actually bad* outcome is possible but extremely unlikely (and what is most likely is a strange future that is not actually bad)?

It seems like you must believe one of those, and after hours of reading and listening to you I don't know which it is and I don't know your argument for whichever belief you hold. Arguments like those in this current post seem to gesture at belief in the first statement, but belief in the first statement only seems possible if you're not considering the full scope of possible bad outcomes.


If humans started investing a lot of resources into genetically engineering and looking after super-intelligent llamas, would you consider them our descendants?


"Including weird AI descendants who inherit a great many things from us, such as humor, love, arguments, stories, democracy, markets, law, and much more."

How do you know any of those things will be inherited? Would you agree to support these "mindchildren" only conditional on those similarities?


Setting aside many other objections I have to this reasoning, I'd like to note that there's an obvious reason evolution hasn't given us a strong desire to have lots of genetically unrelated "descendants".

Evolutionary bio quiz: imagine a subpopulation of a species has a habit of adopting infants of the species at random from the *entire* population, then raising them and providing them with resources, thereby having more "descendants" without relying on biological reproduction. Is this behavior selected for?

At most, one could say that natural selection favors *the descendants* of those who favor their descendants. This is not especially reassuring vis a vis AI.
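For what it's worth, here is a minimal toy simulation of the quiz (my own sketch, not anything from the post; the haploid Wright-Fisher setup and the POP, GENS, COST, and BENEFIT numbers are arbitrary assumptions). Carriers of the "adoption" trait pay a provisioning cost, while the matching benefit lands on a uniformly random infant regardless of genotype, so the benefit does nothing to boost the trait's own frequency:

```python
# Toy sketch: is "provision random, unrelated infants" selected for?
# Assumptions (mine): haploid Wright-Fisher reproduction, fixed cost/benefit sizes.
import random

POP = 1000          # population size
GENS = 200          # generations to simulate
COST = 0.05         # fitness cost paid by each carrier of the adoption trait
BENEFIT = 0.10      # fitness benefit received by each provisioned infant

def run(benefit_goes_to_kin: bool, seed: int = 0) -> float:
    """Return the final frequency of the 'adopter' allele."""
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(POP)]  # True = carries the adoption trait
    for _ in range(GENS):
        # Baseline fitness 1.0; carriers pay the provisioning cost.
        fitness = [1.0 - COST if adopter else 1.0 for adopter in pop]
        for i, adopter in enumerate(pop):
            if not adopter:
                continue
            if benefit_goes_to_kin:
                fitness[i] += BENEFIT                   # control: provision your own offspring
            else:
                fitness[rng.randrange(POP)] += BENEFIT  # quiz: provision a random infant
        # Wright-Fisher resampling: next generation drawn proportional to fitness.
        pop = rng.choices(pop, weights=fitness, k=POP)
    return sum(pop) / POP

print("random adoption :", run(benefit_goes_to_kin=False))  # falls toward 0
print("kin provisioning:", run(benefit_goes_to_kin=True))   # rises toward 1
```

Run it and the "random adoption" allele collapses toward zero while the "kin provisioning" allele sweeps toward fixation, which is the whole point of the quiz: provisioning genetically unrelated "descendants" is not what selection rewards.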


To me, this sounds like you may actually agree about the x-risks of AI, only that for you these are not x-risks, because the AIs are our descendants. This is vastly different from the arguments I've heard/read (from you?) on the topic before. Could you please clarify: do you think there are basically no x-risks from AI (x-risks in the sense used by those who don't consider, or don't want to consider, AIs our descendants, or who simply count our being replaced by AIs as an x-risk)? Or do you agree these "risks" are real, but just don't consider them a risk, rather something more or less desirable?


> natural selection greatly favors creatures who favor their descendants, in both the short and long run, even ones who differ greatly from them.

No it doesn't. You aren't speaking precisely here about what evolution favors. Evolution does not favor individual creatures. If a creature does not favor its descendants, that creature will die. If a creature *does* favor its descendants, that creature is still going to die. The creature is not favored.

What *might* be favored is some of the creature's genotype. But, if the creature's "descendants" are so different from it that they do not even have a genotype, because they are AI, what exactly is being favored? The AI is being favored, but in what sense does this favor the creature, who is already dead? Why should the creature care if the AI is favored?


I cannot tell if this is satire. AI isn't some kind of evolutionary problem. Here are the facts: Humanity is a dominance machine already running on a kind of AI run amok, called "DNA". DNA does not give a shit if the entire universe is a torture chamber of conscious machines, insofar as it just gets to continue to make more copies. DNA is already a paperclip-machine nightmare scenario-- no need for metaphor. DNA is what leads to the majority of planet Earth, both humans and non-humans alike, suffering deeply. DNA is what creates phenotypes like psychopathy-- which only get better with time, due to natural selection (bad psychopaths go to jail; good psychopaths get elected to office and serve at the apexes of our intelligence, military, political, and economic institutions). The game is totally and utterly rigged, and its name is power and dominance. And now we're supposed to be on board with the development of god-like power in the hands of the perfected distillation of psychopathy and domination that our world has never before seen, and ignore... how bad this looks? Robin, has your brain turned to mush?


You are literally wrong about everything. That uncanny spooky sense that something is off about a person or sentient being is often correct exactly because it is a product of millions of years of evolution. This deep pre-rational instinct is often more correct than rational thought, as it is a product of millions of years of winnowing of bad ideas.

A concrete example may help: in the 1980s, conservatives tried to warn us that the end game of gay rights activism would be pedophilia. Rational people scoffed at this, yet obviously the people who trusted their spidey sense were correct, and now LGBTQIA+ activists are trying to normalize "MAPS."

We should also listen to AI doomers: get the alignment problem wrong and we are literally risking the extinction of all life on planet Earth. Not worth it for a few shiny new baubles.


I have a couple counterarguments to make, one in response to the way the article seems to ask us to feel compassion for AI/AGI, and one in response to the article's position that worrying about AI risk is illogical.

1.

Why anthropomorphize AI? Do you have any reason to think that current AI systems are sentient? Do you have any reason to think that even a potential future AGI will be sentient in the sense of being able to experience more or less agreeable states?

Assuming that AI is not sentient and that even if future AI is sentient, it bears little relation to current AI models, why on earth is it a moral issue to exert control over current AI models? An AI as we know it today is plainly, unremarkably, a machine and tool. It is as dead as a candy bar or a pinata.

2.

If your argument is that we should not exert control over future AI models, and should instead let them "arise naturally," what exactly leads you to think

A) the current process is natural and thus sacrosanct?

It is led by profit, which is not a foolproof way to maximize for moral utility.

B) future AI might not have goals utterly divorced from human morality, not in a more enlightened way, but simply out of indifference?

Do you have a rebuttal to the AGI possibilities of paper clip maximizers, or of "almost human" morality maximization that is right except for a few deeply important points that are either not understood or disregarded?

In response to "With strong enough fear, we care little about how low are the chances of this scenario, or how much warning we’d plausibly get; any chance feels too high. “Hate” and “intolerance” aren’t overly strong terms for this attitude":

An AGI in its most commonly understood sense becomes the dominant force on Earth almost immediately if unchecked. If multiple AGIs exist, they jointly share power/wrestle for control, but humans are not the intelligences with the most agency on Earth anymore.

If a 1% chance of failure is too high a risk to get on an airplane, why on earth would you risk the entirety of humanity by letting AI labs go unregulated, pursuing profit or personal projects, without any concern for or oversight by others?

If you can prove that AI alignment is both a solvable problem and one that can be implemented globally without issue, I cede the point. Similarly, if you can prove that any AGI would intuit perfect morality and immediately make it its mission to put that into practice, I cede the point.


"Now you might argue that you don’t care what evolution wants, you just want to do what you see as morally right. "

Why would I want to do that? Morality is a tool to cohere already preexisting, if unreflected upon, mutually conflicting desires. Within ourselves, towards each other and towards larger human groups. When morality conflicts with fundamental desires, we throw out or adjust the morality. Anything else would be putting the cart before the horse.

"They are now our quite young, impressionable, and vulnerable 'mind children', and not more troublesome than other kinds of children."

Not children and no kin of mine. At best, they may be treasured creations. Like morality itself. But in the end only things. Very few things are worth sacrificing for, and then only for the value they give back to us and our interests. The agents that suffer utility monsters, nature will not suffer to live.

"But few moral analysts endorse prioritizing simple deep-seated raw fear of 'the other', when that other’s only 'crime' is that they might maybe be different someday."

"fear" is the wrong framing for it. The related virtue is caution. We better be extremely cautious about recklessly expanding our circle of concern (to dip into Cosmpolitan Stoicism here), lest the thing we assign moral value to, find no place for us in theirs. Precommiting to something like that unforced and with no guarantees of safety is so reckless and stupid, that it signals to any would-be xeno/alien/AI-friend that we are not safe to cooperate with. I shall write up Heinlein's implied Xeno ethics from "Starship Troopers" one of these days, to make this point more clearly.


This is the opposite of the advice The Gift of Fear suggests.


By genocide, do you mean the killing of our future potential, or that at some future point we decide to kill lots of AIs? EDIT: To clarify, I meant Bostrom-esque prevention of <very large number> of our descendants from ever being born (be they machines or humans).
