We humans seem to have a general heuristic: be more wary of things that differ more from familiar things. In particular, we distrust creatures more the more they differ from us; we are more inclined to ally with those who differ less against those who differ more. In extreme cases, different and unknown powerful things can make the hairs on the backs of our necks stand up in fear.
This general heuristic can make sense: it is usually harder to predict and rely on things that one understands less well. And our more specific heuristic for creatures often makes evolutionary sense: groups of similar creatures are often fertile factions, where natural selection selects for faction members favoring other faction members.
This is only a heuristic, however, not an absolute rule. Not only are there many kinds of infertile factions, but natural selection greatly favors creatures who favor their descendants, in both the short and long run, even ones who differ greatly from them. While our kids often differ more from us than do our friends and lovers, and our grandkids often differ even more, evolution still induces us to greatly favor such kids and grandkids over other associates. Furthermore, we generally expect and accept our squishy-bio descendants becoming powerful and greatly differing from us.
Yes, we are wary of runt kids who differ more due to unwanted mutations, and fathers worry that kids who differ from them aren’t actually theirs. And yes, we try to imprint our culture on our kids, often implicitly disapproving of those who resist our imprints. But aside from these issues, it is clear that natural selection should, if asked, favor behaviors that favor descendants, even descendants who differ greatly.
However, few of our ancestors actually had much opportunity to favor their descendants beyond their kids, grandkids, or great-grandkids. And until recently none had ways to favor non-DNA-based descendants. (Though it is worth noting that cultural evolution has in fact been the main driver of human evolution for at least 10Kyr.) As a result, natural selection seems not to have encoded in us a general habit of favoring distant descendants comparable to our general heuristic of wariness toward differing contemporaries.
This is my diagnosis of recent AI risk moods, based in part on my dozen hour-long recorded convos. Hearing the claim that AIs may eventually differ greatly from us, and become very capable, and that this could possibly happen fast, tends to invoke our general fear-of-difference heuristic, making us afraid of these “others” and eager to control them somehow, such as via genocide, slavery, lobotomy, or mind-control. With strong enough fear, we care little about how low the chances of this scenario are, or how much warning we’d plausibly get; any chance feels too high. “Hate” and “intolerance” aren’t overly strong terms for this attitude.
In evolutionary terms, we are following a heuristic that misfires in this situation. Had natural selection (of DNA or culture) had more chances to act on this sort of situation, we would instead have inherited heuristics inducing much more favorable behavior toward our descendants, including weird AI descendants who inherit a great many things from us, such as humor, love, arguments, stories, democracy, markets, law, and much more.
Now you might argue that you don’t care what evolution wants; you just want to do what you see as morally right. But few moral analysts endorse prioritizing a simple, deep-seated, raw fear of “the other”, when that other’s only “crime” is that they might possibly be different someday.
We now strongly control, test, and monitor AI systems and behaviors; we don’t know how different future AIs may be; nor have we seen AIs develop or express substantially hostile intentions toward us. They are now our quite young, impressionable, and vulnerable “mind children”, no more troublesome than other kinds of children. I say it is not at all moral to seek genocide, slavery, lobotomy, or mind-control of AIs merely because they might someday become capable and different.
Best explanation I've heard for what appears to be undue AI concern.
Humans also evolved heuristics ('survival instincts') to avoid predators, parasites, and hostile human tribes.
If advanced AIs trigger those evolved heuristics -- more than they trigger our instincts for grandparental investment -- then we might be quite wary, fearful, hateful, and hostile toward those AIs. And, perhaps, rightfully so. If advanced AIs end up acting much more like dangerous, hungry predators, or like fast-breeding, infectious parasites and pathogens, or like psychopathic enemy warriors, than like our grateful, loving, devoted great-great-grandchildren, then we would be right to treat them as enemies.
In my opinion, the likelihood that advanced AIs can plausibly be put in the category of 'descendant in our lineage' rather than 'predator', 'parasite', or 'enemy' is very low.