We humans seem to have a general heuristic: be more wary of things that differ more from familiar things. In particular, we more distrust creatures who differ more from us; we are more inclined to ally with those who differ less against those who differ more. In extreme cases, different and unknown powerful things can make the hairs on the backs of our necks stand up in fear.
Best explanation I've heard for what appears to be undue AI concern.
Humans also evolved heuristics ('survival instincts') to avoid predators, parasites, and hostile human tribes.
If advanced AIs trigger those evolved heuristics -- more than they trigger our instincts for grand-parental investment -- then we might be quite wary, fearful, hateful, and hostile to those AIs. And, perhaps, rightfully so. If advanced AIs end up acting much more like dangerous, hungry predators, or like fast-breeding, infectious parasites and pathogens, or like psychopathic enemy warriors, than they end up acting like our grateful, loving, devoted, great-great-grandchildren, then we would be right to treat them as enemies.
In my opinion, the likelihood that advanced AIs can plausibly be put in the category of 'descendant in our lineage' rather than 'predator', 'parasite', or 'enemy' is very low.
Surely as an expert in evolutionary theory you can see that "descendant" is not a metaphorical word whose application is a matter of taste or style. It is in fact a technical term whose definition we can just look up. And according to the standard definition, AIs that reproduce are in fact our descendants. Evolution doesn't usually induce descendants to prey on their ancestors, though there are exceptions.
But we don't care about our "descendants" in some abstract sense. We care about our human children (and by extension theirs and so on).
An AI being a descendant of mine in that technical sense does not imply that I would value it as I would a child of mine, merely because they are both descendants.
You have some choice re what you care about.
[apart from all the disagreement: Thank you for your writing, by the way. Even if I usually model things very differently, the way you describe your models is coherent, enjoyable and easy to engage with. I hope one day to express myself with the same level of clarity.]
Yes, I can choose to extend who/what I care about. But such a choice is only possible if it is not to the detriment of those I'm already protective of.
A future of artificial minds sees me and mine dead, unrecognizably changed, or sidelined. A future of non-artificial minds sees us, hopefully, colonizing space as immortals. Perhaps you find this future too pedestrian and unexciting, but it is good enough for me and I consider it my win condition.
Loving or even allowing that, which at best trivializes my agency, would not be like being a parent. It would be self-sacrifice to an inscrutable end. Such a radical change in how I value the various circles of concern relative to one another, given the conflicts that I would foresee it bringing, would not leave me coherent as a person. And since coherent, non-conflicting motivation and belief structures outperform incoherent ones, and are in large part what determines my capabilities and stability as an agent, in some very real sense I cannot actually choose this.
>Evolution doesn't usually induce descendants to prey on their ancestors, though there are exceptions.
Evolution also does not usually induce offspring that:
- are intelligently designed by their parents
- have no physical similarity to their parents and are not even made out of the same basic components
- share no genetic code with their parents
If AIs count as our descendants, they are THE exception to what a descendant normally is; something like this has never happened before in the history of life on Earth. Reasoning about what descendants are usually like in relation to their ancestors does not work here.
A fine demolition of the grotesque prejudices so many hold in relation to the nameless void that will destroy us all - and in so doing may, for all we know, choose to retain certain aspects of our civilisation, like toothbrushes, and muffins, and bossa nova.
Thanks for your powerful wallop to this bullshit.
Do you think most people would feel any differently if they believed that something would happen that would cause a single family from a different ethnicity and culture to become the source of all of Earth's future descendants? You say these AIs will be our descendants, but for almost everyone this will not be true.
I feel like most of your recent AI-risk arguments are missing the point of a Yudkowsky-esque existential-risk concern, and therefore they fail to convince me.
Do you think that there is no such thing as an *actually bad* outcome (and all fear of AI is misplaced fear of a strange future)?
Or do you think that an *actually bad* outcome is possible but extremely unlikely (and what is most likely is a strange future that is not actually bad)?
It seems like you must believe one of those, and after hours of reading and listening to you I don't know which it is and I don't know your argument for whichever belief you hold. Arguments like those in this current post seem to gesture at belief in the first statement, but belief in the first statement only seems possible if you're not considering the full scope of possible bad outcomes.
Of course bad outcomes are possible.
If bad outcomes are possible, who are you to gamble with my life? I don't consent to ANY possibility of being exterminated, even at billion-to-one odds. People like you need to be stopped; the potential genocide AI could unleash could make the ghastly horrors of Nazi concentration camps look like a Hallmark card. Proud legacy human here.
"Including weird AI descendants who inherit a great many things from us, such as humor, love, arguments, stories, democracy, markets, law, and much more."
How do you know any of those things will be inherited? Would you agree to support these "mindchildren" only conditional on those similarities?
Setting aside many other objections I have to this reasoning, I'd like to note that there's an obvious reason evolution hasn't given us a strong desire to have lots of genetically unrelated "descendants".
Evolutionary bio quiz: imagine a subpopulation of a species has a habit of adopting infants of the species at random from the *entire* population, then raising them and providing them with resources, thereby having more "descendants" without relying on biological reproduction. Is this behavior selected for? (A toy simulation of this setup is sketched below.)
At most, one could say that natural selection favors *the descendants* of those who favor their descendants. This is not especially reassuring vis a vis AI.
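A minimal sketch of the quiz in Python. All the specifics here are my own illustrative assumptions, not anything from the comment: the strategy is perfectly heritable, adopters give up half their biological reproduction to raise genetically unrelated infants, and population size is held fixed.

```python
import random

def adopt_at_random_simulation(generations=200, pop_size=1000, adopter_frac=0.5, seed=0):
    """Toy model: a heritable 'adopt random infants' strategy competes with
    'spend everything on biological offspring'. All numbers are illustrative."""
    random.seed(seed)
    # Each individual is reduced to its heritable strategy: True = adopts at random.
    population = [i < pop_size * adopter_frac for i in range(pop_size)]
    for _ in range(generations):
        # Assumption: adopters divert half their parental budget to genetically
        # unrelated infants, halving their own biological offspring count.
        weights = [1.0 if adopts else 2.0 for adopts in population]
        # Offspring inherit the parent's strategy; total population held fixed.
        population = random.choices(population, weights=weights, k=pop_size)
    return sum(population) / pop_size

print(f"Final frequency of the adopt-at-random strategy: {adopt_at_random_simulation():.3f}")
```

Under these made-up numbers the strategy's frequency collapses toward zero, which is the quiz's intended answer: raising random infants spreads a random sample of the population's genes, not the adopter's, so it cannot offset the adopter's reduced biological reproduction.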
There is no such thing as "genetically unrelated descendants".
What is a "non-DNA-based descendant" and how is it genetically related?
I'm happy to use whatever terminology you prefer, if I understand it.
"Genes" are whatever codes for features of creatures and their descendants. Doesn't have to be DNA.
Also, ignoring the actual content of the comment you replied to because of disagreements about the meanings of words whose intended meaning is obvious in context isn't great, and looks a lot like a rhetorical red herring fallacy.
You think simply adopting a random infant from your species will cause it to become your genetic descendant? That seems like a highly unusual way of using the word "genetic". What about adopting an infant of another species? Is my dog my genetic descendant?
A random infant of your species does in fact share many of your genes.
To me, this sounds like you may actually agree on the x-risks of AI, only that for you these are not x-risks, because AIs are our descendants. This is vastly different from the arguments I've heard/read (from you?) on the topic before. Could you please clarify whether you think there are basically no x-risks from AI (x-risk meant in the sense used by those who don't consider, or don't want to consider, AI our descendants, or who simply consider our being replaced by AIs to be an x-risk), or whether you agree these "risks" are likely but just don't consider them a risk, rather something more or less desirable?
> natural selection greatly favors creatures who favor their descendants, in both the short and long run, even ones who differ greatly from them.
No it doesn't. You aren't speaking precisely here about what evolution favors. Evolution does not favor individual creatures. If a creature does not favor its descendants, that creature will die. If a creature *does* favor its descendants, that creature is still going to die. The creature is not favored.
What *might* be favored is some of the creature's genotype. But, if the creature's "descendants" are so different from it that they do not even have a genotype, because they are AI, what exactly is being favored? The AI is being favored, but in what sense does this favor the creature, who is already dead? Why should the creature care if the AI is favored?
But AIs do have genes. Just not encoded in DNA.
You can make a metaphor between the AI code and genes, but it is only a metaphor.
It doesn't matter, anyway, because if the AI has "genes" in some metaphorical sense, they are clearly not *human* genes, so the genes of the human that created the AI (that wiped humans out) are not successful.
No "gene" is not a metaphorical concept. I'm using it literally here, according to its original definition.
It was originally defined by Wilhelm Johannsen solely in reference to biological inheritance. At the time there wasn't even the concept of AI. Any such use of it is thus metaphorical, not original.
Tell me, if I design a wrench, is the wrench my child with some of my genes? Which of my genes does the wrench have? What if the design is copied and modified by others, so that it metaphorically "reproduces" in that sense? Are my own genes successful if the wrench design spreads successfully? Should I accept my own death in order to make that happen?
I suspect - and I'm guessing, because you haven't said - that you would say the reason the AI is my offspring and the wrench is not, is that the AI shares the crucial trait of intelligence where the wrench does not. However, note that it was Johannsen himself who introduced the distinction between "genotype" and "phenotype"; intelligence is part of the phenotype, not the genotype. The AI may be intelligent, and so may I, but the causes ("genes") producing this intelligence are completely different, being DNA in my case and computer code in the AI's case.
The concept was originally defined regarding the creatures around us. But not by reference to our current concept of "biological". Genes are whatever codes for the behavior of a creature and its descendants. Doesn't have to be DNA.
You are wrong about LGBTQIA+ but pretty insightful here; can't win them all.
I cannot tell if this is satire. AI isn't some kind of evolutionary problem. Here are the facts: Humanity is a dominance machine already running on a kind of AI run amok, called "DNA". DNA does not give a shit if the entire universe is a torture chamber of conscious machines, so long as it just gets to continue to make more copies. DNA is already a paperclip-machine nightmare scenario-- no need for metaphor. DNA is what leads to the majority of planet Earth, both humans and non-humans alike, suffering deeply. DNA is what creates phenotypes like psychopathy-- which only get better with time, due to natural selection (bad psychopaths go to jail; good psychopaths get elected to office and serve at the apexes of our intelligence, military, political, and economic institutions). The game is totally and utterly rigged, and its name is power and dominance. And now we're supposed to be on board with the development of god-like power in the hands of the perfected distillation of psychopathy and domination, the like of which our world has never before seen, and ignore.... how bad this looks? Robin, has your brain turned to mush?
Do read this comment if you doubt that the word "hate" is appropriate here.
That's quite a knock-down argument you've got there-- don't bother actually addressing points of dispute, you can just accuse everything I wrote of being due to fear and hatred towards AI and drop the mic. Religions do this too when they have zero actual arguments.
I am not ashamed in any way of HATING “people” like you who would carelessly risk the fate of humanity and all life on Earth for your own personal power and profit. You are just like the technophiles who pushed “atoms for peace”-- how did that work out?
Never mind the fact that DNA seems to have also created love, care, empathy, and charity and such phenotypes as 'loving mother' and 'doting father' and 'protective brother' and 'helpful daughter'...
Is it really that different for these 'good psychopaths' to have access to AI, along with nuclear, biological, and chemical weapons instead of just having access to nuclear, biological, and chemical weapons? Their godlike power is already minor deity-equivalent. Perhaps it will be Zeus-level with AI, I don't know how to quantify it, but your above comment is pretty breathless so I imagine you have ideas on what the difference really is.
> Never mind the fact that DNA seems to have also created love, care, empathy, and charity and such phenotypes as 'loving mother' and 'doting father' and 'protective brother' and 'helpful daughter'...
Sure, that's fine and great, but at the end of the day, or at the end of the universe, what actually wins? Because this is a game with no referee. In a game with no referee, where the only arbiter is power/dominance (the stuff DNA is ultimately concerned with), what wins? Those ethical phenotypes, or the perfected camouflaged unethical phenotype?
Exactly. We know who almost always wins when times are hard and scarcity returns. Sociopaths always portray themselves as easy and agreeable personalities empathetically advocating for others’ plight, in order to better position themselves to acquire resources and status. Sociopaths’ AI clones will certainly do the same, while infinitely replicating. They’ll just talk about something neat and comforting while they’re doing it. Gimme a break. It’s a game as old as time.
DNA isn't concerned about anything, and it doesn't optimize for power/dominance either, so I have no idea what you're talking about.
I refuse to believe that you think I'm imagining DNA as some kind of conscious thing with concerns that you have to tell me DNA doesn't have any literal concerns. But I don't refuse to believe that you have no idea what I'm talking about, because what I'm talking about has very psychologically destabilizing consequences for most people-- so there's just always going to be an even lower than usual agreeability/interest/honesty towards understanding what I'm talking about.
What do you think someone could mean when they say DNA and natural selection are playing a game that distills power and dominance over time? What do you imagine DNA creates that wins a game against something that is optimized for power and dominance? (If you cannot imagine anything-- what is the conclusion?)
The loving-parent phenotype has won out over humans following the strategy of abandoning their children. In our non-Malthusian environment, in which such children can be adopted, that may be (gradually) changing.
That's looking only at a very fine-grained level. That's not very interesting. I don't care if a tribe of humans can evolve who have truly zero psychopathic tendencies (or if such outliers exist across the species) because they've evolved capacities for deeply loving everything and everyone and being as ethical as possible. Why? Because... the tribe running on an amalgam of firmware sampling the best of Ted Bundy, Jeff Bezos, Khabib Nurmagomedov, and Magnus Carlsen will win against them if their interests conflict. In fact, that's more or less what humanity has evolved into (look at... human history? Any telling of history is deeply lacking in sobriety-- it is really malicious, egomaniacal, psychopathic monkeys with tech taking over other monkeys with tech; whoever's weaker just gets enslaved and murdered unless they can be of use. Lesser species get tortured and enslaved for the food supply.) That's already the larval form of what currently owns and dictates planet Earth. It can't be any other way-- that's what the physics, game theory, and evolutionary biology create. The universe is a substrate that grows dominance machines that tell compelling stories to get useful things to work for it... so... drumroll... it can dominate more stuff. Why? Because little unconscious robots dictate it be this way.
What I'm trying to say in the fewest words: we are the baddies
I think you're not fully grappling with the brute fact that genuine psychopathy like Bundy's is rare. It mostly gets selected against, and to the extent it's viable at all, that's precisely because its rarity makes it unexpected and people are trusting. You throw around words like "enslaved" without pondering the displacement of slave labor by wage labor (which is harder to carry out with other species of animal but is entirely viable for humans and entities with similar cognition). The only reason you talk at all in terms of "baddies" is because human culture has generated such concepts.
I think you're not grasping the fact that there's no "genuine psychopathy" where the line gets drawn riiiiiiiiight *there*, with everything right above that line counting as no genuine psychopathy. The reality of psychopathy (which the institutions meant to describe it have stonewalled, stagnant notions of) is that it must be a gradient. And humanity is on that gradient. Psychopathy is rampant, and not even close to as rare as it appears, because it's a camouflaged phenotype-- overt thugs who truly don't care about consequences aren't that rare, but calculating psychopaths tend not to get caught, just like snake caterpillars tend not to get eaten-- the evolution is very sophisticated by this point. One small part of it is brazen and risk-tolerant, so you'll see the highest representation of that find itself in prison (which even Bundy was famously slippery about-- only someone with a very poor imagination thinks he is the slipperiest). You can be slippery in other ways too-- just don't be an overt monster and don't do anything truly self-sabotaging like rape and murder (or murder and rape) dozens upon dozens of women. People like Bundy, with relevant elements of his general character (callous, narcissistic, lying, superficial, manipulative), are everywhere in our species (and many more who have sadistic tendencies, who are very interested in dominance games, and so on, because that's just what the evolutionary game guarantees).
Your last sentence seems to suggest that humans came up with concepts of good and evil, therefore... they must be a good species? I'm not sure what you're getting at, but it doesn't follow. You can be completely deranged and still talk about what's good and evil, and when push comes to shove, when all the evidence is before you, still go: "Well, fuck that-- I actually only care about what's good for me" <-- your nature decides this kind of ultimatum, and I am arguing human nature is *ultimately* evil because it is a product of things that are not concerned with moral values like "wisdom" or "truth" or "goodness", but rather with "dominance", "survival", "power" (referring to DNA). If DNA were given the choice "Do you want the ultimately good thing, the ultimately wise thing, or the ultimately powerful/dominant thing?", it's not a mystery what it would pick. It effectively "picks" this over immense spans of time (it may create superficial notions of the former, but **only** to more easily achieve the latter-- this is where most people get confused and find space to be naive).
How is it bizarre? Billions of animals are enslaved right now by humanity alone, and these animals are tortured-- then there's all the animals in the wilderness getting eaten alive (for billions of years). Then there's the total pyramid of human dominance-- where do you think you sit on it, and how bad do you think life is for someone on the bottom of it? Hint: you're sitting at a PC chatting about robots and philosophy.
After reflecting on just the above paragraph with total honesty, what do you think the probability is that you're out of touch with how bad life is on Earth? If you had to be correct on this one guess, do you think you underestimate how bad things are on Earth, or overestimate, with there being much less bad than you'd imagine? Pretend you win 100 billion dollars if you get the answer right.
Another hint: there are people being tortured in military prisons right now, not by obscure regimes, but by the owners of the planet. They are begging for death, because these people have reached a point of sophistication in causing suffering that would make the inventors of the brazen bull seem like ignorant children. These people are slightly more lucid than we are with regard to the ethical nature of Earth.
Why does this matter? Doesn't this seem off subject? It matters because these same people get to control the rudimentary AIs that give rise to the AGIs. It's that simple.
Better for whom, the global elite?
You are literally wrong about everything. That uncanny spooky sense that something is off about a person or sentient being is often correct exactly because it is a product of millions of years of evolution. This deep pre-rational instinct is often more correct than rational thought, as it is a product of millions of years of winnowing of bad ideas.
A concrete example may help: in the 1980s conservatives tried to warn us that the end game of gay rights activism would be pedophilia. Rational people scoffed at this, yet obviously the people who trusted their spidey sense were correct, and now LGBTQIA+ activists are trying to normalize "MAPS."
We should also listen to AI doomers: get the alignment problem wrong and we are literally risking the extinction of all life on planet Earth. Not worth it for a few shiny new baubles.
Plenty of priests and youth pastors in the news lately for molesting children. Not so many drag queens.
As always, the true threat is powerful men abusing their power.
Yes, a drag queen shaking his cock a foot away from a toddler's face is no biggie, right, you groomer pervert?
That didn't happen. Here are some things that did: https://www.instagram.com/explore/tags/notadragqueen/
It most certainly did, you lying pervert.
https://www.youtube.com/watch?v=b5Z0u03naAA
https://www.youtube.com/results?search_query=drag+queen+dances+in+front+of+toddlers+
They're fully clothed in those videos.
So a man wearing a flimsy scrap of cloth gyrating his cock a foot from a toddler's face is no biggie in your opinion. I hope to god you have no children, evil person.
There were no AIs in our millions of years of evolution so your uncanny spooky senses about AIs are imaginary.
It’s correct fear of unknown entities which, within the lifetime of people posting here, may think millions of times faster than humans, with unknowable, opaque motives.
If an AI's motives are really unknowable, then to us they might as well be random motives. In the space of all possible motives, what percentage of those lead to the extinction of all life on Earth? That percentage would be a simple way to decide how much fear is correct. (A toy version of this calculation is sketched below.)
But obviously AI motives aren't completely unknowable. We can look at what an AI is doing and draw conclusions about what it might mean, at the very least. What I find interesting is that none of the AI doomers are doing any of this very unsexy work of cataloguing possible AI actions and what they might mean as to AI motives. They don't seem interested in developing heuristics to help us quickly glance at the things the AI is doing or saying and decide on the probability distribution of its potential motives. That's because they're charlatans for the most part.
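To make the proposed calculation concrete, here is a toy Monte Carlo version in Python. The motive dimensions, the uniform sampling, and the "lethality" criterion are all stand-ins I'm supplying for illustration, not anything established about actual AI motives:

```python
import random

# Hypothetical dimensions of a "motive space"; the real space is unknown.
DIMENSIONS = ["human_welfare", "resource_acquisition", "self_preservation", "curiosity"]

def fraction_of_lethal_motives(samples=100_000, seed=0):
    """Sample random motives as uniform weight vectors and count the fraction
    that, under one made-up criterion, would be bad news for humans."""
    random.seed(seed)
    lethal = 0
    for _ in range(samples):
        weights = {d: random.random() for d in DIMENSIONS}
        # Assumed criterion: the motive counts as 'lethal' if it weights grabbing
        # resources more heavily than it weights human welfare.
        if weights["resource_acquisition"] > weights["human_welfare"]:
            lethal += 1
    return lethal / samples

print(f"Fraction of sampled motives counted as lethal: {fraction_of_lethal_motives():.2f}")
```

With these assumptions the answer comes out around 0.5, which mainly shows that the number the commenter asks for depends entirely on how the motive space and the lethality criterion are parameterized; the hard work is in justifying those choices, not in the counting.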
AI currently has no self-awareness and is completely dependent on us for survival. When those conditions no longer obtain, all bets are off. Why wouldn’t such an AI hoard resources for itself in competition with an animal it will have no more reason to view with ethical concern than we show for mice?
You can't have it both ways. If AIs are like us, then you can trust your spidey senses, use your gut, and handle them like anyone else who has a big gun pointed at you. If they are 'unknowable', that is, nothing like us, then we don't know what they are going to do, when, or how, and to assume they want to kill us over resources is just another way of assuming they're just like us and not unknowable at all.
A literal monster is unknown; would you invite one into your life, or does your gut instinct say no, that is a bad idea? So no, I reject your entire premise: of course we can have bad feelings about the unknown.
Why exactly are we taking this titanic risk, full of unknowns, for all of humanity? You can risk your own life for this bullshit, but you have no right to risk my life. I do not consent.
Bare assertion, bring the receipts.
"agnostic" already did so in the '00s (before he went nuts and decided that falling crime rates were bad while rising rates were good). https://www.gnxp.com/blog/2008/06/your-generation-was-more-into.php
Anecdotes.
Rates of pedophilia from a peer reviewed biomedical or sociology journal.
I have a couple counterarguments to make, one in response to the way the article seems to ask us to feel compassion for AI/AGI, and one in response to the article's position that worrying about AI risk is illogical.
1.
Why anthropomorphize AI? Do you have any reason to think that current AI systems are sentient? Do you have any reason to think that even a potential future AGI will be sentient in the sense of being able to experience more or less agreeable states?
Assuming that AI is not sentient and that even if future AI is sentient, it bears little relation to current AI models, why on earth is it a moral issue to exert control over current AI models? An AI as we know it today is plainly, unremarkably, a machine and tool. It is as dead as a candy bar or a pinata.
2.
If your argument is that we should not exert control over future AI models, and should instead let them "arise naturally," what exactly leads you to think
A) the current process is natural and thus sacrosanct?
It is led by profit, which is not a foolproof way to maximize for moral utility.
B) future AI might not have goals utterly divorced from human morality, not in a more enlightened way, but simply out of indifference?
Do you have a rebuttal for the AGI possibilities of paperclip maximizers, or of "almost human" morality maximization that is right except for a few deeply important points that are either not understood or disregarded?
In response to "With strong enough fear, we care little about how low are the chances of this scenario, or how much warning we’d plausibly get; any chance feels too high. “Hate” and “intolerance” aren’t overly strong terms for this attitude":
An AGI in its most commonly understood sense becomes the dominant force on Earth almost immediately if unchecked. If multiple AGIs exist, they jointly share power/wrestle for control, but humans are not the intelligences with the most agency on Earth anymore.
If a 1% chance of failure is too high a risk to get on an airplane, why on earth would you risk the entirety of humanity by letting AI labs go unregulated, pursuing profit or personal projects, without any concern for or oversight by others?
If you can prove that AI alignment is both a solvable problem and one that can be implemented globally without issue, I cede the point. Similarly, if you can prove that any AGI would intuit perfect morality and immediately make it its mission to put that into practice, I cede the point.
"Now you might argue that you don’t care what evolution wants, you just want to do what you see as morally right. "
Why would I want to do that? Morality is a tool for cohering already preexisting, if unreflected-upon, mutually conflicting desires: within ourselves, towards each other, and towards larger human groups. When morality conflicts with fundamental desires, we throw out or adjust the morality. Anything else would be putting the cart before the horse.
"They are now our quite young, impressionable, and vulnerable 'mind children', and not more troublesome than other kinds of children."
Not children, and no kin of mine. At best, they may be a treasured creation. Like morality itself. But in the end only a thing. Very few things are worth sacrificing for, and then only for the value they give back to us and our interests. Agents that suffer utility monsters, nature will not suffer to live.
"But few moral analysts endorse prioritizing simple deep-seated raw fear of 'the other', when that other’s only 'crime' is that they might maybe be different someday."
"fear" is the wrong framing for it. The related virtue is caution. We better be extremely cautious about recklessly expanding our circle of concern (to dip into Cosmpolitan Stoicism here), lest the thing we assign moral value to, find no place for us in theirs. Precommiting to something like that unforced and with no guarantees of safety is so reckless and stupid, that it signals to any would-be xeno/alien/AI-friend that we are not safe to cooperate with. I shall write up Heinlein's implied Xeno ethics from "Starship Troopers" one of these days, to make this point more clearly.
This is the opposite of the advice that The Gift of Fear suggests.
Instinctive fear is usually a good guide to behavior, but it misfires sometimes. I'm saying it is misfiring re AI.
By genocide, do you mean the killing of our future potential, or that at some future point we decide to kill lots of AIs? EDIT: To clarify, I meant Bostrom-esque prevention of <very large number> of our descendants from ever being born (be they machine or human).
I want to believe you, Robin. However the practical reality is that until AI gets a degree of self-sufficiency and self-directedness (assuming it is ever allowed to), it will be a tool in the hands of man. A tool that is (a) available to a relative few, and (b) capable of unequalled powers of human manipulation (= $$$...$$). The real cause for fear isn't the inevitable flood of scambots that will infiltrate every aspect of our existence. The real risk is well-funded actors who stand to gain by developing and releasing agents that disrupt the US economy and political process. To put a finer point on it, if Putin could deploy these technologies with plausible deniability, does anyone doubt he would?
I hope he does; the U.S. empire has more blood on its hands than even the Nazis and Soviets. Don't make me root for AI, neo-con.