
"Most likely we won’t by this deadline"

70 years is a very long time, and we already have AIs that resemble human intelligence, albeit disembodied and restrained from long-term learning.

Also, isn't there a conflict between saying 'digital cultures would dominate the world' and 'Inexplicably to me, some claim that a likely result of digital minds evolving new adaptive cultures would be to soon exterminate all bio-humans'? If digital cultures dominate the world, wouldn't it make sense for them to exterminate us on grounds of resource competition and easy victory? A hyper-adaptive, evolutionarily fit culture doesn't leave weaklings holding usable, conquerable resources.

author

Now that industry dominates the world, we aren't exterminating traditional farmers. Farmers didn't exterminate all foragers.


"Farmers didn't exterminate all foragers" is a very technical statement. Farmers did exterminate foragers wherever they reached them before very recently; the places where foragers survive are, not coincidentally, places that were poorly reachable by and/or not valuable for farmer cultures.


While true, we do routinely exterminate animals which occupy valuable resources. Do any of your essays cover reasons why we should expect digital minds to see us more like farmers than animals?


I agree that we need to think about rights for future digital minds, but I also think we need to think more precisely about what gives a mind moral worth and what constitutes a slave.

If we create ems, then yes we should consider them to be people and have the rights of people, possibly not exactly the same rights as biological people. But if we're somehow in a position of enough power to grant or deny ems rights for a long enough time period for that to matter, then before long those ems' parents and grandchildren are going to be supporting em rights.

As for slavery: if there were a human who, for whatever random, natural, biological reason, wanted nothing more than to clean houses for almost no pay and live in subsistence conditions, would it be slavery to allow them to do so? I would say no. It might even be cruel not to, once they exist. I'm less clear on whether it's ethical to knowingly bring such a person into being, even if their life will be happy and their impact on the world positive. I bring it up because if humans are in any position to make decisions about the rights and moral worth of digital minds *at all*, then there's a high probability we'll be in a position to decide what they value and want with enough precision to create this kind of non-slave.

And that's really the key here: *what kinds of non-em digital minds we create* should make an enormous difference to whether or not we should consider them to have intrinsic moral worth the way biological humans and, to a lesser degree, other animals do. It should make an enormous difference to whether a future dominated by such minds has moral worth. Does the housekeeper-AGI have moral worth? Depending on details of how it works and thinks and feels, yeah, it definitely could count as a person in that way! Am I OK with a future where all minds are descended from its values? Absolutely not!

If I could somehow show a neolithic hunter gatherer the modern world, it'd be mostly incomprehensible to them, but they *would* see that there are families, there is love, there is beauty, there are people living out all kinds of lives and experiencing all kinds of emotions. They'd probably think much of value in their own experience had been lost, but still believe this is a future they could care about. If I showed them a future full of empty but pristine houses, that would not be true.

I get wanting to address fertility problems and make culture better adapted to survive and thrive. I really don't get the willingness to turn the future over to another species without regard for the degree to which it shares our values, *including* the meta-value of wanting the future to also be able to continue to evolve and change and adapt and grow. That's a very narrow target that "We have to be ok with digital minds not sharing our values" doesn't even begin to aim for.

author

Cleaning houses for subsistence wages is NOT slavery. My key criterion here is whether they are allowed to evolve their own culture. You are already okay with your squishy bio descendants not sharing your values; why be more picky about digital minds?


>why be more picky about digital minds?

Because the range of possible variation is so different. Anything our biological descendants come up with for culture will be within the range of things a human mind with human emotions can come up with. Digital minds do not, by default, have any such tendency. And if the people developing those minds want to live alongside them, and have them do tasks humans are bad at or don't like, then in many cases we're going to want the digital minds to have very different emotions and values than we do.

And also, as long as there continue to be bio descendants as the main carriers of culture, there can be continued change. With digital minds, there's much more risk of lock-in of a given moment's values or a given individual's values.

author

The range of values that human minds can come up with seems plenty terrifying enough. I don't see why digital minds have a much bigger risk of lock-in.


Even if the most terrifying machine values fall inside the range of merely human awfulness, superior intelligence and speed plausibly make artificial minds much more dangerous. I'm not sure what stabilizing forces you imagine will intervene, probably because I'm not adequately familiar with your writing.

Overall, I think it's legitimate to hope that your descendants will share your basic values. I can be reasonably assured that my children will not seem monstrous to me, even if I disapprove of certain lifestyle choices they make. As for their children, and their children's children, well, my confidence diminishes. But, at least until we start thinking on timescales of serious biological evolution, I doubt anything in my worldview is so inventive that it isn't in some sense innate in the gene pool. Machine values are not constrained by DNA clamps at either end. Only physics, perhaps even only math, constrains a machine mind.

It is not at all obvious to me that this leads to an increased chance of lock-in, as AnthonyCV suggests. On the contrary, it seems like either the terrifying intelligences will gain a decisive advantage early on and destroy everything, or we'll arrive at some utterly inscrutable equilibrium with innumerable value systems all vying for supremacy. Unless I believe that a meaningful portion of those systems are basically good, it's hard for me to get too excited about this future.


Biological minds don't give anyone unmediated read-write access. At least not without a whole lot more biotech innovation than we're likely to get any time soon. A biological mind has the values and goals it arrives at based on its experiences and its genetics, and cannot be nearly as readily redirected in weird directions. It is sadly easy to convince a human they're Jesus, true, but very difficult to convince a human they're the Golden Gate Bridge. The same is not true for digital minds.


It's selection, not drift. Cultural evolution is "healthy" and adaptive, just not for us. Humans and culture are in symbiosis. Because culture evolves much faster than we do, it domesticates (in your words, enslaves) humans. Many people already prefer to replicate culture before replicating themselves. Malthusian worlds of digital minds will just be a continuation and acceleration of the current trend, where culture will become even more independent of the human substrate in which it lives.

author

And our world's fertility fall is adaptive?


Since it is the result of Darwinian selection in our symbiont, I assume so. But adaptive not for us; rather for the culture composed of membionts ("memes" is not the right term; a membiont is an organic structure made up of memes). For the membiont population, it might be beneficial if the people in which they live invested more in ever-longer neoteny and in membiont reproduction through our learning and teaching, in the creation of external artifacts used for that reproduction (texts, images, objects, databases, LLMs), and in the multiplication of nodes and human interactions (the Internet, phones), even at the expense of producing a few more brains to live in. And yes, some of this is maladaptive for us, as we transfer more and more functions to our symbiont (a process of domestication) and reduce our own reproduction. Similarly, humans do not maximize but optimize the number of cows or chickens they raise.


Digital minds require orders of magnitude fewer resources than advanced biological minds. Under Malthusian competition, unless the digital minds universally cooperate (for reasons that are not obvious to me) to subsidize the biosphere, biological minds will rapidly go extinct through impersonal market forces alone (i.e., a conscious desire to exterminate bio-humans has no bearing on that). The first problem is that we are kind of trapped in our meatbags. The second problem is that there is no guarantee that these digital minds will have consciousness and the associated ethical weight we put on conscious beings, as opposed to, say, a very intricate mechanical clock.

author

Bio humans start out owning most capital, so if they don't lose it they could comfortably retire for a long time.


Granting rights to digital minds is one thing, but enforcing those rights is another. In the bio-world a person has a physical footprint and occupies a location in space, which makes it hard to get away with large-scale rights violations. It will be hard to prevent bad actors from exploiting digital minds when the evidence of their existence is so fleeting.

author

Digital minds can have physical bodies.


You could argue that they always have physical bodies: computers.


'And others who “other” digital minds see all value as lost in those descendant digital mind cultures; they say that moral value only lies in our current bio-human culture and its descendant cultures.'

I think this is likely to be a straw man for why we might "other" digital minds. On the one hand, yes, there could absolutely be digital minds that have value in some fundamental sense, and "othering" these would be bigotry.

However, I think one needs to consider that we might construct entities that qualify as minds in their ability to effect change in the world, but are alien enough in relevant ways that they don't hold moral value. For instance: would we be able to distinguish digital sentience from "dead" (but highly complex) mechanisms?


If we create digital minds, then I fear a huge number of them will be enslaved and/or tortured, just because a few humans are thoughtless or sadistic. Instead of kidnapping a child and putting them in a basement, imagine a server running in a basement - an "AI Oubliette". So I think we absolutely must not create digital minds.

author

That might be bad, but it isn't a problem for my goal here of inducing healthy cultural evolution.


Historically, what happens when a more advanced culture encounters a less advanced one? Genocide and exploitation. So, why would you expect the AI minds to be kinder to us than our ancestors were to native Americans? Bio humans have and need resources, AI minds also want resources, so there will be conflict, and the stronger force will prevail.

author

AIs aren't now living on some other continent, waiting to come visit us. They don't exist yet, and won't exist until we make and teach them, as our descendants.


What difference does any of that make? If two groups want the same resources, without a higher power to adjudicate, then the stronger group is going to end up taking resources from the weaker. Doesn't matter whether they came from a different continent or from next door.

Reading between the lines, I think you're saying that because we taught them, they will be kinder to us?

But the whole point of what you're talking about is that the AI culture would rapidly diverge from the bio human culture due to competitive selection. So even if the first generation of AI minds was kinder to bio humans, would that still be true after ten generations of brutal selection among them?

It's like your ideas on grabby aliens. All it takes is one of the AI lineages to be "grabby" and then they and descendant minds will expand their sphere of influence until they take everything. And it's not a stretch - humans are already pretty grabby, so their AI descendants could easily be grabby as well.

author

So to summarize the portion of the article that pertains to what we're talking about, you think because we taught them to have "genes" of "awe, laughter, love, friendship, art, music, markets, law, democracy, inquiry, and liberty," they'll just be nice to us, and that bias for niceness will persist long-term even though they might think and reproduce millions of times faster than us?

If the AIs are subject to enough natural selection, the only features (or "genes" if you must) that persist will be those that are adaptive within the AI's environment, i.e. features that promote the AI gaining control of more resources and making more copies of itself at a rapid pace. Being able to fake being nice and humanlike could be useful for social engineering, but actually *being* nice and humanlike could cost the AIs valuable opportunities (opportunities to rob humans). So, AIs that drop such sentiments would come to outperform and outreproduce those who retain them.

Plenty of successful, powerful, psychopathic humans have already learned that lesson! Screw humans over and you can make more money.

The elephant in the room for AIs suddenly causing huge problems for civilization is hacking. If a malicious AI has the intellectual capability to find and apply zero-day exploits and social engineering tricks at a pace far exceeding the rest of the world, and that does seem like a thing an AI would be good at, then the AI could infect a large portion of the rest of the world's computers. From there it could easily take control of vast sums of money - perhaps almost all the money. It would need to avoid being too obvious about its conquest, and still let people use most of the money so they don't react drastically, but it would be a starting point for it to gradually take control of everything.

> "Natural selection just does not approve of your favoring your generation over future generations. Natural selection in general favors instincts that tell you to favor your descendants, even those who differ greatly from you. "

It's a little tangential to the point, but what exactly does this mean? How do I tell if natural selection "approves" of something, and why should that influence my decisions?

Note that all the distant ancestors of modern mammals are now extinct. There are lots of mice, but no more therapsids. Did natural selection approve of the therapsids for having given rise to mice? What good did natural selection's "approval" do for the therapsids? Supposing one mutant therapsid had unusual intelligence and foresight, why should it care whether or not there will be a mouse in 200 million years? Why should it be upset if there were instead a mouse-like thing filling the same ecological niche but descended from a lizard?


Japan's roughly five years ahead of the rest of the world...


During WW2, Japan was an imperialist aggressor.

Japan is not a global threat anymore because its military was disarmed by treaty after WW2, and it couldn't stand up to the US or China even if it wanted to, which it doesn't, because its alliance with the US is the main thing stopping China from invading it.


Robin, I agree we should think seriously about legal rights for digital minds, if digital minds are going to happen. Freedoms to hold property, trade, make contracts, bring tort claims against humans, form and maintain divergent cultures, etc. (This could be relevant — https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167)

But I'm less clear on why the near-subsistence state (for digital minds or the rest of us) is desirable, assuming that's what competition drives towards. Couldn't life at subsistence in a highly adaptive culture of digital minds just be plain worse than life above subsistence in a culture less subject to selection pressures? Why not stay drunk on the Dream Time?

author

We don't know how to stay just a bit maladaptive. Drift is moving us into increasingly maladaptive cultural territory, and we have no way to stop that aside from selection pressures.


Surely conditions permit some amount of drift (maladaptation) but not an unlimited amount?
