Discussion about this post

Limitless Horizons

"Most likely we won’t by this deadline"

Seventy years is a very long time, and we already have AIs today that resemble human intelligence, albeit restrained from long-term learning and lacking embodiment.

Also, isn't there a conflict between saying 'digital cultures would dominate the world' and 'Inexplicably to me, some claim that a likely result of digital minds evolving new adaptive cultures would be to soon exterminate all bio-humans'? If digital cultures dominate the world, wouldn't it make sense for them to exterminate us on grounds of resource competition and easy victory? A hyper-adaptive, evolutionarily fit culture doesn't leave weaklings holding usable, conquerable resources.

AnthonyCV

I agree that we need to think about rights for future digital minds, but I also think we need to think more precisely about what gives a mind moral worth and what constitutes a slave.

If we create ems, then yes, we should consider them people with the rights of people, though possibly not exactly the same rights as biological people. But if we somehow hold enough power to grant or deny ems rights for long enough that it matters, then before long those ems' parents and grandchildren are going to be supporting em rights.

As for slavery: if there were a human who, for whatever random, natural, biological reason, wanted nothing more than to clean houses for almost no pay and live in subsistence conditions, would it be slavery to allow them to do so? I would say no. It might even be cruel not to, once they exist. I'm less clear on whether it's ethical to knowingly bring such a person into being, even if their life will be happy and their impact on the world positive. I bring it up because if humans are in any position to make decisions about the rights and moral worth of digital minds *at all*, then there's a high probability we'll be in a position to decide what they value and want with enough precision to create this kind of non-slave.

And that's really the key here: *what kinds of non-em digital minds we create* should make an enormous difference to whether or not we should consider them to have intrinsic moral worth the way biological humans and, to a lesser degree, other animals do. It should make an enormous difference to whether a future dominated by such minds has moral worth. Does the housekeeper-AGI have moral worth? Depending on details of how it works and thinks and feels, yeah, it definitely could count as a person in that way! Am I OK with a future where all minds are descended from its values? Absolutely not!

If I could somehow show a Neolithic hunter-gatherer the modern world, it'd be mostly incomprehensible to them, but they *would* see that there are families, there is love, there is beauty, there are people living out all kinds of lives and experiencing all kinds of emotions. They'd probably think much of value in their own experience had been lost, but still believe this is a future they could care about. If I showed them a future full of empty but pristine houses, that would not be true.

I get wanting to address fertility problems and make culture better adapted to survive and thrive. I really don't get the willingness to turn the future over to another species without regard for the degree to which it shares our values, *including* the meta-value of wanting the future to also be able to continue to evolve and change and adapt and grow. That's a very narrow target that "We have to be OK with digital minds not sharing our values" doesn't even begin to aim for.
