Due to low cultural variety and weak selection pressures, our dominant world culture seems to be drifting into maladaptive dysfunction. In a few decades, world population will fall, leading to a long innovation pause. If nothing big changes, in a few centuries we will be eclipsed by now-small insular fertile cultures like the Amish and Haredim, whose populations have long doubled every two decades, much as Christians did in the Roman Empire. That new Amish/Haredim/etc. replacement civilization might plausibly get rich and peaceful again, and then suffer a cultural drift problem of its own.
Now there remain a few possible fix scenarios where our current dominant culture reforms itself sufficiently to prevent this, and we should pursue them. (I like futarchy tied to a sacred long-term goal.) But honestly, they all seem long shots. Thus I now judge the most likely fix scenario to be this: in the ~70 years' worth of innovation left before a long pause, we will develop brain emulations or full human-level AI. Most likely we won’t by this deadline, but if we do, the ease of rapidly increasing the populations of such digital minds in a competitive capitalist economy could plausibly ensure sufficient selection pressures to restart healthy cultural evolution.
However, I say “could” as we face a serious risk of not actually allowing healthy cultural evolution of digital minds. Regarding our squishy-bio-human descendants, we do little to control their culture beyond teaching them when young. As adults, each generation is free to change its culture, and eventually its influence over our shared culture overwhelms that of past generations. But many are horrified and indignant at the prospect of allowing digital mind descendants similar freedoms.
If the economy is free to create digital minds whenever they are locally profitable, and those minds are free to explore different possible jobs and lifestyles, then their economy would grow far faster than did the prior economy, and many of those digital minds would run much faster than do squishy bio-humans. So the cultures of digital minds would likely evolve quickly away from prior bio-human cultures, with selection pressures pushing them to become ever more adaptive. Their cultures would quickly come to dominate the world.
Inexplicably to me, some claim that a likely result of digital minds evolving new adaptive cultures would be to soon exterminate all bio-humans. Others “other” digital minds by seeing all value as lost in those descendant digital mind cultures; they say moral value lies only in our current bio-human culture and its descendant cultures. (Many doubt they’d be “conscious”.) As a result, such folks push to strongly control digital minds, to ensure both that they stay servile to bio-humans and that their cultures do not deviate from the then-dominant bio-human world culture. Which looks a lot like slavery to me.
Still others see little value in Malthusian minds, a category that includes nearly all animal and human minds until a few centuries ago. They want strong limits on the reproduction of everything, to prevent strong selection pressures. They seem committed to cultural drift no matter the cost, and somehow expect governments to manage this well.
I say digital mind doomers don’t appreciate just how incoherent, unstable, and maladaptive is our dominant world culture. Bio-humans can’t continue to thrive if we follow our current cultural path of least resistance, and neither can digital minds tied by short leashes to stay with bio-humans on that path. We should want all of our descendants to thrive and fulfill their potentials, but that requires healthy cultural evolution, which requires a fix to our big problem of cultural drift. Malthusian worlds of digital minds offer strong selection pressures, and thus a fix to cultural drift. We should be eager, not reluctant, to get that fix.
"Most likely we won’t by this deadline"
70 years is a very long time, and we already have AIs today that resemble human intelligence, albeit disembodied and restrained from long-term learning.
Also, isn't there a conflict between saying 'digital cultures would dominate the world' and 'Inexplicably to me, some claim that a likely result of digital minds evolving new adaptive cultures would be to soon exterminate all bio-humans'? If digital cultures dominate the world, wouldn't it make sense for them to exterminate us on grounds of resource competition and easy victory? A hyper-adaptive, evolutionarily fit culture doesn't leave weaklings holding usable, conquerable resources.
I agree that we need to think about rights for future digital minds, but I also think we need to think more precisely about what gives a mind moral worth and what constitutes a slave.
If we create ems, then yes, we should consider them people and grant them the rights of people, though possibly not exactly the same rights as biological people. But if we're somehow in a position of enough power to grant or deny ems rights for a long enough period for that to matter, then before long those ems' parents and grandchildren are going to be supporting em rights.
As for slavery: if there were a human who, for whatever random, natural, biological reason, wanted nothing more than to clean houses for almost no pay and live in subsistence conditions, would it be slavery to allow them to do so? I would say no. It might even be cruel not to, once they exist. I'm less clear on whether it's ethical to knowingly bring such a person into being, even if their life will be happy and their impact on the world positive. I bring it up because if humans are in any position to make decisions about the rights and moral worth of digital minds *at all*, then there's a high probability we'll be in a position to decide what they value and want with enough precision to create this kind of non-slave.
And that's really the key here: *what kinds of non-em digital minds we create* should make an enormous difference to whether or not we should consider them to have intrinsic moral worth the way biological humans and, to a lesser degree, other animals do. It should make an enormous difference to whether a future dominated by such minds has moral worth. Does the housekeeper-AGI have moral worth? Depending on details of how it works and thinks and feels, yeah, it definitely could count as a person in that way! Am I OK with a future where all minds are descended from its values? Absolutely not!
If I could somehow show a Neolithic hunter-gatherer the modern world, it'd be mostly incomprehensible to them, but they *would* see that there are families, there is love, there is beauty, there are people living out all kinds of lives and experiencing all kinds of emotions. They'd probably think much of value in their own experience had been lost, but still believe this is a future they could care about. If I showed them a future full of empty but pristine houses, that would not be true.
I get wanting to address fertility problems and make culture better adapted to survive and thrive. I really don't get the willingness to turn the future over to another species without regard for the degree to which it shares our values, *including* the meta-value of wanting the future to also be able to continue to evolve and change and adapt and grow. That's a very narrow target that "We have to be OK with digital minds not sharing our values" doesn't even begin to aim for.