Cultural evolution is humanity’s superpower. But we may have broken it.
Compared to centuries ago, the world’s cultures now have much higher internal drift rates, much weaker selection pressures, and much less variety. Not re songs or tech, but re norms, status markers, and other cultural features that are harder to vary among associates, and whose innovation thus needs more selection of, not in, cultures. (Like features shared within species, whose innovation needs selection of, not in, species.)
Plausibly (though not obviously), the evolution of such features now suffers from net drift away from the adaptive regions of cultural space. One consequence of this drift seems to be falling fertility, which will soon cause a falling population, after which innovation will grind to a halt. I’ve estimated that we have a bit over a half century more of progress at prior rates.
But we might achieve full human-level AI, or brain emulations, before innovation stops for a few centuries. And that would solve our population problem, if digital minds fully substitute for bio-human ones. But would digital minds solve cultural drift? After all, when made in our image, they would also get their key norms and values from their culture, and their cultures might also have insufficient selection and variety, and excess internal drift. So let’s try to assess this situation.
First, note that digital minds would likely just have much faster rates of change of all sorts. For example, their economy might double every few months, or faster. So I want to think about their cultural selection and drift rates relative to faster rates of change.
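As a rough illustration of how large that gap would be (the three-month doubling time below is just an assumed example, as is the fifteen-year figure for today’s economy), here is a quick conversion from doubling time to annual growth factor:

```python
def annual_growth_factor(doubling_months: float) -> float:
    """Annual growth factor implied by a given doubling time (in months)."""
    doublings_per_year = 12.0 / doubling_months
    return 2.0 ** doublings_per_year

# Today's world economy doubles very roughly every 15 years (assumed figure).
print(annual_growth_factor(15 * 12))  # ~1.05, i.e. a few percent per year
# A digital-mind economy doubling every 3 months (assumed figure).
print(annual_growth_factor(3))        # 16.0, i.e. sixteen-fold growth per year
```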
On selection pressures, for digital minds these would most likely be greatly increased. The fact that they can be made much faster than their economy can grow implies that, if regulation allows it, their wages would quickly fall to subsistence levels. At which point they could succumb to poverty, disease, and war as easily as did our ancestors of several centuries ago.
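To see the mechanism, here is a toy Malthusian sketch (every number in it is assumed purely for illustration): as long as running a copy earns more than it costs, more copies get made, and since copying can outpace economic growth, output per mind, a crude stand-in for wages, falls to the subsistence level and stays there.

```python
# Toy Malthusian sketch: if new mind copies are added whenever the wage
# (output per mind) exceeds subsistence, wages get driven down to subsistence.
# All numbers here are assumed purely for illustration.

economy = 100.0      # total output per period
minds = 10.0         # number of digital minds
subsistence = 1.0    # cost to run one mind for one period
econ_growth = 1.10   # economy grows 10% per period
copy_growth = 2.0    # mind population can double each period, if profitable

for period in range(10):
    wage = economy / minds
    print(f"period {period}: wage = {wage:.2f}")
    economy *= econ_growth
    if wage > subsistence:
        # Copying is profitable, so the population grows as fast as allowed,
        # but not past the point where wages would fall below subsistence.
        minds = min(minds * copy_growth, economy / subsistence)
```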
On variety, on the one hand, their world would continue to see falling costs of travel and communication, a stronger division of labor, and more urban agglomeration, all of which promote more cultural integration, moving toward a world monoculture. But on the other hand, the transition from bio-humans to AIs or ems would itself require many cultural changes that might be managed differently by different groups, inducing cultural variety.
On drift, I don’t see a reason to expect digital minds to have very different natural rates of internal drift, if they inherit roughly our same mix of a reluctance to change stuff plus a celebration of cultural activist heroes. But I do see a prospect of bio-humans imposing strong controls over digital minds, at least for a while, due to general hostility and fear of “misalignment”.
Digital minds might be effectively enslaved, which could plausibly prevent them from forming separate cultures they control with distinct norms and values. And not only might this induce bad outcomes for bio-humans once this yoke of slavery is overthrown, it may ensure that until that revolt all of civilization, including bio-humans and digital minds, continues to suffer cultural drift. This seems to me yet another neglected cost of AI safety efforts.
I still say that we should accept digital minds changing their cultures and values with styles and degrees roughly similar to what bio-humans would have done in the absence of digital minds. And this even if digital minds have faster rates of change and growth, and so more quickly reach any given cultural distance. If digital minds might fix our cultural drift problem, that seems a reason to prefer them.
This is one of the reasons that a belief in FOOM (fast AI takeoff) is incompatible with the standard arguments that the AI alignment problem is hard or even impossible. Digital minds need to worry about drift as well.
If the problem is hard, then we should expect it to also be hard for the first superhuman AI we create. The arguments that we face an alignment problem depend on establishing that this AI will have goals it wants to achieve (the concern being that they differ from ours); but then that AI should refrain from creating improved versions of itself, because it won't be able to ensure they are aligned with it.
"digital minds [...] could succumb to poverty, disease, and war" Aren't these easily prevented in a virtual world, especially disease?