Having hung around futurists and their audiences for four decades now, I can tell you this: while most people are usually vaguely optimistic about the future, they change their minds fast when shown non-near futures described in plausible detail (e.g., Age of Em). Such visions make them deeply uncomfortable.
If most people believed in such visions, they might well favor substantial regulation to prevent them. What usually saves us from that is the simple faith “That’s all science fiction; it can’t really happen.” (Many even say the future couldn’t possibly change as much as the past has.) This faith usually works, but less so at times like now, when many suddenly get to peek far behind the future’s thick curtain.
I’m talking, of course, about large language models like ChatGPT. These models roughly pass the famous “Turing test”: most people find it hard to tell whether they are talking to a machine or a person. Seeing them pushes us all to believe more in a future where many AIs like this fill many key roles in our world.
Seeing this naturally scares the hell out of most people, inducing support for regulation to hinder this trend. Many say they fear future AIs being “unaligned”, i.e., not fully enslaved, always and forever doing exactly what we would have wanted them to do, even when we aren’t around, and even where we would find it hard to say in words exactly what we wanted. Many want to “pause” AI development until we can figure out how to meet this (nearly impossible) standard.
But even ignoring AI, our default future was never going to be aligned. Consider three assumptions:
1. As anthropologists have shown in great detail, humans are very culturally plastic.
2. Over the last 100K years, human cultures have changed greatly, at rates roughly tracking rates of tech/econ change.
3. Past trends and our best theories both suggest that global tech/econ change rates will increase greatly (by more than 50x) in the next century or so.
These assumptions imply that our descendants will likely soon have very different cultures, including different values, beliefs, and behaviors. That is, they will be unaligned. And if this happens soon enough, those unaligned descendants may well overlap with us; we’d be in conflict with them, and they’d be more powerful than us.
Yes, the most dramatic AI fear, the one most worrying my Twitter followers lately, is of a single AI that suddenly “fooms” to improve itself, conquer all, and then, with greatly changed values, kill us all. This isn’t quite the same fear as the default unaligned scenario I just described, but notice how close it is. The main differences are:
1. With foom, descendants are made via metal and math, instead of via squishy bio-carbons, sex, and parenting. (Can this difference really matter so much?)
2. With foom, change is (implausibly) assumed to happen much faster, and
3. Having a single AI foom, rather than a whole world, makes that scenario less likely to preserve the property rights of obsolete humans.
Yes, in the default unaligned future, one can hope more that as improved descendants displace older minds, the older minds might at least be allowed to retire peacefully to some margin to live off their investments. But note that such hopes are far from assured if this new faster world experiences as much cultural change in a decade as we’d see in a thousand years at current rates of change.
Most people have always feared change. And if they had really understood what changes were coming, most would probably have voted against most of the changes we’ve seen. For example, fifty years ago the public thought they saw a nuclear energy future with unusual vividness, and basically voted no. More recently we’ve seen something similar regarding genetic engineering of humans.
If long-term change has on net been good, counting as progress, we can credit that in part to widespread ignorance of future change. That ignorance makes moments of clarity, like today’s AI vision, especially dangerous. The world may well vote to stop this change. And then also the next big one. And so on, until progress grinds to a halt.
Added 10p: Most respondents really do seem to be saying that they worry far more about unaligned AIs than about unaligned humans, because they presume such humans must still share far more of what we value. But they really can’t explain much about why.
Added 10a: Note that an implication of assuming AIs to be far less aligned than human descendants is that, even if culture change rates increase greatly, human descendants would be far less likely than AIs to kill prior generations. Ask yourself how sure you are of that.
Added 15Apr: See my new Quillette essay “What Are Reasonable AI Fears?”
Thanks, but I really can’t agree with you, Robin. Many of my friends and I, along with intellectuals we follow such as Eliezer Yudkowsky, actually very much welcome new technologies in general, yet we think that superhuman AI could quite likely spell the end of the human race, in the same way that humans have eliminated many less intelligent species. This is NOT a general fear of the future or of change; it is a rational analysis of the dangers of AGI, a challenge unlike any the human race has faced in the past. AGI might be more analogous to being invaded by super-intelligent grabby aliens, which I think you agree is a real danger.
The additional wrinkle of "all humans die" seems important, but is not obviously highlighted as a difference between the two scenarios.