To appreciate an AI future, consider a future without AI. That is, imagine that we somehow manage to forever prevent any AIs who might take control of civilization. But also assume that our descendants continue to evolve competitively, with technical and economic growth for millions or billions of years. Now ask: what exactly did you think would happen there?
In my view, it is pretty obvious that our “human” descendants would change radically. Eventually for sure, but maybe even rather quickly. After all, we now differ greatly not only from our ancestors of ten million years ago, but even from our ancestors of a thousand years ago. Also, rates of change have been greatly accelerating, and new tech will let us make many deep changes to our bodies, brains, and environments. Our future civ should thus be far larger, more complex, and more varied.
Faster rates of change and longer lifespans (likely including immortality) will let more different generations overlap in time, inducing more and stronger conflicts between them. And in these conflicts, we expect later generations to eventually be abler, richer, smarter, and nearer to the center of control and info. And thus to win.
Some future changes will count as robustly-valuable innovations that accumulate and last a long time, much like the past innovations that we now embody and expect to long continue. Like eyes, consciousness, reason, laughter, love, law, markets, etc. But other changes may be less robust, varying more across space and time. Along those sorts of change dimensions, our descendants' attitudes, habits, and values will "drift", and without obvious limits. And some fiercely hate the prospect of such drift.
To limit this drift, some hope that we will develop better “mind control” tech, together with strong civ-wide centralized governance. These would allow authorities to better control citizens, and a whole generation of parents to better control the next generation of kids. Such centralized powers seem to me both dangerous and distasteful. This mind control would at least partially displace past processes by which generations have chosen how to change their attitudes, styles, and values.
Okay, now imagine a future date when it becomes possible to live beyond Earth. Some may then reason:
As an Earth person, I am partial toward Earth people, relative to space people. If we let people leave Earth, then space people will become different in some ways, grow more than Earth people, and eventually come to dominate. We should prevent this horror by preventing anyone from leaving Earth, at least until we find ways to centrally and absolutely mind-control all space people.
This kind of partiality is infertile; genetic or memetic evolution does not in general select for it. Yes, this strategy might promote pro-Earth-people partiality, but it hurts all other partialities that would be helped by a much larger future economy.
Now let’s consider AI. Humans now occupy a tiny corner of a vast mind space, and compared to our more human descendants, AIs would expand out further into this mind space, analogous to space people expanding into the Solar System. And as in the Earth vs. space case, a partiality toward those who sit in our corner of mind space might recommend preventing that AI expansion, at least until we can develop centralized and absolute mind control over all AIs. But note that this partiality is also infertile, and not favored by natural selection.
Many say we must greatly slow the development of AIs until we can implement absolute centralized mind control over them all. Yet if AIs could be entirely prevented, few of these folks would advocate a similar policy of greatly slowing general social change until we can develop centralized absolute mind control over our more human descendants. I explain this difference as due to many feeling a far stronger partiality re AIs than re their more human descendants. I question whether, on reflection, you really want to embrace that big of a difference.
Some say that the key point is that AIs will just have different styles of thinking, styles which are to them just intrinsically repugnant, relative to our usual human styles. But as a human somewhat on the autism spectrum, I know that my style of thinking is also unusual, relative to most humans. I cannot embrace others’ feelings of repugnance re my thinking style, nor feel greatly inclined to see AIs’ different styles as especially repugnant.
Hi, "we now differ greatly not only from our ancestors of ten million years ago, but even from our ancestors of a thousand years ago". In what way do we differ greatly from our ancestors of a thousand years ago? (I just don't see it.)
The problem with letting populations leave Earth and settle the galaxy would be a real one, and not a question of “Earth partiality” in the sense of narrow self-attachment or nationalism, as Robin describes it here. That framing misses the actual risk.
The real problem would be that they grow distant from us culturally and/or biologically (with communication limited by the speed of light). And then they are effectively aliens (after a few thousand years), and may well come back and kill, conquer, or enslave the population of Earth.
This is the same problem with AI.
The fear is not that they outcompete humans and peaceably replace us with a new and more effective species. I agree this would only be sad to the degree that you are emotionally attached to humans. The fear is that we (or our children), or humankind as a whole, are slaughtered, or thrown into desperate misery and hunger, mostly as collateral damage of another species’ expansion.