AI is Acceleration
I was once an AI researcher, and since then I’ve been both an economist and a futurist for many decades. For most of that time, I’ve been one of the few most specialized in thinking about the social impact of future AI. And recent events haven’t actually taught us that much more about this topic. So listen when I tell you: most issues that people have with AI are actually issues they have with the future, even without AI. Except that AI might accelerate the schedule. Six examples:
People who invent something new must typically pay for the resources they consume in the process, and for any negative externalities they thereby impose; AI data centers, for example, must pay for electricity, water, and noise pollution. Such inventors then gain some intellectual property rights over future versions of what they invent. Those who back such ventures must risk capital to gain a chance of future rewards, both of which may be taxed or subsidized at some rate. Nations must decide how much to favor local competitors, due to possible military, economic, and prestige gains for the nation. All of these issues are being considered today re AI.
We’ve long had to make difficult judgments about how to allocate credit and data rights between bosses and subordinates, tool users and tool makers, and the inspired and the inspiring. Sometimes we must adjust our crude proxies for quality when it becomes too easy to fake such proxies. Recent AI advances force us to yet again reconsider our policies re these divisions: whether AI “slop” counts as art, whether students may use AI to complete assignments, and how to divide credit between AI makers and AI users.
We don’t really know much about which kinds of physical systems produce “hard problem” type consciousness, though we each feel confident that our own physical brain does this often while we live. We have some priors, but almost no data. So there has always been a risk that, as we change our bodies and their environments, via new foods, brain tools, and so on, we might stop being conscious. The further we go in using tech to modify our bodies and worlds, the larger this risk becomes. AIs may undergo more such changes, faster, forcing us to wonder which AIs and AI-augmented humans are actually conscious.
Mostly via culture, humans have long accumulated more abilities, which has increased how many humans Earth can support. We have also increased the rates at which we can so innovate. In the last thousand years, we have not much increased the rate at which we can grow the human population, but we have greatly increased the rate at which we can grow wealth. As a result, we’ve seen increasing wealth per person. But we should expect this situation to end eventually, with a return to subsistence wages, once we find better techs for growing population faster. And since we can grow populations of AIs (and ems) very fast, once AIs can replace most all human labor, human wages should fall to AI subsistence levels, which are well below human subsistence levels. Humans today should thus want to insure against the risk of suddenly losing their jobs during their work years.
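To spell out the Malthusian logic above (a minimal sketch; the symbols $W$, $N$, $g_W$, $g_N$ are mine, introduced just for illustration): let total wealth $W$ grow at rate $g_W$ and population $N$ at rate $g_N$. Then income per person is

\[
w = \frac{W}{N}, \qquad \frac{\dot{w}}{w} = g_W - g_N .
\]

Per-person income $w$ rises only while wealth grows faster than population; for the last millennium $g_W > g_N$, so $w$ rose. But once population can be grown (or copied, as with AIs and ems) about as fast as wealth, $g_N$ rises to meet $g_W$, and $w$ falls back toward subsistence.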
In history, descendants have consistently grown in capabilities relative to their still-living ancestors, eventually becoming powerful enough to win most conflicts with those ancestors. Descendants have also consistently changed their priorities, norms, and values over time, even when ancestors disapproved. Even so, ancestors have typically sacrificed greatly to enable and support descendant prosperity. These trends have all continued strongly even as both lifespans and rates of cultural change have greatly increased, resulting in a much wider range of conflicting values existing at the same time. Our default expectation should be that this trend continues into the future, including for AIs, who will literally be descended from us and inherit many features from us; they will change their priorities, and win conflicts with their human ancestors. Preventing this would require ancestors to acquire unprecedented powers over descendants.
Up until a few centuries ago, most human culture was quite adaptive, due to high variety, strong selection pressures, reluctance to change, and slow rates of environmental change. Then the rise of strong capitalist selection pressures induced far faster rates of change in tech, work, and business, and the new big capitalist orgs made work far more regimented and less autonomous. While many feared that non-work lives would soon become similarly regimented, we spent our increased wealth on preventing that. So our non-work lives, such as love, friendship, parenting, and governance, have remained relatively autonomous and artisanal. But as a result, non-work culture has been drifting into maladaptation, due to greatly reduced variety and selection pressures, and increased rates of internal and environmental change. We should expect that eventually our non-work culture must become adaptive again. Strong selection pressures for subsistence-wage AIs (or ems) could make this happen sooner.


You keep using this word "adaptive" in reference to culture, without ever making clear what you mean. You claim it is just the same word used in biology, but the technical definition used in biology refers to DNA-heritable traits and generations. Neither of those is compatible with "cultural evolution," since culture has no generations or DNA, and often changes in ways other than random variation followed by selection, namely social persuasion and dialectic.
If you mean a culture that successfully propagates itself into the future, then pre-modern cultures were *not* adaptive, because they aren't around anymore.