As a professor of economics in the GMU Center for the Study of Public Choice, my colleagues and I are well aware of the many long, detailed disputes over the proper scope of regulation.
On the one hand, the last few centuries have seen increasing demands for and expectations of government regulation. A wider range of things that might happen without regulation is seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.
On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.
The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and that nations that did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.
Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.
In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.
So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But several trends, at least two of them robust, suggest the future will have more value change, and thus more conflict:
- Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
- Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
- Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)
These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”
The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.
This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.
You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So as long as we have to figure out how to control it, we might as well use it to solve the intergenerational conflict problem.
Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.
Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.
I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs.
Added 8a: Some think I’m unfair to the fear-AI position in calling AIs our descendants and describing them in terms of lifespan, growth rates, and value plasticity. But surely the fact that AIs are made of metal, or made in factories, isn’t directly what causes concern. I’ve tried to identify the relevant factors; if you think I’ve missed the key factors, do tell me what I’ve missed.
Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.
Those promised supporting quotes: