As professors of economics in the GMU Center for the Study of Public Choice, my colleagues and I are well aware of the many long, detailed disputes on the proper scope of regulation.
On the one hand, the last few centuries have seen increasing demands for and expectations of government regulation. A wider range of things that might happen without regulation are seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.
On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.
The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and that nations that did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.
Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.
In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.
So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But at least two robust trends, plus a third less robust one, suggest the future will have more value change, and thus more conflict:
Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)
These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”
The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.
This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.
You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So as long as we have to figure out how to control it, we might as well use it to solve the intergenerational conflict problem.
Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super-intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.
Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.
I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs.
Added 8a: Some think I’m unfair to the fear-AI position to call AIs our descendants and to describe them in terms of lifespan, growth rates and value plasticity. But surely AIs being made of metal or made in factories aren’t directly what causes concern. I’ve tried to identify the relevant factors but if you think I’ve missed the key factors do tell me what I’ve missed.
Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.
Added 15Sep: A version of this post appeared as:
Robin Hanson, Regulating Infinity, Global Government Venturing, pp.30-31, September 2014.
Those promised supporting quotes:
First, David Chalmers:
If humans survive, the rapid replacement of existing human traditions and practices would be regarded as subjectively bad by some but not by others. … The very fact of an ongoing intelligence explosion all around one could be subjectively bad, perhaps due to constant competition and instability, or because certain intellectual endeavours would come to seem pointless. On the other hand, if superintelligent systems share our values, they will presumably have the capacity to ensure that the resulting situation accords with those values. …
If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. The wrong value system need not be anything as obviously bad as, say, valuing the destruction of humans. If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values. (more, see also)
Second, Scott Alexander:
The current rulers of the universe – call them what you want, Moloch, Gnon, Azathoth, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.
The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.
And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it is on our side, it can kill Moloch dead.
And so if that entity shares human values, it can allow human values to flourish unconstrained by natural law.
Third, Nick Bostrom in Superintelligence:
We suggested earlier that machine intelligence workers selected for maximum productivity would be working extremely hard and that it is unknown how happy such workers would be. We also raised the possibility that the fittest life forms within a competitive future digital life soup might not even be conscious. Short of a complete loss of pleasure, or of consciousness, there could be a wasting away of other qualities that many would regard as indispensable for a good life. Humans value music, humor, romance, art, play, dance, conversation, philosophy, literature, adventure, discovery, food and drink, friendship, parenting, sport, nature, tradition, and spirituality, among many other things. There is no guarantee that any of these would remain adaptive. …
We have seen that multipolarity, even if it could be achieved in a stable form, would not guarantee an attractive outcome. The original principal–agent problem remains unsolved, and burying it under a new set of problems related to post-transition global coordination failures may only make the situation worse. Let us therefore return to the question of how we could safely keep a single superintelligent AI.