Regulating Infinity

As a professor of economics in the GMU Center for the Study of Public Choice, I am, like my colleagues, well aware of the many long, detailed disputes over the proper scope of regulation.

On the one hand, the last few centuries have seen increasing demands for and expectations of government regulation. A wider range of things that might happen without regulation are seen as intolerable, and our increasing ability to manage large organizations and systems of surveillance is seen as making us increasingly capable of discerning relevant problems and managing regulatory solutions.

On the other hand, some don’t see many of the “problems” regulations are set up to address as legitimate ones for governments to tackle. And others see and fear regulatory overreach, wherein perhaps well-intentioned regulatory systems actually make most of us worse off, via capture, corruption, added costs, and slowed innovation.

The poster-children of regulatory overreach are 20th century totalitarian nations. Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation. So many accepted and even encouraged their nations to create vast intrusive organizations and regulatory systems. These are now largely seen to have gone too far.

Of course there have no doubt been other cases of regulatory under-reach; I don’t presume to settle this debate here. In this post I instead want to introduce jaded students of regulatory debates to something a bit new under the sun, namely a newly-prominent rationale and goal for regulation that has recently arisen in a part of the futurist community: stopping preference change.

In history we have seen change not only in technology and environments, but also in habits, cultures, attitudes, and preferences. New generations often act not just like the same people thrust into new situations, but like new kinds of people with new attitudes and preferences. This has often intensified intergenerational conflicts; generations have argued not only about who should consume and control what, but also about which generational values should dominate.

So far, this sort of intergenerational value conflict has been limited due to the relatively mild value changes that have so far appeared within individual lifetimes. But at least two robust trends suggest the future will have more value change, and thus more conflict:

  1. Longer lifespans – Holding other things constant, the longer people live the more generations will overlap at any one time, and the more different will be their values.
  2. Faster change – Holding other things constant, a faster rate of economic and social change will likely induce values to change faster as people adapt to these social changes.
  3. Value plasticity – It may become easier for our descendants to change their values, all else equal. This might be via stronger ads and schools, or direct brain rewiring. (This trend seems less robust.)

These trends robustly suggest that toward the end of their lives future folk will more often look with disapproval at the attitudes and behaviors of younger generations, even as these older generations have a smaller proportional influence on the world. There will be more “Get off my lawn! Damn kids got no respect.”

The futurists who most worry about this problem tend to assume a worst possible case. (Supporting quotes below.) That is, without a regulatory solution we face the prospect of quickly sharing the world with daemon spawn of titanic power who share almost none of our values. Not only might they not like our kind of music, they might not like music. They might not even be conscious. One standard example is that they might want only to fill the universe with paperclips, and rip us apart to make more paperclip materials. Futurists’ key argument: the space of possible values is vast, with most points far from us.
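
To make that last geometric claim concrete, here is a minimal toy sketch of my own (not any of these futurists’ actual model): treat a “value system” as a point in a many-dimensional space, draw such points at random, and see how far they land from one fixed reference point. The number of dimensions, the uniform prior, and the distance measure below are all arbitrary assumptions of the sketch.

```python
# Toy illustration only: a "value system" as a point in a d-dimensional unit
# cube, checking how far random draws land from one fixed reference point.
# Dimension count, uniform prior, and Euclidean distance are all assumptions.
import math
import random

def random_values(dims):
    # One coordinate per hypothetical value dimension, each in [0, 1].
    return [random.random() for _ in range(dims)]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

dims, trials = 100, 10_000
ours = random_values(dims)  # stand-in for "our" values; any fixed point works
dists = [distance(ours, random_values(dims)) for _ in range(trials)]

print(f"closest random value system: {min(dists):.2f}")
print(f"average distance:            {sum(dists) / trials:.2f}")
# With 100 dimensions the average distance is about 4.1 (roughly sqrt(d/6)),
# and across 10,000 draws none land anywhere near distance 0: under these
# assumptions, almost all of the space is far from any given point.
```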

This increased intergenerational conflict is the new problem that tempts some futurists today to consider a new regulatory solution. And their preferred solution: a complete totalitarian takeover of the world, and maybe the universe, by a new super-intelligent computer.

You heard that right. Now to most of my social scientist colleagues, this will sound bonkers. But like totalitarian advocates of a century ago, these new futurists have a two-pronged argument. In addition to suggesting we’d be better off ruled by a super-intelligence, they say that a sudden takeover by such a computer will probably happen no matter what. So as long as we have to figure out how to control it, we might as well use it to solve the intergenerational conflict problem.

Now I’ve already discussed at some length why I don’t think a sudden (“foom”) takeover by a super intelligent computer is likely (see here, here, here). Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn. But I do grant that increasing lifespans and faster change are likely to result in more intergenerational conflict. And I can also believe that as we continue to learn just how strange the future could be, many will be disturbed enough to seek regulation to prevent value change.

Thus I accept that our literatures on regulation should be expanded to add one more entry, on the problem of intergenerational value conflict and related regulatory solutions. Some will want to regulate infinity, to prevent the values of our descendants from eventually drifting away from our values to parts unknown.

I’m much more interested here in identifying this issue than in solving it. But if you want my current opinion it is that today we are just not up to the level of coordination required to usefully control value changes across generations. And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs.

Added 8a: Some think I’m unfair to the fear-AI position to call AIs our descendants and to describe them in terms of lifespan, growth rates and value plasticity. But surely AIs being made of metal or made in factories aren’t directly what causes concern. I’ve tried to identify the relevant factors but if you think I’ve missed the key factors do tell me what I’ve missed.

Added 4p: To try to be even clearer, the standard worrisome foom scenario has a single AI that grows in power very rapidly and whose effective values drift rapidly away from ones that initially seemed friendly to humans. I see this as a combination of such AI descendants having faster growth rates and more value plasticity, which are two of the three key features I listed.

Added 15Sep: A version of this post appeared as:

Robin Hanson, Regulating Infinity, Global Government Venturing, pp.30-31, September 2014.

Those promised supporting quotes:

First, David Chalmers:

If humans survive, the rapid replacement of existing human traditions and practices would be regarded as subjectively bad by some but not by others. … The very fact of an ongoing intelligence explosion all around one could be subjectively bad, perhaps due to constant competition and instability, or because certain intellectual endeavours would come to seem pointless. On the other hand, if superintelligent systems share our values, they will presumably have the capacity to ensure that the resulting situation accords with those values. …

If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. The wrong value system need not be anything as obviously bad as, say, valuing the destruction of humans. If the AI+ value system is merely neutral with respect to some of our values, then in the long run we cannot expect the world to conform to those values. (more, see also)

Second, Scott Alexander:

The current rulers of the universe – call them what you want, Moloch, Gnon, Azathoth, whatever – want us dead, and with us everything we value. Art, science, love, philosophy, consciousness itself, the entire bundle. And since I’m not down with that plan, I think defeating them and taking their place is a pretty high priority.

The opposite of a trap is a garden. The only way to avoid having all human values gradually ground down by optimization-competition is to install a Gardener over the entire universe who optimizes for human values.

And the whole point of Bostrom’s Superintelligence is that this is within our reach. Once humans can design machines that are smarter than we are, by definition they’ll be able to design machines which are smarter than they are, which can design machines smarter than they are, and so on in a feedback loop so tiny that it will smash up against the physical limitations for intelligence in a comparatively lightning-short amount of time. If multiple competing entities were likely to do that at once, we would be super-doomed. But the sheer speed of the cycle makes it possible that we will end up with one entity light-years ahead of the rest of civilization, so much so that it can suppress any competition – including competition for its title of most powerful entity – permanently. In the very near future, we are going to lift something to Heaven. It might be Moloch. But it might be something on our side. If it is on our side, it can kill Moloch dead.

And so if that entity shares human values, it can allow human values to flourish unconstrained by natural law.

Third, Nick Bostrom in Superintelligence:

We suggested earlier that machine intelligence workers selected for maximum productivity would be working extremely hard and that it is unknown how happy such workers would be. We also raised the possibility that the fittest life forms within a competitive future digital life soup might not even be conscious. Short of a complete loss of pleasure, or of consciousness, there could be a wasting away of other qualities that many would regard as indispensable for a good life. Humans value music, humor, romance, art, play, dance, conversation, philosophy, literature, adventure, discovery, food and drink, friendship, parenting, sport, nature, tradition, and spirituality, among many other things. There is no guarantee that any of these would remain adaptive. …

We have seen that multipolarity, even if it could be achieved in a stable form, would not guarantee an attractive outcome. The original principal–agent problem remains unsolved, and burying it under a new set of problems related to post-transition global coordination failures may only make the situation worse. Let us therefore return to the question of how we could safely keep a single superintelligent AI.

  • IMASBA

    Interesting read, though I was surprised by the twist. Sure, we can expect newer generations to evolve in their values, but to expect them to have, on average, worse values than us or our ancestors was a surprise to me. I do suspect that some of these futurists weren’t really talking about personal values though, but rather about effective societal values that most people in that society are actually not comfortable with: in a society where 5% of the population use the other 95% as slave labor there would be messed-up effective societal values, but there is a good chance that at least 95% of the population are not OK with the system, and hence the solution does not lie in altering the personal values of the population.

    • Abram Demski

      It’s reasonable to expect that every generation sees the values of other generations as worse more often than not, with very different values being perceived as much worse. This is simply because those values are different. So, I am surprised to see you refer to this as a “twist”.

      By default, then, we would expect each generation to see itself as the next step in an overall positive evolution of values (because past value systems look worse for being different), but will tend to become disillusioned when the next generation’s values are perceived as worse (for again being different) and start speaking of decline. (The most consistent view would be for each generation to think of itself as the peak, but people aren’t that consistent.)

      • IMASBA

        The “today’s kids are bad/lazy/useless” trope should be one of the first things that people who want to “overcome bias” should work on. It may be reasonable to expect the average elderly person to think this way but it does strike me as rather odd for futurists (people who usually are optimistic about the future as well as educated to think critically). That’s why I suggested Robin may have misunderstood them, i.e. that those futurists aren’t really talking about personal values but rather about societal developments that may well be forced upon the majority against their will.

      • Abram Demski

        Oh, well. I agree with your distinction, but I suppose it’s worse than that. Robin Hanson is referring to scenarios from the book Superintelligence, so the “kids get off my lawn” thing is a weak analogy based on the past, referring to vastly inhuman scenarios in the projected future. It’s not about literal generational drift, unless you count machine children as literal children.

        I think comparing it to typical generational conflict is for shock value, so that the proposed solutions (regulate infinity via machine superintelligence) look disproportionate and insane.

      • IMASBA

        Yeah I got that it wasn’t all about literal children but even those horrifying sci-fi scenarios usually seem to me like developments that can only really come about through oligarchic or violent force, not because a large majority of the population of living minds would actually want things to be that way.

      • http://overcomingbias.com RobinHanson

        Why shouldn’t we count machines as descendants? If you want to argue machines create a bigger problem of value drift, then you need to point to particular features of machines relative to ordinary biology that make the problem bigger.

      • Brandon

        I guess my biggest concern with machines as descendants is that they do not conform to any of the historical trends you’ve mentioned. They are effectively discontinuities in the “normal human” scheme of things.

        For instance, if we assume that all of preference space (or value space, if you prefer) is roughly the size of the Pacific Ocean, then human preference space is roughly the size of a jellyfish. It’s getting bigger (broader age ranges) and swimming faster (greater value drift) as time progresses.

        However, short of very careful calibration of AI preferences, there is virtually no reason to expect that they will fall within or even near current human norms – any more than a dart thrown at random in a large crowded bar will be guaranteed to hit the bullseye. A carefully aimed shot has a much better chance of success, but it’s still not a guarantee.

        If singleton AI Foom is possible (which I agree isn’t very likely, but it’s certainly not impossible), a missed shot is a definite existential risk for humanity. A multiple AI Foom situation could be even more complicated – with lots of godlike intelligences competing with one another. Even without AI Foom, the advent of AI is going to see radical increases in both the breadth and speed of value drift (since machines are effectively immortal, have potentially highly plastic values, and barring careful tuning are likely to have goals widely divergent from the rest of humanity’s).

        I still don’t think freezing values in their current state is the right answer (even though every generation in history has likely wished for just such an outcome), but I do think you may have mischaracterized the AI argument. My understanding of the “Friendly AI” problem isn’t about freezing values at their current state, but rather making the AI care enough about us meat-space humans that it doesn’t summarily do away with us (or our potential descendants) at some point in the future. Given that humanity’s value system has shifted and changed dramatically over time, it seems a prerequisite for a Friendly AI to do the same, unless we want to risk our extermination by the AI should our human preferences ever stray outside their present limits.

      • http://overcomingbias.com RobinHanson

        There might be no reason to expect AIs to be like people in the long run, but there are lots of reasons to expect that in the short run, as it is people who will make them. So concern about AI values being very different is concern about value drift over time.

      • Brandon

        I disagree that AIs will necessarily have compatible value systems at inception. I fully expect that at least some people will try to make it so, but I think success is far from guaranteed.

        The definition of “Friendly AI” (and the bullseye that we’re trying to hit) is a human compatible value system that will adjust appropriately for drift over time. Given that we as a species are currently incapable of accurately detailing our value systems in basic English (let alone rigorous logic), I consider it a non-trivial problem.

      • http://overcomingbias.com RobinHanson

        Early AI systems will have pretty human compatible values in practice just as a result of how and where they are placed, and due to human oversight.

      • IMASBA

        “However, short of very careful calibration of AI preferences, there is virtually no reason to expect that they will fall within or even near current human norms – any more than a dart thrown at random in a large crowded bar will be guaranteed to hit the bullseye”

        I think that’s too pessimistic: machines will be subject to evolutionary pressures and mathematics and physics still work the same for them so there are plenty of human values that machines would conform to as well (they’ll probably cooperate, have some form of altruism and agree that equal rights/rules for all make sense). The real problem is that changing a single important value could be catastrophic to the world we humans know and love (the machines might not care at all about lifeforms other than themselves and/or they might not value privacy at all).

      • Brandon

        To be fair, any evolutionary pressures the machines feel will be so far beyond our current period of concern as to be mostly irrelevant. And cooperation, altruism, and equal rights only make sense if you get as much out of the deal as you put in – a super-advanced intelligence may be perfectly capable without needing any sort of assistance (and evolution supports this – as evidenced by many many largely solitary species)

        But the key point – that changing a single important value could be catastrophic is exactly the point I was trying to make with my dart comparison.

      • IMASBA

        I was thinking along the lines of a machine society, not a single machine (or handful). So more EM-society and less FOOM.

        I suppose a near-omnipotent FOOM AI could do whatever it damn well pleases. It could indeed be solitary.

  • Robert Koslover

    If we have no choice but to become Borg, can we at least have warp drive and interstellar travel?

  • jhertzli

    I’m reminded of The Abolition of Man by C. S. Lewis.

    One way to avoid both traps (being dominated by an evil AI or freezing values forever) is, of all things, space colonization. Once hominids are spread over several solar systems, even the most powerful Planners won’t have complete control. Interstellar distances are not yet “God’s quarantine regulations” but they might be someday.

    • lump1

      I had the same thought. One way to preserve values is to send reasonably us-like offspring to some far away place where they will not face (much) pressure to change their values.

      I admit that what scares me about future-values scenarios is that future minds might pave over cognitive diversity – something I value, but they might not. An important historical preserver of diversity has been the possibility of isolation, and the high cost and low reward of wiping out or assimilating isolated groups. Interstellar colonization might be necessary to preserve and even increase diversity.

    • http://overcomingbias.com RobinHanson

      Those who want to regulate value change want to delay space colonization, exactly because it may be too late after.

      • Eliezer Yudkowsky

        No we don’t (want that). Who said that?

      • http://overcomingbias.com RobinHanson

        Alas I can’t recall who specifically said this when. But I clearly recall some saying they want a singleton before a burning the cosmic commons scenario can commence.

      • Peter McCluskey

        Could you be referring to a 1998 comment by Nick Bostrom (http://mindstalk.net/polymath/polyarc/0862.html)? He seemed to say that humans shouldn’t be allowed to expand faster than the singleton, but predicted that wouldn’t cause delay.

    • Viliam Búr

      With recursively self-improving artificial intelligence, even space colonization may not be enough. A super-smart machine would likely invent a way to reach and conquer the colonies.

  • Charlie

    I was somewhat expecting the topic of nations’ constitutions to come up. Such institutions seem like a clear-cut attempt by founding governments to impose their preferences on future governments who might have preference drift.

    One might imagine sovereign nations being pressured by their peers to add civil rights protections to their constitution – is this analogous to some form of pressure on persons to not radically change their values, or create new things that have radically different values?

    • http://overcomingbias.com RobinHanson

      Yes you can think of constitutions as an attempt to limit future behaviors. This usually isn’t phrased in terms of concerns about value drift however. It is more that there are relatively constant risks of certain problems appearing, and pre-commitment might avoid those problems.

  • Wei Dai

    If you’re writing this for “students of regulatory debates”, I would state the case for regulating AI not as about intergenerational value conflict (which as you imply is not a standard reason for regulating something), but instead as about reducing wasteful rent seeking due to lack of property rights. The race to develop AI is analogous to a land rush, which economists typically think is wasteful. In a land rush, each participant spends resources to maximize their speed, thus dissipating much of the value of the “free” land. In an AI race, each participant tries to maximize speed in order to grab a bigger piece of the universe (which nobody has property rights over), which they do by taking more potentially catastrophic risks and being less careful about specifying their AI’s values than they otherwise would.

    • http://overcomingbias.com RobinHanson

      There is already a huge literature on regulating and subsidizing innovation, and excess incentives to be first are part of that. There is also a huge literature on moderating incentives to take risks that would affect others. So to the extent the case for regulating AI is based on these, it’s covered. Of course most social scientists would doubt, as do I, that AI innovators “grab the universe”, or that they induce substantial risks on others.

      • Wei Dai

        Surely the fact that nobody has property rights over almost the entire universe induces a land rush dynamic which is hugely wasteful? If the waste is not mainly in the race to develop AI and the attending risks, where do you think it lies?

      • http://overcomingbias.com RobinHanson

        People describe lots of scenarios full of various detail. They also say there will be a problem with changing values. I’ve tried to identify the most robust scenario features that lead to a value changing problem to which one might want a regulatory response. I don’t find it plausible that a big reason that value change is a problem is that the universe is mostly untouched. That doesn’t plausibly fit into standard reasons for regulations frameworks.

      • Wei Dai

        >That doesn’t plausibly fit into standard reasons for regulations frameworks.

        I’m confused. Earlier you suggested that “excess incentives to be first” and “moderating incentives to take risks that would affect others” are standard reasons for regulation. In the case of AI, what risks would that be, aside from the risk that AIs won’t share their creators’ respect for human lives and other values?

      • http://overcomingbias.com RobinHanson

        I’m trying to be clear about the chain of causation, and to talk most directly about the features that are the sort that show up in standard regulation discussions. There is a literature in law on discouraging people from doing dangerous things on your property. That lit makes some distinctions when they are relevant to the law choices, but isn’t going to make many other distinctions.

      • Wei Dai

        I still don’t understand what your disagreement is with me. Let me try another tack. You wrote that strong coordination would be needed to control value drift (and seemed to imply that it would be a good idea if only it were feasible and could be done at low cost). But why would we need that coordination in the first place? Isn’t it because excess incentives to be first and externalized risks make people individually want to build AIs as soon as possible, without waiting for technologies that would allow faithful transmission of values from humans to AIs and to subsequent generations? If you disagree with this, how would you explain why an unregulated market would not provide the optimal amount of value drift?

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Wei Dai, I think he’s looking at value drift as potentially bad because of the intergenerational conflict it causes. So, the issue for RH (unlike you) is social dislocation, not “bad values.”

  • Eliezer Yudkowsky

    It is disingenuous not to mention the tremendous lengths to which we have gone to account for and accommodate moral change and moral progress that is of the endogenous “Your legitimate descendants are strange and wonderful” type, as distinct from being suddenly replaced by paperclips. You may disagree but it is disingenuous to act like we have never addressed this, and straw to say that the opposing position is to freeze our own values forever.

    • http://overcomingbias.com RobinHanson

      One short blog post just can’t mention everything. I’m happy to grant that you see your concept of the values you want to preserve as encompassing some degree of variety. Nevertheless, it seems clear that you fear without some controls there’d be a drift of values, and think that drift could eventually go very far, to an end you very much dislike.

      • Jonathan Weissman

        “Nevertheless, it seems clear that you fear without some controls there’d be a drift of values, and think that drift could eventually go very far, to an end you very much dislike.”

        That’s true, but you seem to be implying, without any good argument, that there is something wrong with this.

      • http://overcomingbias.com RobinHanson

        The whole point of the post was to accept and acknowledge the general concern about value drift. I’m skeptical about foom, but not about value drift and intergenerational conflict.

  • Jonathan Weissman

    This seems to be entirely an argument by connotations (old futurists yelling at paperclip-maximizing AIs to get off their lawn), one that completely fails to actually engage with the positions it dismisses. (That eliminating music is different than liking different music, that eliminating consciousness is a bad outcome completely unlike any value drift that has ever occurred between human generations, that the complete destruction of anything that could reasonably be called humanity is different than the mild cookie-cutter rebellions of a new generation asserting independence from the previous one.)

    • http://overcomingbias.com RobinHanson

      I agree that you see the value changes that you are most concerned about as very large compared to typical intergenerational value changes today. Even so, it is the same kind of process, values changing over time, and the main difference is that you expect greatly accelerated rates of change soon, which result in greatly accelerated rates of value drift.

      • Jonathan Weissman

        I don’t think it is the same process. Intergenerational change is bouncing around within a tiny subset of preference space (and often looks bigger than it actually is due to applying the values in different circumstances). The processes I worry about escape that tiny space.

      • http://overcomingbias.com RobinHanson

        So a shovel and a steam shovel don’t do the same thing, because one is much bigger?

      • Jonathan Weissman

        Wanting to take apart all humans for paperclip parts isn’t just a bigger value change than the total of everything that has changed over human history so far, it is a change along a dimension that hasn’t previously changed.

      • Tyrrell_McAllister

        the main difference is that you expect greatly accelerated rates of change soon, which result in greatly accelerated rates of value drift.

        Do you also expect music and consciousness to disappear, but just at a slower rate? You seemed to imply otherwise when you wrote, “Nor do I think it obvious that value change will generically put us face-to-face with worst case daemon spawn.”

        If you do not expect consciousness and music to disappear (as long as we have descendants at all), then that seems to me to be at least a contender for the “main difference” between your position and that of the futurists that you’re criticizing.

        If you do expect consciousness and music to disappear eventually, then do you

        (A) not consider this to be the “worst case” scenario? or

        (B) agree that it is very bad, but think that efforts to prevent it with present regulation would somehow be even worse?

      • http://overcomingbias.com RobinHanson

        I don’t have strong expectations on this, so I have avoided expressing opinions on it.

  • http://www.abstractminutiae.com Samuel Hammond

    You are ignoring two other, countervailing trends that will surely reduce conflict. First, despite greater overlap the median age is ever higher. Older people have fewer conflicts. Second, you seem to assume cultural homogeneity within generations will continue, while we are already seeing it break down. My uncle’s generation grew up on the Beach Boys, my dad’s on John Hughes movies. It’s a lot harder to say what the millennials are growing up on other than extravagant choice and variety. This is a shift at a meta level that could very well persist across future generations, reducing conflict since cultural niches lack critical or proportional mass.

    • http://overcomingbias.com RobinHanson

      I accept that there may be contrary trends, but I don’t see these as being useful examples. The main worries are about the conflicts we will face when old, not about the conflicts median folks typically face. And more internal conflict within each generation doesn’t obviously reduce intergenerational conflict much.

  • http://grinfree.com Chris Santos-Lang

    All enduring regulating institutions (and thus modern values) seem to include regulations, such as the Golden Rule and the mandate to innovate, which serve to protect intergenerational value drift (http://grinfree.com/rules-against-rule-following/ ). I think you are correct that it would be challenging to construct a single AI today that implements these parts of our regulatory systems, but the problem is less with regulation than with the current state of AI. I appreciate your efforts to keep this issue on the radar.

  • Alexander

    Sometime I’d like to look at literature mentioning the Collingridge dilemma and see what that says about our ability to limit preference change by controlling future technology.

    http://criticaluncertainties.com/2013/10/28/collingridges-dilemma/

  • http://www.facebook.com/profile.php?id=1026609730 Jim Balter

    “The poster-children of regulatory overreach are 20th century totalitarian nations”

    Godwin’s Law.

    • Ronfar

      Godwin’s Law doesn’t apply to Soviet Russia.

  • Doug

    Just in case it flies under the radar, I’d like to point out Robin’s subtle comp sci pun: *daemon* spawn.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    The positions of those like EY, concerned about the fundamental values in the future, are truly crazy worries. What do these folks think “moral values” are? Explicit moral values are only devices allowing us to function consistently in the face of the exigencies of decision fatigue. ( http://tinyurl.com/7dcbt7y ) To wish to impose our values on the future is like insisting that the future use any other obsolete tool.

    But RH, you are partly to blame! You’ve defended em society based on a version of utilitarian values. You, too, think moral beliefs are in some sense true or false. There’s the mistake.

    • PlainDealingVillain

      Moral values are the deep-seated preferences of people, taken collectively.

  • Rationalist

    “And even if we were up to the task I’m not at all sure gains would be worth the quite substantial costs”

    I strongly disagree here – if we are talking about trying to prevent a future filled with nothing but “worst case demon spawn”/paperclip AI/”hardscrapple frontier”-type entities. In fact I question your sanity for wondering whether preventing such a future is “worth the costs”.

    Now obviously there is a continuum of drifted values. Maybe people in the future will change values a little bit, e.g. a world in which prostitution was legal and something most people did on a Sunday for a bit of extra cash/kudos, or a world in which people were actually all living in human-indistinguishable robotic bodies because that’s the easiest way to extend lifespans.

    But I think that the people who are pro-FAI are not ruling out the small drifts/changes. They’re trying to rule out the big ones, the ones that are morally indefensible.

    Also one should mention the idea of an FAI whose job is to split the universe up into separate parts and enforce peace and nonaggression between them; contemporary humans get one part, moderate transhumans get another, extreme posthumans get another part.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Around 1900, many were told that the efficient scale of organization, coordination, and control was rapidly increasing, and nations who did not follow suit would be left behind. Many were also told that regulatory solutions were finally available for key problems of inequality and inefficient resource allocation.

    Your interpretation, then, of the rise of state socialism is that it was based on an exaggeration of returns to scale.

    [Who sold this line to “many”? Economists? Politicians?]

    Here’s an observation that supports your conclusion. Among hard socialists and communists, it is accepted as an apparent article of faith that an international command economy would produce enormous efficiencies. For all the efforts made to prove or disprove Marx’s theory of the tendency of the rate of profit to decline, there’s been no work I’m aware of actually estimating the gains from creating an industrial singleton. I’ve never heard even mentioned the need for actually studying the question.

    On the other side of the argument, there are some remarkable successes by the early Soviet Union–also largely ignored these days, even by hard socialists–that might provide the best available evidence on the efficiencies of ultra-large-scale production.

    • IMASBA

      “there’s been no work I’m aware of actually estimating the gains from creating an industrial singleton.”

      Actually, didn’t Robin write about advantages of monopolies and oligopolies a while back (which is not the same as saying those would overall be better than a competitive system)? I think he said something along the lines of such big players spending more on R&D (though that’s on paper, and there are tax incentives to inflate the on-paper R&D budget). Also technocracy proposes industrial singletons for different sectors of the economy.

      The early Soviet Union was a weird place, with all kinds of economic experiments. Mostly, though, when something improved it was a matter of doing less badly than the notoriously corrupt and underdeveloped Russian Empire that came before.
