When Is “Soon”?

A few years ago, my co-blogger Eliezer Yudkowsky and I debated on this blog about his singularity concept. We agreed that machine intelligence is coming and will matter a lot, but Yudkowsky preferred (local) “foom” scenarios, such as a single apparently harmless machine in a basement unexpectedly growing so powerful over a weekend that it takes over the world, while drifting radically in values in the process. While Yudkowsky never precisely defined his class of scenarios, he was at least clear about this direction.

Philosopher David Chalmers has an academic paper on the singularity, and while he seems inspired by Yudkowsky-style foom scenarios, Chalmers tries to appear more general, talking only about the implications of seeing, “within centuries,” “AI++”, i.e., artificial intelligence “at least as far beyond the most intelligent human as the most intelligent human is beyond a mouse.” Chalmers worries:

Care will be needed to avoid … [us] competing [with them] over objects of value. … [They might not] defer to us. … Our place within that world … [might] greatly diminish the significance of our lives. … If at any point there is a powerful AI+ or AI++ with the wrong value system, we can expect disaster (relative to our values) to ensue. (more)

Chalmers’ generality, however, seems illusory, because when pressed he relies on foom-like scenarios. For example, responding to my commentary, Chalmers says:

Hanson says that the human-machine conflict is similar in kind to ordinary intergenerational conflicts (the old generation wants to maintain power in face of the new generation) and is best handled by familiar social mechanisms, such as legal contracts whereby older generations pay younger generations to preserve certain loyalties. Two obvious problems arise in the application to AI+. Both arise from the enormous differences in power between AI+ systems and humans (a disanalogy with the old/young case). First, it is far from clear that humans will have enough to offer AI+ systems in payment to offset the benefits to AI+ systems in taking another path. Second, it is far from clear that AI+ systems will have much incentive to respect the existing human legal system. At the very least, it is clear that these two crucial matters depend greatly on the values and motives of the AI systems. (more)

This vast power difference makes sense in a (local) foom scenario, but makes much less sense if we are just talking about speeding up civilization’s clock. Imagine our descendants gradually getting more capable and living faster lives, with faster tech, social, and economic growth, and shorter gaps between successive generations, so that as much change happens in the next 300 years as has occurred in the last 100,000. In this case why couldn’t our descendants manage their intergenerational conflicts similarly to the way our ancestors managed them? Similar events would happen, but just compressed closer in time.

Our ancestors have long had “enormous power differences” between folks many generations apart, and weak incentives to respect the wishes of ancestors many generations past. Their intergenerational conflicts were manageable, however, mainly because immediately adjacent overlapping generations had roughly comparable power (shared values mattered much less). So if immediately adjacent overlapping future generations also have comparable power, why can’t they similarly manage conflict?

Yes, familiar mechanisms for managing intergenerational conflict seem insufficient if a single machine with unpredictable values unexpectedly pops out of a basement to take over the world. But Chalmers doesn’t say he is focusing on foom scenarios; he says he is talking in general about great growth happening within centuries.

You might respond that our descendants will differ in having more generations overlap at any given point in time. But imagine that the growth speedup of the industrial revolution had never happened, so that the economy doubled only every thousand years, but that plastination was feasible, allowing brains to be preserved in plastic at room temperature and revived millions of years later. If a tiny fraction of each generation were put into plastic and revived over the next thousand generations, would this fact suddenly make intergenerational conflict unmanageable, making it crucial that the current generation ensure that no future generation ever had the wrong values?

I’m not saying there are no scenarios where you should care about descendant values, or even that you should be fully satisfied with traditional approaches to intergenerational conflict. But I am saying that having lots of growth in the next few centuries does not by itself invalidate traditional approaches, and that folks like Chalmers should either admit they are focused on foom scenarios, or explain why foom-like concerns arise in very different scenarios.

A lesson to draw from this example, I think, is that it is often insufficient to say that some important development X will happen “soon” – it is better to say that X will happen on a timescale short compared to another important related timescale Y. For example, if you tell me that I will die of cancer “soon,” what matters is that this cancer killing timescale is shorter than the timescale on which cancer cures are found, or on which I can accomplish important tasks like raising my children. I might not mind the cancer process going ten times faster than I’d expected, if the other processes go a hundred times faster. Since an awful lot of processes will speed up over the next few centuries, it is relative rates of speedup that will matter the most.

  • Sid

    “it is often insufficient to say that some important development X will happen “soon” – it is better to say that X will happen on a timescale short compared to another important related timescale Y.”

    Robin’s inner physicist.

  • John

    “Imagine our descendants gradually getting more capable and living faster lives, with faster tech, social, and economic growth, and shorter gaps between successive generations, so that as much change happens in the next 300 years as has occurred in the last 100,000. In this case why couldn’t our descendants manage their intergenerational conflicts similarly to the way our ancestors managed them?”

    This implies significant changes to our biology, at the very least. Which, again, raises the question of why anybody would think of EMs/cyborgs/mind uploads/whatever as their “descendants” rather than as a different non-biological/quasi-biological species whose only tangible connection with humanity is that the latter created the former.

    “Our ancestors have long had “enormous power differences” between folks many generations apart”

    Not true. There were differences all right, but not enormous. The chances of a primitive African tribesman killing a British soldier in the 19th century were probably orders of magnitude higher than the chances of humans (again, let’s highlight, HUMANS) killing an EM/SAI. Besides, one important point here is that people who lived in the same time period generally had a good probability of obtaining advanced military technology, even if they were relatively backward (think of American Indians using horses and rifles). Advanced military technology for SAI/EMs may be in such a form that humans cannot effectively use it, maybe even cannot comprehend it.

    I am also wondering what the purpose of this post was. To tell us that everything will be all right and we should not worry? I do not think it achieves that. Moreover, I think that position is so freaking dangerous that everyone who promotes it is compromising his morals (yes, even if they are utilitarian of the worst kind).

  • White Lotus

    The organization and maintenance of political institutions are by their nature resistant to change and adaptation. People are inherently conservative. The American Constitution is a good example of how, over many generations, a political order has failed to adapt its core institutions despite substantial generational, demographic, and cultural changes. Dismantling political institutions by enlisting the young generation was attempted in Cambodia under Pol Pot. It seems that, short of genocide and returning to Year Zero, the rate of institutional change required to reallocate power relations in response to emergent AI+1 challenges will fall short, as human political institutions are inherently premised on the ‘now’ changing at a relatively modest rate. Human beings need time to reach a coalition of various stakeholders that agree (after compromise), and by the time they reach agreement, the coalition will discover that time has moved on and the agreed changes are redundant. Unless AI+1 must also act through the process of slow coalition building, it would appear there is no mechanism that human beings can employ to meet the AI+1 challenge.

  • Michael Vassar

    But I am saying that having as much growth in the next few centuries as we have had over the previous few million centuries does by itself invalidate traditional approaches to intergenerational conflict.

  • Paul Christiano

    I like the general point, and think the analogy to inter-generational conflict is a productive way to think about this particular situation, but I don’t really buy your conclusions from the analogy.

    It seems to me that, if I’m not too worried about future generations, it’s because of (approximately) shared values and human tendencies towards respect/altruism/consistency for/towards/with other generations. The situation looks quite different when considering a transition to machine intelligence. I don’t understand why you discount these factors.

    • http://overcomingbias.com RobinHanson

      Shared values are only a minor reason why we do not now kill all the retired folks and take all their stuff. More important are a shared respect for law, and the fact that now-retired folks were of comparable power at an earlier point in time.

      • John

        I honestly do not see how you can make that point. First, shared respect for the law IS a shared value (I would also argue that it is not respect but fear of punishment that is the dominant factor here). Second, what type of power someone wielded long ago is not really a motivation for anything (at most, it is a human emotional motivation; I do not see artificial beings having human emotions in the medium to long run).

        The reason why we do not kill retired people has everything to do with emotions, empathy, and sheer biology, which tells us that some of them are our ancestors. And almost all of us are pretty disgusted by killing ancestors.

      • http://overcomingbias.com RobinHanson

        John, if we respect law because we fear punishment, that is not a shared value reason at all.

      • John

        Robin, I really did not expect this type of sloppy reading from you. I said that respect for law is a shared value AND that fear of punishment is the dominant factor why we do not kill old people. Not that fear of punishment is the reason why we respect the law.

      • Paul Christiano

        I care about what happens to most of the universe, which looks like it will be dominated by decisions of future generations. Legal mechanisms for controlling that sort of thing look pretty weak, and don’t seem to have played much of a role historically (as you often note). 

        But even re killing old folks, while I think respect for law is an important factor, a more natural factor is that if we kill old folks today we will expect future generations to behave similarly (for the obvious reasons, both causal and acausal). The strength of this effect through a transition depends on (several different notions of) the similarity between that transition and anticipated future transitions. 

      • http://overcomingbias.com RobinHanson

        OK, then why can’t future transitions have a similar degree of similarity to past transitions?

  • Andy McKenzie

    (+1), liked the post. 

    John, if people are not allowed to say what they think is true because they are afraid people will call them morally wrong, then we are even less likely to converge on the optimal way of approaching any possible problems. 

    Michael, it seems to me that much faster growth implies much faster generation turnovers, no?

    Paul, I think the “respect/altruism/consistency for/towards/with other generations” is generally a less motivating factor than respect for norms and the law. I’d bet we could find historical examples where there was quasi-inter-generational warfare in the absence of a good government with good laws. But, a point worth thinking about.

    I think the key question from Robin’s post is “what’s the chance of foom?” And I think the answer is largely based on “how easy will it be to design an agent that is smarter than humans?” And one useful way to approach that would be, which trade-offs exist in human minds that wouldn’t exist in machine minds?

    • John

      “if people are not allowed to say what they think is true because they are afraid people will call them morally wrong, then we are even less likely to converge on the optimal way of approaching any possible problems.”

      Give me just one example from human history when worrying less about an existential problem would have produced better results than what actually occurred.

      “Michael, it seems to me that much faster growth implies much faster generation turnovers, no?”

      No. Significantly faster generational turnover requires changes to human biology. Why would you assume that the relevant point of view from which we are supposed to evaluate this is posthuman rather than natural human?

      • Andy McKenzie

        > Give me just one example from human history when worrying less about an existential problem would have produced better results than what actually occurred.

        There are so many examples where people falsely predicted apocalyptic events and it turned out poorly for them and their society. An interesting, recent example is here: http://www.religiondispatches.org/archive/culture/5983/a_year_after_the_non-apocalypse:_where_are_they_now/

        > No. Significantly faster generational turnover requires changes to human biology. Why would you assume that the relevant point of view from which we are supposed to evaluate this is posthuman rather than natural human?

        Not necessarily. Consider that the first AIs are roughly of human-level intelligence in many ways. There is a “generation” of them, who have an incentive to cooperate and follow our norms/laws. Then the next “generation” of AIs is relatively more creative, but they still have an incentive to cooperate. And so on. This generation cycling can be fast enough to account for the overall fast growth that Michael was discussing.

      • John

        First, I would not classify this type of reasoning as thinking about “existential problems” on a social level. I should have made that clarification. There have been and there will be fringe movements that make this mistake. However, if we take only mainstream positions (and be certain that debate over human-AI relations will be pretty mainstream in 20 years), I still cannot think of a single such case. Granted, there may be 1 or 2, but for every one of them, I bet I can think of at least 5 opposite examples.

        “There is a “generation” of them, who have an incentive to cooperate and follow our norms/laws. Then the next “generation” of AIs is relatively more creative, but they still have an incentive to cooperate. And so on. This generation cycling can be fast enough to account for the overall fast growth that Michael was discussing.”

        Ok, that seems possible. However, it still remains an open question whether the process is sustainable, and for how long. If we assume that the benefit of respecting the rights of humans diminishes with each successive generation of AIs, then at a certain point humans are doomed. You would want to create AIs for whom cooperation between generations necessitates the survival of humans, maybe by instilling the founding myth of constitutionality for human property and human contracts and making every successive contract between AIs dependent on this very same constitution. However, we have witnessed a lot of revolutions throughout history in which constitutions have been scrapped…

      • Andy McKenzie

        > However, it still remains an open question whether the process is sustainable, and for how long.

        I agree these are important questions worth considering. However, I still find this scenario more plausible than the AI singleton one.

      • Strange7person

        > First, I would not classify this type of reasoning as thinking about “existential problems” on a social level. I should have made that clarification. There have been and there will be fringe movements that make this mistake. However, if we take only mainstream positions (and be certain that debate over human-AI relations will be pretty mainstream in 20 years)

        More fringe movements than I’d care to count thought they’d be mainstream 20 years later.

  • OwenCB

    I might be reading it wrong, but my impression of the difference in Chalmers’ position is not that he thinks AI will go foom, but that he isn’t imagining humans will have substantially increased in power in the meantime. (This seems to me to be a mistake.)

    I agree with Paul that your analogy isn’t quite as strong as you imply, though.

    • V V

      If I understand correctly, both Hanson and Chalmers/Yudkowsky believe in AI singularitarian “foom” scenarios. Their disagreement seems to be about the timescale (doubling times of years vs. days) and the origin of the AIs (brain emulation vs. fully artificial).

      I think that, while Hanson’s scenario is less extreme and therefore more probable than Chalmers/Yudkowsky’s scenario, it is still improbable.

      • Brian Huntington

        I don’t believe that Hanson believes emulations will foom. (“Foom” refers to bootstrapping to super-intelligence.) Hand-coded AIs would have much cleaner source code, which is what makes self-improvement plausible: a hand-coded AGI would have to be built from well-understood principles of general intelligence, whereas the whole point of whole-brain emulation is that deep knowledge of the nature of intelligence is unnecessary.

        Emulations would be monstrously complex, so making any improvements would be exceptionally hard, and the only way the emulation would be able to improve itself would be if it had deep knowledge of how intelligence works.

      • V V

        What do you consider as the AI “source code” exactly?

        You could have a very simple and clean general-purpose algorithm instantiated with an extraordinarily complex model with trillions of parameters. How could you improve something like that any more efficiently than you could improve an emulated brain?

  • Pole

    I think you are pretty limited in your thinking about aligning the values of machines and humans. It is actually pretty easy to support global growth, shared dreams of prosperity, human kindness, and happiness, and “accidentally” cause a pandemic of mental illness or fire some nukes from Iran, as the American-Jewish trojan horse Stuxnet shows us.

  • V V

    “Since an awful lot of processes will speed up over the next few centuries, it is relative rates of speedup that will matter the most.”

    How do you motivate that, especially in light of these types of arguments: http://physics.ucsd.edu/do-the-math/2011/07/galactic-scale-energy/

    It seems to me that you, Yudkowsky, Chalmers, Kurzweil et al. are all arguing for slightly different versions of a generally improbable (or at least not well argued) scenario.

    • Brian Huntington

      That article is irrelevant. I’m not sure exactly what you are imagining, but there is no reason creating a brain emulation or fooming AI would require insane levels of energy. 

      Quite the opposite, actually. It’s likely that AI would be the more energy-efficient option, considering that human bodies need massive amounts of energy just to survive, and that’s before you get into the transportation of physical objects (including ourselves). An upload society would likely be far more energy-efficient, so energy constraints actually place more pressure on a society to adopt uploads or other AI.

      Of course, none of this is to say that AI (through whole-brain emulation or otherwise) is actually technically feasible. (It may or may not be; I’m not an AI expert.)

      • V V

        Computers require a lot of energy and other physical resources.

        The size (GDP) of the economy of a society of digital agents (AIs or brain emulations) would be limited by the amount of available computational power. Once you cap energy and physical resources, the only way to increase computational power is via hardware efficiency improvements. Efficiency improvements on the order of 100% per year are unreasonable, and even a more modest 2% per year for 100 years might be over-optimistic.
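
        A quick back-of-the-envelope check of that compounding point (a minimal sketch, using only the two illustrative growth rates mentioned above):

        # Compare 100%/year vs. 2%/year hardware-efficiency growth over a century
        # (purely illustrative rates taken from the comment above).
        fast = 2.00   # 100% improvement per year, i.e. doubling annually
        slow = 1.02   # 2% improvement per year
        years = 100
        print(f"100%/yr for {years} yrs: ~{fast**years:.2e}x efficiency")  # ~1.27e+30x
        print(f"2%/yr for {years} yrs: ~{slow**years:.1f}x efficiency")    # ~7.2x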

      • http://www.facebook.com/people/Mark-Bahner/100001061961585 Mark Bahner

        “Computers require a lot of energy and other physical resources.”

        As I pointed out on my blog, the amount of power required for a given number of instructions per second is coming down by a factor of 100 every decade. So supercomputers that now approach the human brain in calculations per second (500 teraflops to 20 petaflops) consume on the order of 50,000 times as much power as a human brain.

        But if the factor-of-100-per-decade reduction continues, they will require the same amount of power as a human brain within 22 years.
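
        For what it’s worth, the arithmetic behind that figure (a minimal sketch, taking the ~50,000x power gap and the 100x-per-decade trend above at face value):

        import math

        power_gap = 50_000     # assumed supercomputer-to-brain power ratio (from above)
        gain_per_decade = 100  # assumed efficiency improvement per decade (Koomey-style trend)

        decades = math.log(power_gap) / math.log(gain_per_decade)
        print(f"~{decades:.1f} decades, i.e. roughly {10 * decades:.0f} years")
        # -> ~2.3 decades, roughly 23 years (consistent with the ~22-year figure above)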

      • V V

         Where do you get the estimate of the computational resources of the human brain?

        And how long can Koomey’s law hold? Maybe it will hold for 22 years, but 100 years seems a wild extrapolation. Even today, energy expenditure is becoming a larger and larger operating cost for data centers: http://en.wikipedia.org/wiki/Data_center#Energy_use

      • http://www.facebook.com/people/Mark-Bahner/100001061961585 Mark Bahner

        “Where do you get the estimate of the computational resources of the human brain?”

        I made it up. This is the Internet.

        Seriously, Ray Kurzweil has estimated the human brain at 20 quadrillion instructions per second. And I *thought* Hans Moravec estimated the human brain at 500 trillion instructions per second. But he may have estimated it at 100 trillion instructions per second.

        So….somewhere between 100 trillion instructions per second and 20 quadrillion instructions per second.

        “And how long can Koomey’s law hold? Maybe it will hold for 22 years, but 100 years seems a wild extrapolation.”

        Even if it “only” lasts for 30-40 more years, we’ll be well into Singularity territory.

      • V V

        Neither Kurzweil nor Moravec is a neuroscientist. IIUC, the Blue Brain Project people estimate that 1 exaflop will be required to emulate a human brain in real time at intracellular resolution: http://bluebrain.epfl.ch/page-58110-en.html

        Maybe that resolution is excessive for behavioral equivalence, but it provides an order of magnitude.

        It’s not obvious that Koomey’s law can last for 30-40 years, and even if it does, it wouldn’t necessarily imply the singularitarian scenarios envisioned by Hanson and Chalmers.

  • Klevtie

    I think that much of the worst conflict we have already seen is *because* of foreseeable change (think Lebensraum or the Arab-Israeli conflict). If existing elites could observe thousand-year changes in one generation, I predict vast and wrenching conflicts. The variance of responses would go up, and the risk of civilization-destroying changes would increase.

  • http://www.facebook.com/CronoDAS Douglas Scheinberg

    When Is “Soon”?

    http://www.wowwiki.com/Soon

  • Pingback: Overcoming Bias : Regulating Infinity