Value Explosions Are Rare

Bryan Caplan:

I’m surprised that Robin is so willing to grant the plausibility of superintelligence in the first place. Yes, we can imagine someone so smart that he can make himself smarter, which in turn allows him to make himself smarter still, until he becomes so smart we lesser intelligences can’t even understand him anymore. But there are two obvious reasons to yawn. 1. … Even high-IQ people who specifically devote their lives to the study of intelligence don’t seem to get smarter over time. If they can’t do it, who can? 2. In the real-world, self-reinforcing processes eventually asymptote. (more)

Bryan expresses a very standard economic intuition, one with which I largely agree. But since many of my readers aren’t economists, perhaps I should elaborate.

Along most dimensions, having more of a good thing leads to less and less more of other good things. In economics we call this “diminishing returns,” and it is a very basic and important principle. Of course it isn’t always true. Sometimes having a bit more of one good thing makes it even easier to get a bit more of other good things. But not only is this rare, it almost always happens within a limited range.
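A toy numerical sketch of this contrast (nothing below depends on the particular functions chosen): a concave payoff whose marginal gains shrink with every extra unit of input, next to a self-reinforcing payoff that grows fastest over a middle range and then saturates.

```python
import math

def diminishing_payoff(x):
    """Concave payoff: each extra unit of input buys a smaller extra gain."""
    return math.log(1 + x)

def reinforcing_payoff(x, cap=10.0, midpoint=5.0):
    """Self-reinforcing for a while, but saturating at a cap (a logistic curve)."""
    return cap / (1 + math.exp(-(x - midpoint)))

for x in range(0, 11, 2):
    d = diminishing_payoff(x + 1) - diminishing_payoff(x)
    r = reinforcing_payoff(x + 1) - reinforcing_payoff(x)
    print(f"input {x:2d} -> marginal gain: diminishing {d:.3f}, reinforcing {r:.3f}")
```

The reinforcing curve’s marginal gains rise for a while and then fall off; that middle stretch is the “limited range” in question.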

For example, you might hope that if you add one more feature to your product, more customers will buy it, which will give you more money and info to add another feature, and so on in a vast profit explosion. This could make the indirect value of that first new feature much bigger than it might seem. Or you might hope that if you achieve your next personal goal, e.g., to win a race, then you will have more confidence and attract more allies, which will make it easier for you to win more and better contests, leading to a huge explosion of popularity and achievement. This might make it very important to win this next race.

Yes, such things happen, but rarely, and they soon “run out of steam.” So the value of a small gain is only rarely much more than it seems. If someone asks you to pay extra for a product because it will set off one of these explosions for you, question them skeptically. Don’t let them pull a Pascal’s wager on you, claiming that even if the chance is tiny, a big enough explosion would justify it. Ask instead for concrete indicators that this particular case is an exception to the usual rule. Don’t invest in a startup just because, hey, their hockey-stick revenue projections could happen.

So what are some notable exceptions to this usual rule? One big class of exceptions is when you get value out of destroying the value of others. Explosions that destroy value are much more common than those that create value. If you break just one little part in a car, the whole car might crash. Start one little part of a house burning and the whole house may burn down. Say just one bad thing about a person to the right audience and their whole career may be ruined. And so on. Which is why wars, both literal and metaphorical, are full of explosions, both literal and metaphorical.

Another key exception is at the largest scale of aggregation — the net effect of on average improving all the little things in the world is usually to make it easier for the world as a whole to improve all those little things. For humans this effect seems to have been remarkably robust. I wish I had a better model to understand these exceptions to the usual rule of rare value explosions.

  • rrb

    In molecular biology, scientific discoveries often lead to new experimental techniques.

    Like, Taq polymerase –> PCR –> databases of sequenced genomes with each gene annotated by function, which make it much easier to find and study enzymes like Taq polymerase that are unusual variants on familiar functions

    Is that an explosion?

    • anonnn

      Robin is talking about explosions which lead to large and sustained macro consequences. So I think this case doesn’t count or only does so in a very minimal sense.

      • rrb

        Well, obviously molecular biology has led to large and sustained macro consequences. Isn’t it the basis of modern medical research?

        Another thing to look for though is, does it have some kind of limit? A lot of things have this structure, where progress accelerates progress, but they have some kind of limitation and the growth slows down.

    • VV

      Unless there is a real positive feedback loop, it’s not.

      • rrb

        Well, the new experimental techniques lead to new scientific discoveries; that closes the loop.

  • Siddharth

    What are the main mechanisms which drive most value explosions to peter out?

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Value explosions are deeply analogous (it seems to me) to “violations” of the law of entropy in physics, with the law of diminishing returns as the analog of the 2nd law. (See my previous comment to this post.)

      If one were asked, “what are the main mechanisms that drive the law of entropy,” is there an answer? There are all sorts of mechanisms because the law is statistical and very general. I suspect the same is true regarding diminishing returns and value explosions.

      The question itself could reflect the “foom fallacy.” We look for a mechanism, and not being able to anticipate what it will be, we aren’t moved by the realization that some mechanism of constraint must come into play, for a priori, statistical reasons.

      If this is correct, foomists are today’s perpetual-motion machine “inventors.”

      I haven’t watched the original debate, but it sounds like nobody has previously raised Bryan Caplan’s decisive objection to the myth of the unlimited intelligence explosion.

  • Robert Koslover

    I think we can all agree that (1) better computers help us design better computers, and (2) better computers help us make many, many other things better too. But this property of computers (and of some other tools), though very important, also seems to me to be rather rare. After all, having a great pair of pants, for example, will only help me a little bit in designing/building an even greater pair of pants, right?

    • Max

      This has happened for many other technologies than computers. Many information storage, retrieval and communication methods ultimately helped design better such methods over the course of history. I’d say even better pants were of marginal use in this process in small, not easily visible ways.

    • VV

      Actually, that’s a property of many kinds of tools, starting from paleolithic stone carving tools.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    It seems to me that the law of diminishing returns is essentially an expression of statistical regression to the mean. For any incremental input causing improvement, the subsequent improvement caused by an equal increment regresses (the small simulation at the end of this comment illustrates the idea). If that’s right, it would seem Bryan Caplan has a good argument that diminishing returns apply universally.

    Your examples don’t convince me otherwise. Perhaps I don’t understand the first, where breaking a little part of the car breaks the whole. This conforms (trivially) to diminishing returns: you break something else, and you get no further damage.

    I wish you’d been a little less informal about the improvement of little things, so we might look up a principle, as I don’t see how it violates diminishing returns. Say we adopt a policy of improving all the little things in the world. Do we get more improvement per unit of effort over time? It seems to me we “pick the low hanging fruit” and the process gets harder over time.

    Say you have a technological revolution. You get dramatic effects in the immediate decades, but the rate of change slows, in accordance with the law of diminishing returns. Eventually, it hits a wall with the depletion of resources.

    Getting back to foom! Of any investment of effort, money, brain power–or whatever input you choose to measure–in any project (the term broadly conceived) taking us closer to AI, the prediction that there’s a wall–an asymptote–applies. This seems to entail that we can’t increase intelligence indefinitely.

    Expecting an open-ended explosion denies this conclusion. It’s a kind of magical thinking. Explosions, too, are subject to diminishing returns if considered at the right level of abstraction.
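    A minimal simulation of that regression-to-the-mean analogy (a sketch, assuming each outcome is an equal mix of persistent ability and one-off luck): the most extreme first-round performers end up, on average, only about half as far above the mean the second time.

    ```python
    import random

    random.seed(0)
    N = 100_000

    # Outcome = persistent ability + independent luck, both standard normal.
    ability = [random.gauss(0, 1) for _ in range(N)]
    round1 = [a + random.gauss(0, 1) for a in ability]
    round2 = [a + random.gauss(0, 1) for a in ability]

    # Take the top 1% of round-1 performers and see how they score in round 2.
    top = sorted(range(N), key=lambda i: round1[i], reverse=True)[: N // 100]
    mean1 = sum(round1[i] for i in top) / len(top)
    mean2 = sum(round2[i] for i in top) / len(top)
    print(f"top 1% in round 1, mean score: {mean1:.2f}")  # well above average
    print(f"same people in round 2, mean:  {mean2:.2f}")  # roughly half as far above
    ```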

  • Cambias

    I can think of one example: the explosive growth of the “gunpowder empires” of the late Middle Ages. Basically the first state in each region to adopt artillery could expand more or less unopposed until it bumped into another gunpowder empire coming the other way. Physical expansion increased the resource base, which allowed more expansion, etc., and a central government with artillery could efficiently put down the local rebellions which plagued earlier superstates. The borders of those gunpowder empires defined the world political map for the next half millennium.

    • http://overcomingbias.com RobinHanson

      That would be an explosion of value gained from destroying the value of others. I talked about that exception in the post.

      • Doug

        Cambias’ example isn’t directly analogous to your house-burning example. Those gunpowder empires didn’t completely destroy their non-gunpowder conquests, but most often converted them to some derivative of their civilization. Debates about the merits of colonialism notwithstanding, many of the European conquests saw large rises in living standards associated with the import of European technology and institutions. The guns only destroyed inefficient regimes that were acting as barriers to advancement.

        I’ll posit a similar model about intelligence explosions. Most matter readily available to humans is “dumb.” There’s only about 18 billion pounds of truly smart matter on the planet (6 billion people * 3 pound brains). Even semi-conductors are mostly dumb, as there are large classes of cognitive problems that they’re currently incapable of dealing with. There is no current way to increase the amount of smart matter except very inefficient standard human reproduction.

        AGI unlocks vast amounts of matter that can be made smart. Even at very inefficient intelligence densities, turning an even small fraction of all this dumb matter smart increases aggregated intelligence by at least several orders of magnitude. It doesn’t represent an equilibrium shift, but a phase transition.

  • IMASBA

    Bryan Caplan “But there are two obvious reasons to yawn. 1. … Even high-IQ people who specifically devote their lives to the study of intelligence don’t seem to get smarter over time. If they can’t do it, who can? 2. In the real-world, self-reinforcing processes eventually asymptote.”

    If the Nazis had invented the atomic bomb first then someone else would have gotten the atomic bomb as well 5 years later, except that 5 minutes after the Nazis invented the atomic bomb there’d be no one else left to ever re-invent the atomic bomb again.

    Standard economic intuition tacitly assumes sudden changes are small compared to the overall economy and that everyone will be basically OK because markets, people and power-sharing structures are resilient enough. Suffice to say standard economic intuition does not apply to many of Robin’s topics.

    • http://entitledtoanopinion.wordpress.com TGGP

      I think you have an inaccurate conception of Nazi Germany and the potency of the earliest atomic arsenals.

      • IMASBA

        It’s a figure of speech, but the point should be clear: there are many situations where gaining an advantage can enable one to decimate the competition before they’ve caught up.

      • http://entitledtoanopinion.wordpress.com TGGP

        It’s possible you’re right, but the plausibility of that argument is precisely what Hanson and Yudkowsky disagree about. Falling back on “the point should be clear” qualifies as begging the question (though to avoid confusion we should say “assuming the conclusion”).

  • efalken

    I remember a model where smart agents could ‘learn faster’ than others, and this generated increasing returns to scale… I think the problem is that intelligence is generally domain specific, so any one agent’s knowledge asymptotes in a specific domain. All domains have limited value in themselves. Breakout growth and macro-inventions come from connections between previously unconnected domains, which is basically serendipity, not anything that benefits from extensive investment, and these generate the raw increases in human civilization.

    So, I’m not sure it’s possible for any one agent to capitalize on this, and become like The Lawnmower Man, someone infinitely smarter than his creator.

    • Marc Geddes

      What if we design an agent that is an expert in the domain ‘Intelligent minds’? We could do this by looking for a general method of knowledge representation in this domain.

      Simply find the minimum number of ‘super-concepts’ that, once coded, make your program ‘AGI complete’ in the domain ‘intelligent minds’. The idea is to find the minimum set of super-concepts (i.e., the super classes for your class diagram, or equivalently, the primitives in your ontology) which, when used as prototypes, enable your program to form effective representations of any other concept whatsoever in the domain ‘intelligent minds’.

      So simply code these super-concepts, hit compile and run. The result? FOOM….

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        We could do this by looking for a general method of knowledge representation in this domain.

        The principle of diminishing returns implies that there can’t be a limited group of concepts that can generate everything else. That would be to deny that the principle of diminishing returns applies to the acquisition of a general concept.

        That concepts can’t be reduced to primitives is the reason that logical positivism collapsed. It’s the reason philosophers now believe that most concepts don’t have necessary and sufficient conditions.

      • Marc Geddes

        I agree that most concepts don’t have necessary and sufficient conditions. I talked about ‘primitives’ only in a weaker sense: the hypothesis is that there is some minimal (finite) set of concepts that can be used as prototypes for all other concepts.
        You start with fuzzy prototypes, not precise definitions, and the program creates new concepts by ‘optimizing’ these initial super-concepts, generating ever more precise variations on the initial fuzzy prototypes.

    • IMASBA

      “I think the problem is that intelligence is generally domain specific, so any one agent’s knowledge asymptotes in a specific domain. All domains have limited value in themselves”

      Basically, increasing the clock speed of the mind would lead to higher IQ scores. Creativity would not improve much until you also improve memory recall, working memory and memory formation (this increases the speed at which you make associations), but that can be done as well. Finally you can build in modules that are geared to specific tasks, such as switching on savant-like abilities at will or adding statistical intuitions that are mathematically correct (unlike most standard human intuitions).

      Sure, at some point there can be no further progress, but long before then the superminds will have enslaved or eradicated everyone else.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Sure, at some point there can be no further progress, but long before then the superminds will have enslaved or eradicated everyone else.

        How could you know that the second independent clause is true?

        Moreover, how do you and Robin know that whatever technology would serve to copy brain connections won’t hit a wall long before it would enable ems?

        [We seem disposed to deny such walls, even when we know what they are. Isn’t quantum indeterminacy and probabilism “merely” an absolute wall to the knowledge project? The “interpretations” of quantum mechanics seem to express our need to interpret lack of epistemic access as ontological.]

      • IMASBA

        “How could you know that the second independent clause is true?”

        It’s possible and highly likely given the course of human history. Even small differences in intelligence or technology have led to devastating results over and over again. There’s always the possibility of the future being different, but I wouldn’t bet on it in Robin’s eat-or-be-eaten ultra-capitalist dystopia.

        “Moreover, how do you and Robin know that whatever technology would serve to copy brain connections won’t hit a wall long before it would enable ems?”

        EMs aren’t the only way; I talked about upgrading the human brain (research suggests savant-like abilities can be unlocked in every brain and the human memory system can be improved), and once you’re at the point where a computer can run an EM, you’re also at the point where you can build an artificial mind from scratch.

    • VV

      I remember a model where smart agents could ‘learn faster’ than others, and this generated increasing returns to scale…

      This seems counterintuitive. Can you please elaborate?

      • efalken

        Look at economic papers on “Learning by doing”, such as ‘Learning by doing and introduction of new goods’ by Nancy Stokey circa 1988. The idea is, people choose how much to learn as a strategy, and this investment leads to more learning and more learning productivity. Clearly, this makes intuitive sense at some level, because learning math or language enables you to understand a higher quantity and quality of ideas that would be impossible otherwise.

        Yet, this didn’t generalize as well as expected circa 1990, and the thread kind of died. I think that’s because there’s a limit on what you can learn about learning (the toy recursion sketched below makes the point concrete).

        I think defining ‘intelligence’ as a domain to which intelligence can be applied is like assuming a theory can be complete and consistent… it seems possible, but in fact it is true only for a handful of uninteresting cases.
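        A toy recursion in this learning-by-doing spirit (a sketch of the general point, not Stokey’s actual model): let each period’s learning be proportional to current knowledge raised to some power, and that exponent alone decides whether growth flattens, compounds steadily, or explodes.

        ```python
        def knowledge_path(exponent, steps=30, k0=1.0, rate=0.1):
            """Iterate k <- k + rate * k**exponent.
            exponent < 1: returns to knowledge diminish, growth slows to a crawl;
            exponent = 1: steady exponential growth;
            exponent > 1: explosive growth that blows up in finite time (a 'foom')."""
            k = k0
            for _ in range(steps):
                k += rate * k ** exponent
            return k

        for e in (0.5, 1.0, 1.5):
            print(f"exponent {e}: knowledge after 30 steps = {knowledge_path(e):,.1f}")
        ```

        Whether real research sits above or below an exponent of one is the empirical question; the sketch only shows how much hangs on it.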

      • VV

        I skimmed the paper and it doesn’t seem to me that it supports that claim.

        It seems to be an economic model where one of the inputs to production is a “knowledge capital”, but no assumption is made about how this “knowledge capital” changes over time, other than it being monotonically non-decreasing.

        In particular there is no claim about increasing returns.

    • M_1

      “I think the problem is that intelligence is generally domain specific, so any one agent’s knowledge asymptotes in a specific domain.”

      I understand that your main point is cross-domain creativity, but even if you limit the discussion to iteratively improving capabilities within multiple domains, it isn’t going to take long for an AGI to become arguably superintelligent compared to a plain old human brain.

      I think it’s presumptuous to categorically state that superintelligence either must or can’t result from a self-modifying AGI, but I find it awfully difficult to deny the possibility, considering the analytical capabilities that such a self-modifying entity is likely to possess.

  • Jacob

    Thank you very much for elaborating on your intuitions, Robin.

  • NRWO

    “Along most dimensions, having more of a good thing leads to less and less more of other good things. In economics we call this ‘diminishing returns,’ and it is a very basic and important principle. Of course it isn’t always true. Sometimes having a bit more of one good thing makes it even easier to get a bit more of other good things.”

  • Matt Young

    Bryan can never see an intelligence far greater than his personal intelligence. What he will observe is an artificial intelligence that always downgrades its revealed aspects so they just stretch the limits of Bryan’s brain.

  • Stephen

    I don’t know, this seems like a fairly superficial criticism of the intelligence explosion hypothesis to me. In regard to the first point, the obvious difference between high-IQ individuals and AIs is that humans don’t have access to their own source code. At least in terms of intelligence amplification, that makes all the difference. As for the second point: obviously yes, there will be diminishing returns and an asymptote – eventually. Simply saying that an asymptote exists doesn’t tell you *where* it exists, or whether it’s at a high enough level to be dangerous. For this point to carry you’d need an argument for why the fundamental limits of intelligence are close enough to human as to not be a concern.

    As for why value explosions are so rare, that seems thermodynamic at heart. Disordered systems tend to be less valuable, and most processes increase disorder, so a typical explosion will decrease value. Exceptions will therefore be due to processes that create order – look closely at any value explosion and I suspect you’ll find at the source some kind of negentropy pump (ie, evolution or intelligence). Of course, that’s not a very predictive model (plenty of intelligent processes don’t lead to value explosions), but I think it gets to the heart of why Eliezer considers the intelligence explosion unique and is so concerned about it.

    • IMASBA

      Not only do humans not have access to their own source code (so they can’t make themselves much smarter), they also have to rely on chance to produce equally, or more, intelligent offspring, have to wait years to find out whether that offspring is smart, then have to spend years teaching it, and are then still limited by the human lifespan and the limits of natural human biology.

      An AI could literally expand its own “brain” for millennia.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      As for the second point: obviously yes, there will be diminishing returns and an asymptote – eventually. Simply saying that an asymptote exists doesn’t tell you *where* it exists, or whether it’s at a high enough level to be dangerous. For this point to carry you’d need an argument for why the fundamental limits of intelligence are close enough to human as to not be a concern.

      The asymptote argument undercuts the main reason it’s assumed that super-intelligence will (eventually) exist.

      The argument is that the (imo, well-founded) materialist premise that the mind consists solely of the brain’s information processing entails the eventuality of constructing (or copying) a mind at least as intelligent as the smartest humans possess.

      If we lack rudimentary knowledge of where it will asymptote, we have no grounds for assigning a substantial probability to the asymptote’s exceeding the level of human (cross-domain) intelligence. This doesn’t rule out the possibility that someone will make (or even has made) a specific argument showing that AI will asymptote late. (This mere possibility does justify a nonzero probability that it will.)

      But the asymptote argument reverses the accustomed burden of proof in transhumanist discussions, where it is assumed that they’re entitled to the foregone conclusion that continued technical progress plus metaphysical possibility entails eventual super AI–or at the least human-level cross-domain AI. (Thus there are arguments, as between Hanson and Yudkowsky, about whether human-level machine intelligence will first take the form of copying or of constructing, with the common assumption that it will eventually.)

      • IMASBA

        Selective breeding, bionics and genetic manipulation CAN produce superminds inside human skulls one day. There already exist rare individuals who are extremely creative and good at problem solving (Gauss, Euler, Newton, Einstein), or who can remember what they had for breakfast 20 years ago, or who can calculate 3645.24 / 841.3 in under a second, so we know these abilities are possible even in naturally born human brains. Unless we radically reform our societies, these abilities will one day be something billionaires can buy and use to enslave the average-IQ peons.

        Whether it’s the stuff I described here, EMs or bottom-up AI, the future is very bleak for us peons unless we change things while we still can.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Selective breeding, bionics and genetic manipulation CAN produce superminds inside human skulls one day.

        The existence of something doesn’t prove it can be engineered. Why assume we can duplicate flukes?

  • JW Ogden

    It seems to me that it is often not intelligence that is lacking but rather data and machines. I.e., we know how animals convert the chemical energy in sugar into electricity at 98% efficiency, but we cannot match that efficiency because we do not have the ability to work at the atomic level. The point being that even great intelligence has its limits. Some of the people who thought the world was flat were plenty intelligent.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Another key exception is at the largest scale of aggregation — the net effect of on average improving all the little things in the world is usually to make it easier for the world as a whole to improve all those little things. For humans this effect seems to have been remarkably robust.

    This seems more than just another exception. The others are examples of rare explosions; this would seem to be an absolute failure of the law of diminishing returns: the other rare explosions ultimately show diminishing returns.

    Does this “robust” effect have a name? I don’t grasp what phenomenon it refers to. Seemingly, if you improve all the little things in the world, you will find it harder to improve them further, having gathered the low-hanging fruit with respect to improvements. A case has even been made that this has actually occurred with respect to technology.

    Could someone provide further explanation of what’s meant by this robust effect, an example, a name, or a link?

  • Philip Goetz

    The intuition for intelligence explosion is that, at any one point in time, a self-modifying AI is smarter than the AI that designed it, and therefore can improve its design. But that intuition doesn’t prove that self modification doesn’t converge asymptotically. The question is how the complexity of an artifact that a brain can design scales with the complexity of the brain.

    One approach to this would be to find out how the number of steps it takes to prove or disprove statements in propositional logic of length n, given axioms of length n, scales with n.
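    One concrete version of that scaling question (a sketch only, with brute force as the baseline): checking whether a propositional formula holds under every assignment takes work that doubles with each added variable, so the live question is whether more intelligence buys better-than-brute-force scaling.

    ```python
    from itertools import product

    def brute_force_valid(formula, n_vars):
        """Check a propositional formula under every truth assignment.
        Brute force examines all 2**n_vars assignments, so the work
        grows exponentially with the number of variables."""
        checked = 0
        for assignment in product((False, True), repeat=n_vars):
            checked += 1
            if not formula(assignment):
                return False, checked
        return True, checked

    def tautology(assignment):
        # (x or not x) for every variable: always true.
        return all(x or not x for x in assignment)

    for n in (4, 8, 12, 16):
        valid, work = brute_force_valid(tautology, n)
        print(f"{n} variables: valid={valid}, assignments checked={work}")
    ```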