Slowing Computer Gains

Whenever I see an article in the popular sci/tech press on the long term future of computing hardware, it is almost always on quantum computing. I’m not talking about articles on smarter software, more robots, or putting chips on most objects around us; those are about new ways to use the same sort of hardware. I’m talking about articles on how the computer chips themselves will change.

This quantum focus probably isn’t because quantum computing is that important to the future of computing, nor because readers are especially interested in distant futures. No, it is probably because quantum computing is sexy in academia, appearing often in top academic journals and university press releases. After all, sci/tech readers mainly want to affiliate with impressive people, or show they are up on the latest, not actually learn about the universe or the future.

If you search for “future of computing hardware”, you will mostly find articles on 3D hardware, where chips are in effect layered directly on top of one another, because chip makers are running into limits to making chip features smaller. This makes sense, as that seems the next big challenge for hardware firms.

But in fact the rest of the computer world is still early in the process of adjusting to the last big hardware revolution: parallel computing. Because of dramatic slowdowns in the last decade of chip speed gains, the computing world must get used to writing a lot more parallel software. Since that is just harder, there’s a real economic sense in which computer hardware gains have slowed down lately.

The computer world may need to make additional adaptations to accommodate 3D chips, as just breaking a program into parallel processes may not be enough; one may also have to keep relevant memory closer to each processor to achieve the full potential of 3D chips. The extra effort required to go 3D and to make these adaptations suggests that the rate of real economic gains from computer hardware will slow down yet again with 3D.

Somewhere around 2035 or so, an even bigger revolution will be required. That is about when the (free) energy used per gate operation will fall to the level thermodynamics says is required to erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs that only rarely erase bits. See (source):

[Figure: historical trend of energy used per computing operation]
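
For concreteness, the floor in question is Landauer’s bound: erasing one bit costs at least kT·ln 2 of free energy. A minimal sketch of the arithmetic, assuming room temperature:

    import math

    # Landauer's bound: erasing one bit dissipates at least k*T*ln(2).
    k = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0             # assumed room temperature, K

    bound = k * T * math.log(2)
    print(f"Landauer bound at {T:.0f} K: {bound:.2e} J per bit erased")
    # ~2.9e-21 J; today's gates dissipate far more per operation, which
    # is why energy per operation can keep falling for a couple more decades.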

Computer operations are irreversible, and in effect use (free) energy to erase bits, when they lack a one-to-one mapping between input and output states. But any irreversible mapping can be converted to a reversible one-to-one mapping by saving its input state along with its output state. Furthermore, a clever fractal trick allows one to create a reversible version of any irreversible computation that takes exactly the same time, costing only a logarithmic-in-time overhead of extra parallel processors and memory to reversibly erase intermediate computing steps in the background (Bennett 1989).
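
As a toy illustration of the save-your-inputs idea (a sketch of the basic construction only, not Bennett’s full scheme): an irreversible NAND becomes reversible once the inputs are carried along with the output, as in the Toffoli gate.

    # Irreversible NAND: four input states collapse onto two outputs,
    # so the inputs cannot be recovered from the result.
    def nand(a, b):
        return 1 - (a & b)

    # Reversible embedding: keep the inputs alongside the output.
    # This is the Toffoli (controlled-controlled-NOT) gate; with the
    # target wire set to 1, its third output is NAND(a, b).
    def toffoli(a, b, c):
        return (a, b, c ^ (a & b))

    # Check the mapping is one-to-one, so it can be run backwards
    # without erasing any bits.
    inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
    outputs = [toffoli(a, b, 1) for a, b in inputs]
    assert len(set(outputs)) == len(inputs)
    for (a, b), out in zip(inputs, outputs):
        print(f"in=({a},{b}) -> out={out}, nand={nand(a, b)}")

Bennett’s trick then uncomputes such saved intermediate bits in the background, rather than storing them forever.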

Computer gates are usually designed today to change as rapidly as possible, and as a result in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation (Younis 1994).
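
A toy model of that tradeoff (the constants here are illustrative assumptions, not device data):

    # Adiabatic operation: dissipation per gate operation scales with
    # clock speed, so halving the speed halves the bits effectively
    # erased per operation.
    E0 = 1.0e-17   # assumed dissipation at the reference speed, J/op
    f0 = 1.0e9     # assumed reference clock speed, Hz

    def dissipation_per_op(f):
        return E0 * (f / f0)   # proportional-to-speed model

    for f in (f0, f0 / 2, f0 / 4):
        print(f"clock {f:.1e} Hz -> {dissipation_per_op(f):.1e} J per op")
    # Total heat per second falls as f^2 (fewer ops, and cooler ops),
    # but so does the number of operations performed.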

Once reversible computing is the norm, gains from making more, smaller, faster gates will have to be split, with some going to let gates run more slowly, and the rest going to more operations. This will further slow the rate at which the world gains more economic value from computers. Sometime much further in the future, quantum computing may be feasible enough that it is sometimes worth using special quantum processors inside larger ordinary computing systems. Fully quantum computing is even further off.
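
One hypothetical way to quantify that split: if a fixed fraction of each (log) hardware gain must go to running gates more slowly, the effective doubling time stretches accordingly. A sketch, with an assumed raw doubling time:

    # Toy model: raw gate throughput per dollar doubles every 2 years,
    # but under reversible computing some share of each gain (in log
    # terms) is spent on slowing gates to cut energy per operation.
    raw_doubling_years = 2.0   # assumed raw hardware doubling time

    for share_to_slowing in (0.0, 0.25, 0.5):
        effective = raw_doubling_years / (1.0 - share_to_slowing)
        print(f"share spent on slowing: {share_to_slowing:.0%} -> "
              f"effective doubling every {effective:.1f} years")
    # Spending half of each gain on slower, cooler gates doubles the
    # effective Moore's-law doubling time.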

My overall image of the future of computing is of continued steady gains at the lowest levels, but with slower rates of economic gains after each new computer hardware revolution. So the “effective Moore’s law” rate of computer capability gains will slow in discrete steps over the next century or so. We’ve already seen a slowdown from a need for parallelism, and within the next decade or so we’ll see more slowdown from a need to adapt to 3D chips. Then around 2035 or so we’ll see a big reversibility slowdown, due to a need to divide gate gains between doing more operations and using less energy per operation.

Overall though, I doubt the rate of effective gains will slow down by more than a factor of four over the next half century. So, whatever you might have thought could happen in 50 years if Moore’s law had continued steadily, is pretty likely to happen within 200 years. And since brain emulation is already nicely parallel, with matching memory usage, I doubt the relevant rate of gains there will slow by much more than a factor of two.
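
The 50-versus-200-year claim is just this arithmetic: slowing an exponential by a factor of four stretches the time to reach any fixed capability level by the same factor. A sketch, with an assumed doubling time:

    # Capability grows as 2**(t / T); a four-fold rate slowdown is a
    # four-fold stretch of the doubling time, so any fixed capability
    # target arrives four times later.
    T = 1.5                     # assumed doubling time, years
    years_at_full_rate = 50.0

    doublings = years_at_full_rate / T
    years_at_quarter_rate = doublings * (4 * T)
    print(f"{doublings:.1f} doublings: {years_at_full_rate:.0f} years "
          f"at the full rate, {years_at_quarter_rate:.0f} at a quarter rate")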

  • http://www.gwern.net/ gwern

    Landauer’s limit and reversible computing is indeed an interesting upcoming computing trend that is beaten out by quantum computing, but missing out on 3D computing isn’t much of a loss. You say that it’ll force increased memory parallelism and increase the importance of memory locality? Those are not news, because they are already true – programmers have been dealing with multicores since the mid-2000s or so when single-core performance hit the 4-5GHz wall, and memory locality has likewise been increasingly important (although it seems that in practice, any access of the hard drive has been a bigger performance hit than thrashing on-processor caches or arranging data inappropriately inside RAM).

    • VV

       

      Landauer’s limit and reversible computing is indeed an interesting upcoming computing trend that is beaten out by quantum computing

      No. Landauer’s limit applies to classical and quantum computers alike.

      • http://www.gwern.net/ gwern

         Not what I meant.

      • VV

        Then what did you mean?

      • http://www.gwern.net/ gwern

         I was replying to Hanson’s complaint that 3D computing was missing out on news coverage that it deserved.

  • Locaha

    What about a switch from silicon to another material (graphene, maybe) that would increase chip speeds again?

  • kurt9

    It’s actually worse than this. Scaling the device size will continue down to the molecular level. This will be reached around 2030. Further miniaturization beyond this point is unlikely.

    I agree with the blogger that quantum computing is over-hyped. Quantum computation is useful for certain calculations involving factorization. It is not a replacement for conventional digital computation for most algorithmic computation.

    I think Moore’s Law will continue relatively unabated to around 2030, then slow considerably. One must look to other technologies such as bioengineering for future advances.

    • Jess Riedel

      I agree that the promise of quantum computing is sometimes overblown and that it is not known to provide any sort of universal speed up for all types of problem.  But besides factorization, there are at least two other major uses: Grover’s search and the simulation of physical quantum systems.  It’s also a little premature to rule out future breakthroughs.  Once quantum computers are actually built, there will be much more attention applied to how they might be useful.

      • Douglas Knight

        In particular, quantum physics simulation may be relevant to brain emulation.

      • KrisKelvin

        Everything indicates that the brain doesn’t use any quantum mechanical effects (superposition or entanglement).

      • kurt9

        You might be right about your last point. However, I think “quantum” anything has been way over-hyped in the last couple of decades, kind of like sentient A.I.

      • Locaha

        Protein folding! Molecular simulation! REAL In-silico drug design.

      • VV

        Protein folding is NP-hard in the general case, and NP-hard problems are conjectured to be hard even for quantum computers (specifically, it is conjectured that P ⊊ BQP and NP ⊄ BQP, where BQP means Bounded-error Quantum Polynomial time).

        Specific instances of the protein folding problem (many of the biologically relevant ones, in particular) may be tractable even by classical computers. Quantum computers may or may not provide practical performance improvements in this area, depending on technical details currently impossible to predict.

  • VV

    Keep in mind that a physically reversible computer would effectively be a perpetual motion machine. It is not physically realizable, and getting closer to it becomes increasingly harder as the amount of irreversibility becomes smaller.

    Moreover, even if such computer existed, it would be highly sensitive to noise, generally exhibiting chaotic behavior, because its phase space wouldn’t have attractors (this is in sharp contrast to digital electronics, which is inherently robust against noise because, in the operating regime, the phase space consists of discrete attractors that correspond to the discrete logical states). In principle, reversible computers can be equipped with error correction mechanisms that dissipate energy, and their efficiency is also thermodynamically bounded.

    You can’t even make an irreversible computer that operates at Landauer’s bound, for the same reason you can’t make an engine that operates at Carnot’s efficiency: Such bounds are only attainable in quasi-static conditions that imply vanishing speed.

    Computer gates are usually designed today to change as rapidly as possible, and as a result in effect irreversibly erase many bits per gate operation.

    Actually, NAND and NOR gates lose at most one logical bit per operation.

    To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation (Younis 1994).

    The issue is not bit erasure rate, which is fixed depending on the type of logical operation. The issue is that in order to approach the thermodynamic bound for that logical operation you need to go slower and slower.

    Once you go down to the quantum level the bounds become not even asymptotically attainable (except in a stochastic sense) due to the quantization of energy levels.

    Overall though, I doubt the rate of effective gains will slow down by
    more than a factor of four over the next half century. So, whatever you
    might have thought could happen in 50 years if Moore’s law continued
    steadily, is pretty likely to happen within 200 years.

    I don’t see how this follows. Moore’s law is an exponential with a positive rate. If you slow it down by a factor of four, it will still remain an exponential with a positive rate. But it can’t stay like that near an asymptote; there, growth will look more like an exponential with a negative rate, yielding an overall sigmoidal shape.
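
    To illustrate the distinction, a toy comparison (arbitrary numbers; the cap is an assumed stand-in for a physical limit, not a measured one):

        # A slowed exponential still grows without bound; growth toward a
        # hard physical ceiling instead flattens into a sigmoid.
        T = 2.0        # assumed doubling time, years
        CAP = 1.0e9    # assumed hard ceiling on capability (arbitrary units)

        def exponential(t, slowdown=1.0):
            return 2 ** (t / (T * slowdown))

        def capped(t):
            x = exponential(t)
            return CAP * x / (CAP + x - 1)   # saturates at CAP for large t

        for t in (10, 50, 100, 200):
            print(t, f"{exponential(t):.1e}", f"{exponential(t, 4):.1e}",
                  f"{capped(t):.1e}")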

    Anyway, I think that before we start approaching the thermodynamic bounds, increasing difficulty of miniaturization will become the limiting factor.

    • Robin Hanson

      Thermodynamically all entropy increase is equivalent to bit erasure. That is the sense in which I say that entropy increase is “in effect” bit erasure.

      • VV

         Ok, but mixing logical bits with bits of uncertainty over the physical state of the world was a bit misleading.

      • AnthonyC

        Is it? Entropy is the logarithm of the number of possible arrangements of quanta of energy among the allowed degrees of freedom. That is, the number of bits required to specify the system’s exact state. It *is* a kind of information in a very real, physical sense.

      • VV

        Entropy is the logarithm of the number of possible arrangements of quanta of energy among the allowed degrees of freedom.

        I can’t really parse that. Classical statistical mechanical/informational entropy is defined in terms of Gibbs/Shannon entropy, which are both generalized to quantum statistical mechanics/quantum information theory in terms of Von Neumann entropy.

        Statistical/informational entropy and classical thermodynamical entropy are related but distinct concepts.

        But that’s beside the point. We were discussing computers. In any practical computer, you can’t use all the physical degrees of freedom to perform computation.

        You have a logical state space (a finite space in the classical case, or a finite-dimensional Hilbert space in the quantum case) which is implemented on a much bigger (infinite-dimensional for all practical purposes) physical state space. Each logical state corresponds to a region of the physical state space.

        Implementing reversible logical operations on a physically irreversible system is trivial, but doesn’t yield any energy efficiency gain: logical bits are preserved, but physical entropy is still being generated.

        In order to get arbitrarily small, ideally zero, energy consumption you need physical reversibility, or an arbitrarily close approximation of that. Such systems can’t implement irreversible logic operations, hence the interest in reversible logical operations and how to convert irreversible algorithms to reversible ones.

        But I wanted to point out that physically reversible systems don’t exist (with the possible exception of the universe as a whole) and approximating them is as hard as approximating perpetual motion machines for the very same thermodynamical reasons.

      • AnthonyC

        Statistical/informational entropy and classical thermodynamical entropy are related by Boltzmann’s constant. S = k log W

        And I think you mean any *currently* practical computer

      • VV

        Statistical/informational entropy and classical thermodynamical entropy are related by Boltzmann’s constant. S = k log W

        No. Boltzmann’s constant relates statistical mechanical (Gibbs) and informational (Shannon) entropy. Classical thermodynamical entropy is a related but different concept: http://en.wikipedia.org/wiki/Entropy_%28classical_thermodynamics%29

        And I think you mean any *currently* practical computer

        No.

  • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

    Nice analysis. I wish we could get more Moore’s Law extrapolation from people who take into account more than one trend, and who aren’t biased toward either insisting that it continues on apace or insisting that it has ground to a halt.

  • michael vassar

    This matches my analysis for the most part, with caveats already stated, such as 3D being effectively a continuation of parallelism, and others not yet stated, such as the possibility of continuing the effective economic trend by a) manufacturing computers more cheaply or using them for longer, and b) letting them spend more energy. Energy from b could come from any sort of wealth or energy advances, or from cloud-based computing and placing most computations in locations with inexpensive energy.

    All of the above buys, at a reasonable guess, a single extra order of magnitude more available computing power over the next 40 years than the above analysis suggests and no impact on growth rate beyond that, so on the one hand, it’s well within the error bar for the analysis already done. On the other hand, the fraction of the economy devoted to computational processing has been growing exponentially, and by 40 years from now, even if that trend slows significantly, it seems plausible that processing would constitute a majority of the economy. If so, an order of magnitude more computing power in 40 years constitutes a several-fold larger world economy, seemingly dwarfing most policy considerations when estimating likely economic growth from now until then.

  • arch1

    Thanks Robin for the helpful summary. I get that quantum computing is a ways off, has constraints and technical risks, etc. What I would really like to understand better (& will shamelessly query here) is:

    What’s the gap in principle between quantum computing (QC) and non-quantum parallel computing (NQPC), wrt the kinds of computations each can radically accelerate? (I infer that there’s *some* gap because typical lists of candidate QC applications seem quite limited relative to the vast array of applications which have *already* proven amenable to acceleration by various flavors of NQPC.)

    • Jess Riedel

      I wouldn’t think factoring-type problems or general quantum simulation are amenable to NQPC. Whether a database search can be done faster with QC (Grover’s algorithm) vs NQPC is going to depend on something like the marginal technical cost of additional qubits vs. additional processors. I have terrible intuition here but my guess is NQPC would win.

      • KrisKelvin

        Grover’s algorithm and similar ones “only” provide a quadratic speedup on several problems, which can be nice but is not a really big thing. In contrast, Shor’s algorithm provides an exponential speedup for factorization. (Which IS a really big thing.)

      • VV

        Theoretically, yes.

        Practically, it depends on how much of a big thing factorization is. Currently, the only application seems to be breaking certain cryptographic primitives like the RSA algorithm, and replacements for these primitives are already available.

      • KrisKelvin

        Yes, you are right. As far as I know, the only practical use for quantum computers would be the simulation of quantum systems, although some of the necessary quantum algorithms are still missing. The last notable progress in the area I know of is already two years old:
        http://vcq.quantum.at/press-release-nature-publication-quantum-metropolis-sampling-.4919.html

    • KrisKelvin

      “NQPC” can’t “radically” accelerate any computation since it is still a classical computer. E.g., if you double the cores of a supercomputer it will double its speed at best. Or if you, for example, search an unsorted database, the time complexity is always O(N), regardless of whether you use a classical computer with hardware parallelization or not.
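
      To put rough numbers on that gap (expected query counts only, ignoring constants and hardware; a sketch, not a benchmark):

          import math

          # Finding one marked item among N: a classical search needs about
          # N/2 lookups on average, while Grover's algorithm needs roughly
          # (pi/4) * sqrt(N) quantum oracle queries.
          for N in (10**6, 10**9, 10**12):
              classical = N / 2
              grover = (math.pi / 4) * math.sqrt(N)
              print(f"N={N:.0e}: classical ~{classical:.1e}, Grover ~{grover:.1e}")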

  • IMASBA

    Well, we know for sure brain emulations can run on small computers: the human brain is the proof, and without a body with muscles and nerves the brain could be even smaller. I don’t think we need quantum computers to build an artificial brain. If we can figure out the algorithms the brain uses, that’ll be a big step already.

  • Tim Tyler

    I don’t think digitizing heat (a.k.a. reversible computing) will be much of a thing. You do gain the ability to pipe heat around better – but that’s about it. Nobody is going to bother actually running their machines backwards to recover a bit of wasted energy – and without that, reversible machines will just get clogged up with digital heat, and you gain little. I have a page giving more details on this topic: “Cellular Automata, Reversible Logic”.

  • anon

    Another consideration must be factored in: waste heat. The more compact a circuit becomes, especially a 3D one, the harder it becomes to dump heat. I’ve seen a few journal articles suggesting this might be as hard a limit for circuit density as the speed-of-light limit is for the practical size of computers.

  • dmytryl

    Die shrinkage decreased cost per transistor – the same process steps produced more of the more energy-efficient transistors. 3D layering does none of that, and as such is nearly irrelevant (absent dramatic improvements in the cost of the process).

  • Alexander Gabriel

    If anybody wants to point me to a good source claiming that parallel computing will slow economic impacts, I’d say thanks. I’ve found a few things like this:

    http://www.cs.berkeley.edu/~agearh/cs267.sp10/files/CS267_Term_Paper_Thompson.pdf
    http://www.economist.com/node/18750706

    But it seems pretty ambiguous. This early paper seems to suggest many people thought parallel computing would be good. Could it be that expectations are just diminishing to moderate levels from too-high ones?

    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.917
