A Future Of Pipes

Back in March I wrote:

Somewhere around 2035 or so … the (free) energy used per [computer] gate operation will fall to the level thermodynamics says is required to [logically] erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs, that only rarely [logically] erase bits. … Computer gates … today … in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slow enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slow, and it erases only half as many bits per operation. Once reversible computing is the norm, gains in making more smaller faster gates will have to be split, some going to let gates run more slowly, and the rest going to more operations. (more)

The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and the physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from them. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.

We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.

Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.

Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?

Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.

For example, there is a fractal design for piping both smoothly flowing and turbulent cooling fluids where, holding constant the fluid temperature and pressure as well as the cooling required per unit volume, the fraction of city volume devoted to cooling pipes goes as the logarithm of the city’s volume. That is, every time the total city volume doubles, the same additional fraction of that volume must be devoted to a new kind of pipe to handle the larger scale. The pressure drop across such pipes also goes as the logarithm of city volume.
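A toy calculation can make the logarithmic overhead concrete. The 2% of volume per pipe level is an illustrative assumption, not a figure from the post; only the logarithmic shape matters:

```python
import math

def pipe_volume_fraction(city_volume, base_volume=1.0, fraction_per_level=0.02):
    """Fraction of city volume devoted to cooling pipes, assuming each
    doubling of city volume adds one new, larger pipe level that consumes
    a fixed extra fraction of total volume (2% here is illustrative)."""
    levels = math.log2(city_volume / base_volume)
    return fraction_per_level * levels

# Each doubling adds the same constant increment to the pipe fraction.
for doublings in (10, 20, 30):
    v = 2.0 ** doublings
    print(doublings, round(pipe_volume_fraction(v), 3))
```

Since each doubling adds only a fixed increment, any power-law growth in city value eventually outruns this cooling overhead, which is the point made below.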

The economic value produced in a city is often modeled as a low power (greater than one) of the economic activity enclosed in that city. Since mathematically, for a large enough volume a power of volume will grow faster than the logarithm of volume, the greater value produced in larger cities can easily pay for their larger costs of cooling. Cooling does not seem to limit feasible city size. At least when there are big reservoirs of cool fluids like air or water around.

I don’t know if the future is still plastics. But I do know that a big chunk of it will be pipes.

Added 10Nov 4p:  Proof of “When such hardware runs …” : V = value, C = cost, N = # processors, s = speed to run them at, p,q = prices. V = N*s, C = p*N + q*N*s^2.  So C/V = p/s + q*s. Picking s to minimize C/V gives p = q*s^2, so the two parts of cost C are equal. Also, at that optimum, C/V = 2*sqrt(p*q).
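A quick numeric check of this proof, with arbitrary illustrative prices p and q:

```python
import math

def cost_per_value(s, p, q):
    # C/V = p/s + q*s, from C = p*N + q*N*s^2 and V = N*s
    return p / s + q * s

p, q = 4.0, 1.0           # arbitrary illustrative prices
s_opt = math.sqrt(p / q)  # from d(C/V)/ds = -p/s^2 + q = 0, i.e. p = q*s^2

hardware = p / s_opt      # hardware cost per unit value
energy = q * s_opt        # energy/cooling cost per unit value
print(hardware, energy, cost_per_value(s_opt, p, q), 2 * math.sqrt(p * q))
```

At the optimum speed the two cost terms come out equal, and the total matches 2*sqrt(p*q), as claimed.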

Added 6Nov2014: According to 2012 data, pipes turn out to be the most “complex” product, i.e. the product that most indicates that a nation is able to produce many difficult things.

  • Doug

    Very interesting conjecture.

    I’d add another possible application that may influence the design of the cooling systems. By 2035 I’d assume that crypto-security would be highly integrated into virtually all aspects of the software stack. It’s likely that true random bits will become a computationally scarce resource much as processor time, memory and network bandwidth are today.

    Pseudo-random number generators are inferior, as they are potentially insecure and compromisable. The cooling pipe systems could be used to deliver true random bits through their thermal noise. In that case it would bias the piping designers toward more turbulent designs that contain more noise.

    • Flow would probably be turbulent anyway, just to enable more throughput. Your home AC ducts are usually turbulent today, for example.

    • Hardware RNGs are apparently up to gigabits per second. It’s funny to think about pipes doubling as RNGs, but is it really plausible that current approaches to RNGs will be outperformed by cooling systems?

      • Doug

        I’d imagine that you’d use hardware RNGs but optimally place them next to the most turbulent junctures of cooling flows. An RNG next to noisy thermal flows will have a higher (secure) random bit generation rate.
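For what it’s worth, the standard trick for turning biased but independent physical noise samples (say, from a sensor at a turbulent juncture) into unbiased bits is von Neumann debiasing; a minimal sketch:

```python
def von_neumann_extract(raw_bits):
    """Von Neumann debiasing: read raw bits in pairs; emit 0 for (0,1),
    1 for (1,0), and discard (0,0)/(1,1). The output is unbiased as long
    as the raw bits are independent, even if each is heavily biased."""
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 0, 0, 0, 1, 1, 1, 0]))  # -> [0, 1, 1]
```

The cost is throughput: a heavily biased source discards most pairs, which is one reason noisier (more turbulent) flows would yield a higher secure bit rate.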

      • Jess Riedel

        In principle, the thermal noise you’d pick up from the pipes would be correlated with the results of the gates being cooled. You really want quantum-generated noise. Of course it’s possible that all the correlated info in the pipes would be destroyed by being XOR’ed with quantum fluctuations in the fluid, but in that case it’s unlikely that turbulent fluid is a denser source of quantum fluctuations than some other quantum-based dedicated hardware RNG. (I’d expect different sources to vary by many orders of magnitude.)

  • Tim Tyler

    Heat can be digitized. Futuristic ways of piping digital heat around are pretty easy to imagine – just use a signal channel. Using digital heat will have some advantages in terms of the ability to pipe it around easily.

    • Doug

      The idea of using data furnaces has already been floating around for a few years. It’s somewhat wasteful that most residential homes and offices spend so much fuel on heating, when massive datacenters spend so much on cooling.

      Stick some cloud server stacks in the basement of people’s homes. When people want to turn up the temperature, route more jobs to the servers to increase their CPU utilization.

      This becomes more cost-beneficial as the cost of cooling rises relative to the cost of hardware. The benefit of free cooling for the server operators outweighs the cost of idle hardware when people want to turn the heat down.

      • Tim Tyler

        Note that that is ordinary heat from digital processes. Digital heat is entropy in a digital form. Rather than using an irreversible computer to generate real heat, you can use reversible computation to generate digital heat (i.e. a disordered pattern of 1s and 0s). It’s like ordinary heat but, because it is digital, it is easier to confine to a channel and transport around. I have an essay titled “Reversible Logic” that goes into this in more detail. Robin’s essay seems to be lacking this concept.

      • Doug

        Ah. Thanks for the clarification. Essay sounds interesting, link?

      • Jess Riedel

        Currently, bajillions of particles are used to stabilize each individual digital bit in a computer. The physical heat latent in those particles dwarfs the heat encoded in the digital bit. Why would that digital heat be more efficient to transport than simply dumping into a single particle in a fluid as normal, physical heat? Even if you imagine a future where many fewer particles are needed to encode a bit, the robustness necessary for a digital bit will still always necessitate a significant overhead. What is the point of keeping it digital (i.e. robust)?

      • Tim Tyler

        Digitization is typically wasteful. Its virtues are things like that it provides more control, and is more deterministic. The digital revolution has already overtaken most information-theoretic tasks – except for advanced intelligence, power supply and cooling. Those application domains could well follow along – though they might take a while.

      • VV

        A reversible computer is a perpetual motion machine. Literally.

        It can be a useful model for some aspects of information theory, but it’s not something we are going to build, as long as the known laws of thermodynamics hold true.

      • Tim Tyler

        I think you are taking the term “reversible computation” too literally. Real implementations just apply the theory of reversible computation. This doesn’t mean that they don’t have power supplies or don’t generate heat. Perhaps look into “adiabatic circuits”.

      • VV

        If they are not physically reversible then they can’t process or transfer “digitalized heat” without producing more physical heat.

      • Tim Tyler

        So what? The *usual* point of the exercise is to reduce heat production locally – not to make it equal to zero globally. Reversible components reduce heat production locally, and provide the option of dumping the heat at the location of your choice.

      • VV

        If they are not physically reversible, they will also produce heat locally.

        It is trivial to create a logically reversible gate (e.g. Toffoli or Fredkin) from common logically irreversible gates, but it would serve no purpose.

      • Philip Goetz

        That isn’t converting physical heat into digital heat. How can I cool my refrigerator and pipe the heat out a fiber optics cable?

      • Tim Tyler

        Only you are talking about converting physical heat into digital heat. Everyone else is discussing going in the opposite direction – converting digital heat into physical heat.

        Of course, in principle, you can do the conversion you specify. Generating disordered data from hot objects can cool them down a little. Heat is just disordered motion – which is a substrate-neutral concept. Heat is portable.

      • Philip Goetz

        No; you brought it up by saying you could digitize heat. Talking about reversible computation at all implies knowing digital entropy must produce physical heat. I thought you were saying we could transport heat by digitizing it.

      • Tim Tyler

        ISWYM. I intended to refer to reducing local heat generation by keeping the entropy generated by computations in the digital domain – in order to pipe it around more effectively. Converting conventional heat into digital heat doesn’t seem likely to be very useful.

    • A typical home air conditioner cools 60,000 BTU per hour, which at the Landauer limit is about 6 x 10^24 bits per second. The fastest known fiber optic cable does 10^14 bits per second. This isn’t remotely a close contest.
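A back-of-envelope check of this comparison, assuming the heat flow is converted to erased bits at the Landauer limit at room temperature (300 K):

```python
import math

K_B = 1.380649e-23                           # Boltzmann constant, J/K
T = 300.0                                    # room temperature, K
landauer = K_B * T * math.log(2)             # ~2.87e-21 J per erased bit

ac_watts = 60_000 * 1055.06 / 3600           # 60,000 BTU/hr in watts (~17.6 kW)
bits_per_sec = ac_watts / landauer           # heat flow as erased bits/second

fiber_bits_per_sec = 1e14
print(f"{bits_per_sec:.1e} vs {fiber_bits_per_sec:.1e}")  # -> 6.1e+24 vs 1.0e+14
```

However the exact figure shakes out, the gap is around ten orders of magnitude, so the conclusion that it isn’t a close contest stands.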

      • Tim Tyler

        Robin, your essay is discussing what the situation will be in 2035. In that time, conventional heat processing will stay roughly the same, while digital heat processing capacities can be reasonably expected to double every 18 months. Also, if conventional heat processing is allowed to use parallel pipes, digital heat processing should similarly be allowed to use parallel signal channels. It is true that there will be an overhead associated with going digital, but there will also be benefits – in that you can make the heat go where you want to – and keep it out of your important high-performance components. If you want to argue that the costs will remain much bigger than the benefits, go ahead, but I think that your “home air conditioner” example fails to make that case.

      • Silent Cal

        2 ^ ((2035 – 2013) * 12 / 18) = 26008 << (6 * 10^24 / 10^14).

      • Tim Tyler

        That calculation represents an unfair comparison. Apart from totally ignoring parallelism, it uses Landauer’s limit on one side – and not on the other. Doing that would be reasonable when the tech for erasing a bit really is that efficient – but that is a long way away.

    • Philip Goetz

      Can you elaborate on what it means to digitize heat?

      • Tim Tyler

        Heat is high levels of entropy. Digital heat is high levels of digital entropy – i.e. a disordered / incompressible string of bits. If you erased those bits, you would convert digital heat into conventional heat – along the lines suggested by Landauer’s principle.

      • Philip Goetz

        That’s going in the opposite direction. But how do you digitize physical heat (end up with lower temperature and random bits)?

      • Tim Tyler

        For example, by using the piezoelectric effect to translate between random molecular collisions and digital radiation bursts. Any radiation of signals will reduce the temperature of the radiating body. Hot bodies spit out digital signals – in the form of photons and other particles – all the time, and they are cooler as a result of their emissions. It is no big deal.

  • Robert Koslover

    I knew way back in the 1970s that there would always be “a big future in computer maintenance.” See, for example, http://www.youtube.com/watch?v=gFLvhKv-Lbo


    It won’t be pipes: http://www.bbc.co.uk/news/science-environment-24571219 But the limit does remain: the cooling liquid gives off heat along a surface, while the computer hardware generates heat throughout its volume.

  • adrianratnapala

    “Thus in future cities crammed with computer hardware, …”

    Put the data-centres at the bottom of the ocean, or in Antarctica or something. After fixing the CO2 thing, this would create a direct-heating version of global warming.

  • Sean

    Why won’t we just move much/most of the computing to data centres in the arctic? Like facebook’s data centre in Sweden: http://www.bbc.co.uk/news/business-22879160

    Is it that the communication lags would be too long?

    • IMASBA

      Why not in space?

      • VV

        In space you can only get rid of heat by radiating it away, which isn’t very efficient.

      • IMASBA

        You’re right, that would be inefficient, what about floating in the liquid hydrogen atmosphere of a gas giant?

      • PhasmaFelis

        Gaseous hydrogen, not liquid. If it’s liquid it’s not an “atmosphere”.

        And if you’re going to have a data center suspended from balloons, you might as well just put it in the chill air of Earth’s own upper atmosphere. I don’t see much advantage to spending loads of extra time and money just to stick it somewhere with a ping time of up to two hours (lightspeed delay from Earth to Jupiter).

      • IMASBA

        Jupiter (the liquid part, which may not be called an atmosphere, but that’s a technicality since there is no real surface) is colder than the coldest place in Earth’s atmosphere, and communication with Earth is unnecessary because Hanson intends for civilization to exist in digital form inside the data cities. The only other purpose I can think of also doesn’t require much communication: extensive computer simulations.

      • IMASBA

        Also, putting these data cities on Jupiter prevents global warming on Earth.

      • Doug

        Unless you use laser cooling, in which case radiation is the natural method of dissipating heat.

    • It might be nice to have cooler air and water to pull in, but that doesn’t stop the need to have lots of pipes to move fluids around.

  • VV

    Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers.

    Where did you get your numbers from? According to Wikipedia: http://en.wikipedia.org/wiki/Data_center#Energy_use

    “By 2012 the cost of power for the data center is expected to exceed the cost of the original capital investment.”

    • It might happen to be true now in some places, but there are stronger theoretical reasons for it to be true after 2035.

      • VV

        Can you elaborate?

      • See my added to the post.

      • IMASBA

        Why does the cooling cost go up as s^2 but also as N^1 ? Why does it cost more power to increase the clockspeed than to increase the number of processors (is it because you assume a fixed volume when increasing clockspeed while the volume would have to increase if the number of processors was increased, meaning the cooling surface would automatically increase as well)?

      • The slower you go, the closer you get to perfect reversibility.

      • IMASBA

        Ah, so q only goes up as s^2 when you already have very efficient processors (which don’t exist yet but may in the future)?

      • Robin Hanson

        q is the price of energy/cooling, which is assumed to not vary as N or s vary. Some sorts of adiabatic cpus are feasible now: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=4428860

      • VV

        You are assuming perfect parallelizability. What about Amdahl’s law? http://en.wikipedia.org/wiki/Parallel_computing#Amdahl.27s_law_and_Gustafson.27s_law

    • Mark Hahn

      This is a stupid little document compiled by someone who doesn’t understand tech. Yes, you can build a DC that has a PUE>3, but it’s just a sign of stupidity; there’s no excuse for anything over 1.5. Further, the flyer was written at a time when the media was abuzz about falling off the Dennard scaling curve. But look at what’s happened since then: dramatic improvements in power efficiency.

      In short, the claim simply isn’t true.

      • Philip Goetz

        What is a PUE?

  • Handle

    This conversation sounds a lot like an Intelligent Designer trying to create more complex biological life from unicellular organisms.

    “Well, the thing is, as we scale, we’re going to start having problems with surface-area-to-volume ratios. Heat will build up on the inside, and so will waste products. But the nutrients and oxygen will be on the outside. There’s also a problem with communication signals of information from one side to the other in time for a coordinated and synchronized reaction of the whole.”

    “So, you’re going to have to devote a lot of attention to transport of heat, air, nutrients, water, waste, energy, information, etc. You know, general traffic logistics. With a ‘fractal’ like pattern of large primary arteries which branch further and further down into little capillaries. Large ganglions and central computation centers, then big bandwidth fibers, then little individual nerve cells at the extremities.”

    “It would be particularly convenient if we could kill two birds with one stone. Say, one fluid could transport oxygen from central supply, and nutrients in, and waste out, to central sewage. It could also manage temperature and general hydraulic pressure. Finally it would contain a lot of little nanobots which would link together and form a giant band-aid in there’s a link”

    So, in the future, the problems of making a city-sized collection of computers are like making a body-sized collection of cells.

    • Handle

      ‘if there’s a leak’, not ‘in there’s a link’.

    • Philip Goetz

      But it’s also how we already do things. Look at a map of the streets of a city, or its electric or water system.


    What about this? http://arxiv.org/abs/1211.3633 Cooling costs would still increase with increasing power consumption, but the volume restriction on a dense data city would go away: the data city can be as big as you want, as long as you can connect it to a large enough surface somewhere else.

    • That does indeed look like a game changer. But it also seems a very long way off.

    • Tim Tyler

      This idea works with digital heat too – and digital heat is commonplace.

    • Philip Goetz

      Interesting. Relies on the existence of “time crystals”, on which see http://phys.org/news/2013-08-physicist-impossibility-quantum-crystals.html . Also, Google shows zero references in the world to “heat superconductivity” other than this paper.

      Not a permanent game-changer, as continued operation of economic forces should rapidly result in converting the Earth to computronium under the tech level we’re talking about. So what really matters is how to radiate heat away from the Earth.

  • smaudet

    Assuming economic theories hold any merit.

    It is bogus to assume that these theories are correct. I suspect that economic productivity reaches a tipping point after which there is little to no return from a larger city size.

    As evidence, current events seem to be a good indication that as the virtual city size grows, its overall economic productivity actually shrinks; any growth is produced by confounding factors such as expansion into limited unmeasured resources.

  • Kirk Holden

    Computronium is a series of tubes.

  • Philip Goetz

    Ah. I see how you arrived at the misunderstanding of reversible computing in your em draft. “Physically-reversible” computing would be like a Babbage engine that you could turn backwards to reproduce the inputs. But this also requires logical reversibility. Logical reversibility requires not erasing bits, /including the “random” bits stored in memory before beginning the computation/.

    This means that a reversible-computing em can’t store any memories, both because memory storage necessarily erases the bits that were there before, and because the next computation will need to re-use the same memory storage area.

    Reversible computing works by doing the computation, copying the few bits you need to store, and then undoing the computation. If the em’s brain itself is computing reversibly, it cannot save any memories of what it has done without paying the entropy cost.
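Bennett’s compute/copy/uncompute trick can be sketched in a few lines; the doubling step here is an arbitrary stand-in for one reversible step of a real computation:

```python
def bennett_compute(x, step, unstep, n_steps):
    """Bennett-style compute/copy/uncompute (an illustrative sketch).
    Run forward while keeping the full history (no bits erased), copy
    out only the final answer, then uncompute the history step by step,
    leaving everything except the answer back in its initial state."""
    trace = [x]
    for _ in range(n_steps):
        trace.append(step(trace[-1]))    # forward pass, history retained
    answer = trace[-1]                   # the "few bits you need to store"
    while len(trace) > 1:
        top = trace.pop()
        assert unstep(top) == trace[-1]  # each step undone exactly
    return answer

# Toy reversible step: doubling, which halving undoes exactly.
print(bennett_compute(3, lambda v: v * 2, lambda v: v // 2, 4))  # -> 48
```

Anything you want to remember must be copied out before the uncomputation, and that copy is the part that eventually pays the erasure cost.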

    • The “few bits you need to store” can be the entire state of the brain. See the paper I cite about the fractal approach in the previous post I link to here.

      • Philip Goetz

        I suppose it’s possible that the final state of the brain would take fewer bits than some computations, but that would imply either that our memories of what we’ve done are only an infinitesimally small fraction of what we experienced while doing it, or else that most of the brain’s operations are unconscious.

      • IMASBA

        Most of the brain’s operations ARE unconscious (depending on which metaphysical theory you go with most of these operations are either really unconscious or consciously experienced by subdivisions of the brain, not by “you”).

        Btw, why does this even matter? Writing and accessing memories isn’t that computationally expensive (and will thus produce relatively small amounts of heat), so why can’t memory be handled by a non-reversible computer, with all the complicated calculations of intuition, emotion and empathy being done by a reversible computer?

      • VV

        A Blu-ray Disc video uses data rates up to 54 Mbit per second. Since our memories aren’t high-definition movies, that’s probably an overestimation of the data rate that our brain uses to store its memories.

        Storing at that data rate on hardware operating at Landauer’s limit at room temperature requires dissipating about 0.15 picowatt of heat.

        A human brain (the most energy-efficient type of computing hardware currently known) has an energy consumption of 20-40 watts. That’s nowhere near thermodynamic limits.
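The arithmetic behind that estimate, using Landauer’s bound kT·ln 2 at 300 K:

```python
import math

K_B = 1.380649e-23                # Boltzmann constant, J/K
T = 300.0                         # room temperature, K
landauer = K_B * T * math.log(2)  # ~2.87e-21 J per erased bit

rate_bits = 54e6                  # 54 Mbit/s, the Blu-ray figure above
power_w = rate_bits * landauer    # minimum heat dissipated to store at that rate
print(f"{power_w * 1e12:.2f} pW") # -> 0.15 pW
```

Against the brain’s 20-40 watts, storing memories at Blu-ray rates costs a vanishingly small fraction of the thermodynamic budget.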

      • IMASBA

        Memories appear at most as crisp as HD video (of which Blu-ray is already a very inefficient form: an mp4 file can deliver the same perceived quality in a much smaller file), but memories do contain elements not found in video, such as smell, touch and emotions. However, there is no reason to assume those take up a lot of space (it’s not hard to imagine memories merely containing index references to tables of smells, touches and emotions). HD resolution at, say, 48 fps, with two video and audio streams (for both eyes and ears), is enough to make something look real to us. Then it’s a matter of what form of compression the human mind uses; all in all, memories indeed don’t have to be computationally expensive.

        The “kernel” of core memories that includes skills, personality traits and possibly tables of emotions, smell and touch cannot be separated from the computations of the mind so easily, but then again this form of memory doesn’t have to be large at all: maybe only a day’s worth of conscious memories, maybe much less (as in a video game or operating system where the engine/kernel is dwarfed in size by the textures/apps).

  • tony

    So no matter if we are talking networks or cooling, the future is a series of tubes?

    • stevesailer

      According to Terry Gilliam’s movie “Brazil,” the future is ducts, lots and lots of ducts.
