Tag Archives: Physics

Slowing Computer Gains

Whenever I see an article in the popular sci/tech press on the long term future of computing hardware, it is almost always on quantum computing. I’m not talking about articles on smarter software, more robots, or putting chips on most objects around us; those are about new ways to use the same sort of hardware. I’m talking about articles on how the computer chips themselves will change.

This quantum focus probably isn’t because quantum computing is that important to the future of computing, nor because readers are especially interested in distant futures. No, it is probably because quantum computing is sexy in academia, appearing often in top academic journals and university press releases. After all, sci/tech readers mainly want to affiliate with impressive people, or show they are up on the latest, not actually learn about the universe or the future.

If you search for “future of computing hardware”, you will mostly find articles on 3D hardware, where chips are in effect layered directly on top of one another, because chip makers are running into limits on making chip features smaller. This makes sense, as that seems the next big challenge for hardware firms.

But in fact the rest of the computer world is still early in the process of adjusting to the last big hardware revolution: parallel computing. Because chip speed gains have slowed dramatically over the last decade, the computing world must get used to writing a lot more parallel software. Since that is just harder, there’s a real economic sense in which computer hardware gains have slowed down lately.

The computer world may need to make additional adaptations to accommodate 3D chips, as just breaking a program into parallel processes may not be enough; one may also have to keep relevant memory closer to each processor to achieve the full potential of 3D chips. The extra effort to go 3D and make these adaptations suggests that the rate of real economic gains from computer hardware will slow down yet again with 3D.

Somewhere around 2035 or so, an even bigger revolution will be required. That is about when the (free) energy used per gate operation will fall to the level that thermodynamics says is required to erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs, which only rarely erase bits. See (source):

[Figure: trend in the energy used per gate operation over time]

Computer operations are irreversible, and use (free) energy to in effect erase bits, when they lack a one-to-one mapping between input and output states. But any irreversible mapping can be converted to a reversible one-to-one mapping by saving its input state along with its output state. Furthermore, a clever fractal trick allows one to create a reversible version of any irreversible computation that takes exactly the same time, costing only a logarithmic-in-time overhead of extra parallel processors and memory to reversibly erase intermediate computing steps in the background (Bennett 1989).
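
As a minimal illustration of that first step (a sketch of the save-the-input idea only, not Bennett’s full fractal construction; the function names are mine):

```python
def irreversible_and(a: int, b: int) -> int:
    # Four input states map to two output states, so running this
    # "forgets" (erases) information about the inputs.
    return a & b

def reversible_and(a: int, b: int) -> tuple:
    # Save the input state along with the output state; the map from
    # (a, b) to (a, b, a & b) is one-to-one, so no bits are erased.
    return (a, b, a & b)

def undo_reversible_and(a: int, b: int, out: int) -> tuple:
    # Because nothing was erased, the step can be run backward exactly.
    assert out == a & b
    return (a, b)

for a in (0, 1):
    for b in (0, 1):
        print((a, b), "->", reversible_and(a, b))
```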

Computer gates are usually designed today to change as rapidly as possible, and as a result in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation (Younis 1994).
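
Here is a rough sketch of the numbers involved, assuming the Landauer bound of k·T·ln 2 of free energy per erased bit and the linear speed-vs-erasure tradeoff described above; the one-bit-per-operation figure is just a placeholder:

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_kelvin: float = 300.0) -> float:
    # Minimum free energy dissipated to erase one bit at temperature T.
    return K_BOLTZMANN * temperature_kelvin * math.log(2)

def erasure_cost_per_op(bits_erased_at_full_speed: float,
                        relative_speed: float,
                        temperature_kelvin: float = 300.0) -> float:
    # Adiabatic model from the text: bits erased per operation scale
    # linearly with speed, so running at half speed halves the cost.
    bits = bits_erased_at_full_speed * relative_speed
    return bits * landauer_limit_joules(temperature_kelvin)

print(landauer_limit_joules())         # ~2.9e-21 J per bit at room temperature
print(erasure_cost_per_op(1.0, 1.0))   # full speed
print(erasure_cost_per_op(1.0, 0.5))   # half speed -> half the erasure cost
```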

Once reversible computing is the norm, gains in making more, smaller, faster gates will have to be split, with some going to letting gates run more slowly, and the rest going to doing more operations. This will further slow the rate at which the world gains more economic value from computers. Sometime much further in the future, quantum computing may become feasible enough that it is sometimes worth using special quantum processors inside larger ordinary computing systems. Fully quantum computing is even further off.

My overall image of the future of computing is of continued steady gains at the lowest levels, but with slower rates of economic gains after each new computer hardware revolution. So the “effective Moore’s law” rate of computer capability gains will slow in discrete steps over the next century or so. We’ve already seen a slowdown from a need for parallelism, and within the next decade or so we’ll see more slowdown from a need to adapt to 3D chips. Then around 2035 or so we’ll see a big reversibility slowdown, due to a need to divide gate gains between doing more operations and using less energy per operation.

Overall though, I doubt the rate of effective gains will slow down by more than a factor of four over the next half century. So whatever you might have thought could happen in 50 years, had Moore’s law continued steadily, is pretty likely to happen within 200 years. And since brain emulation is already nicely parallel, including with matching memory usage, I doubt the relevant rate of gains there will slow by much more than a factor of two.

Functions /= Tautologies

Bryan:

Calling the mind a computer is just a metaphor – and using metaphors to infer literal truths about the world is a fallacy.

Me:

I’m saying that your mind is literally a signal processing system. … While minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function.

Bryan:

The “standard theories of minds as signal processors” that Robin refers to aren’t theories at all. They’re just eccentric tautologies. As Robin has frankly admitted to me several times, he uses the term “signal processors” so broadly that everything whatsoever is a signal processor. On Robin’s terms, a rock is a signal processor. What “signals” do rocks “process”? By moving or not moving, rocks process signals about the mass and distance of other objects in the universe.

Consider an analogy. Our theory of table legs is that they function mainly for structural support; table legs hold up tables. Yes, anything can be analyzed for the structural support it provides, and most objects can be arranged so as to provide some degree of structural support to something else. But that doesn’t make our theories of structural support tautologies. Our theories can tell us how efficient and effective any given arrangement of objects is at achieving this function. If we believe that something was designed to be a table leg, our theories of structural support make predictions about what sort of object arrangement it will be. And if our table is missing a leg, such theories recommend object arrangements to use as a substitute table leg.

Similarly, while any object arrangement can be analyzed in terms of the signals it sends out and the ways that it transforms incoming signals into outgoing signals, all of these do not function equally well as signal processors. If we know that something was designed as a signal processor, and know something about the kinds of signals it was designed to process for what purposes, then our theories of signal processing make predictions about how this thing will be designed. And if we find ourselves missing a part of a signal processor, such theories tell us what sort of replacement part(s) can efficiently restore the signaling function.

Animal brains evolved to direct animal actions. Fish, for example, swim toward prey and away from predators. So fish brains need to take in external signals about the locations of other fish, and process those signals into useful directions to give muscles about how to change the direction and intensity of swimming. This makes all sorts of predictions about how fish brains will be designed by evolution.

Human brains evolved to achieve many more functions than merely to direct our speed and direction of motion. But we understand many of those functions in quite some detail, and that understanding implies many predictions about how human brains are efficiently designed to simultaneously achieve these functions.

This same combination of general signal processing theory and specific understandings about the functions evolution designed human brains to perform also implies predictions on how to substitute wholesale for human brain functions. For example, knowing that brain cells function mainly to take signals coming from other cells, transform them, and pass them on to other cells, implies predictions on what cell details one needs to emulate to replicate the signaling function of a human brain cell. It also makes predictions like:

In order to manage its intended input-output relation, a single processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. (more)

All of which goes to show that signal processing theory is far from a tautology, even if every object can be seen as in some way processing signals.

Theories vs. Metaphors

I have said things like:

We should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor.

Bryan Caplan says I make:

the Metaphorical Fallacy. Its general form:

1. X is metaphorically Y.

2. Y is literally Z.

3. Therefore, X is literally Z.

…. To take a not-so-random example, … Robin says many crazy things … like:

1. The human mind is a computer.

2. Computers’ data can be uploaded to another computer.

3. Therefore, the human mind can be uploaded to a computer.

No, I’m pretty sure that I’m saying that your mind is literally a signal processing system. Not just metaphorically; literally. That is, while minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function. And our standard theories of how physical devices achieve signal processing functions predict that we can replicate, or “emulate”, the same signal processing functions in quite different physical devices. In fact, such theories tell us how to replicate such functions in other devices.

Of course you can, like Bryan, disagree with our standard theory that the main function of minds is to process signals. Or you could disagree with our standard theories of how that function is achieved by physical devices. Or you could note that since the brain is a signal processor of unparalleled complexity, we are a long way away from knowing how to replicate it in other physical hardware.

But given how rich and well developed are our standard theories of minds as signal processors, signal processors in general, and the implementation of signal processors in physical hardware, it hardly seems fair to reject my conclusion based on a mere “metaphor.”

Murphy on Growth Limits

Physicist Tom Murphy says he argued with “an established economics professor from a prestigious institution,” on whether economic growth can continue forever. They both agreed to assume Earth-bound economies, and quickly also agreed that total energy usage must reach an upper bound within centuries, because of Earth’s limited ability to discard waste heat via radiation.

Murphy then argued that the economy cannot keep growing exponentially if any small but necessary part of it fails to grow, or if any small fraction of people fail to give value to the things that do grow:

Not everyone will want to live this virtual existence. … Many would prefer the smell of real flowers. … You might be able to simulate all these things, but not everyone will want to live an artificial life. And as long as there are any holdouts, the plan of squeezing energy requirements to some arbitrarily low level fails. …

Energy today is roughly 10% of GDP. Let’s say we cap the physical amount available each year at some level, but allow GDP to keep growing. … Then in order to have real GDP growth on top of flat energy, the fractional cost of energy goes down relative to the GDP as a whole. … But if energy became arbitrarily cheap, someone could buy all of it. … There will be a floor to how low energy prices can go as a fraction of GDP. … So once our fixed annual energy costs 1% of GDP, the 99% remaining will find itself stuck. If it tries to grow, energy prices must grow in proportion and we have monetary inflation, but no real growth. …

Chefs will continue to innovate. Imagine a preparation/presentation 400 years from now that would blow your mind. … No more energy, no more ingredients, yet of increased value to society. … [But] Keith plopped the tuna onto the bread in an inverted container-shaped lump, then put the other piece of bread on top without first spreading the tuna. … I asked if he intended to spread the tuna before eating it. He looked at me quizzically, and said—memorably, “It all goes in the same place.” My point is that the stunning presentation of desserts will not have universal value to society. It all goes in the same place, after all. (more; HT Amara Graps)

While I agree with Murphy’s conclusion that the utility an average human-like mind gains from their life cannot increase exponentially forever, Murphy’s arguments for that conclusion are wrong. In particular, if only a fixed non-zero fraction of such minds could increase their utility exponentially, the average utility would also increase exponentially.

Also, the standard power law (Cobb-Douglas) functional form for how utility depends on several inputs says that utility can grow without bound when one sector of the economy grows without bound, even when another needed sector does not grow at all and takes a fixed fraction of income. For example, if utility U is given by U = E^a N^(1-a), where E is energy and N is non-energy, then at competitive prices the fraction of income going to the energy sector is fixed at a, no matter how big N gets. So N can grow without bound, making U grow without bound, while E is fixed.
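
A quick numerical check of that claim (a sketch; the parameter values here are arbitrary):

```python
def utility(E: float, N: float, a: float) -> float:
    # Cobb-Douglas utility: U = E^a * N^(1-a)
    return E**a * N**(1 - a)

def energy_income_share(E: float, N: float, a: float) -> float:
    # With each input paid its marginal product, energy's share of income
    # is E * dU/dE divided by total income E * dU/dE + N * dU/dN.
    dU_dE = a * E**(a - 1) * N**(1 - a)
    dU_dN = (1 - a) * E**a * N**(-a)
    return (E * dU_dE) / (E * dU_dE + N * dU_dN)

a, E = 0.1, 1.0
for N in (1.0, 10.0, 100.0, 1000.0):
    print(N, utility(E, N, a), energy_income_share(E, N, a))
# Utility keeps rising as N grows with E fixed, while energy's share stays at a = 0.1.
```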

My skepticism on exponential growth is instead based on an expectation of strongly diminishing returns to everything, including improved designs:

Imagine that … over the last million years they’ve also been searching the space of enjoyable virtual reality designs. From the very beginning they had designs offering people vast galaxies of fascinating exotic places to visit, and vast numbers of subjects to command. (Of course most of that wasn’t computed in much detail until the person interacted with related things.) For a million years they have searched for possible story lines to create engaging and satisfying experiences in such vast places, without requiring more computational resources behind the scenes to manage.

Now in this context, imagine what it means for “imagination” to improve by 4% per year. That is a factor of a billion every 529 years. If we are talking about utility gains, this means that you’d be indifferent between keeping a current virtual reality design, or taking a one in two billion chance to get a virtual reality design from 529 years later. If you lose this gamble, you have to take a half-utility design, which gives you only half of the utility of the design you started with. …

It may be possible to create creatures who have such strong preferences for subtle differences, differences that can only be found after a million or trillion years of a vast galactic or larger civilization searching the space of possible designs. But humans do not seem remotely like such creatures. (more)
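
Checking the arithmetic in that quoted passage (a one-off calculation, nothing more):

```python
import math

growth_rate = 0.04   # 4% per year improvement, as in the quote
factor = 1e9         # "a factor of a billion"

years_exact = math.log(factor) / math.log(1 + growth_rate)
print(years_exact)             # ~528.4
print(math.ceil(years_exact))  # 529: first whole year by which the factor exceeds a billion
```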

Neither mass, nor energy usage, nor population, nor utility per person for fixed mass and energy can grow exponentially forever.

Henson On Ems

Keith Henson, of whom I’ve long been a fan, has a new article where he imagines our descendants fragmenting, Roman-Empire-like, into distinct cultures, each culture a ~300 meter sphere holding ~30 million ems, each em running ~1 million times faster than a human, with each sphere using ~1TW of power and sitting in the ocean for cooling. The 300m radius comes from a max of two subjective seconds of communication delay, and the 30 million number comes from assuming a shell of ~10cm cubes, each an em. (Quotes below)

The 10cm size could be way off, but the rest is reasonable, at least given Henson’s key assumptions that 1) competition to seem sexy would push ems to run as fast as feasible, and 2) the scale of em “population centers” and cultures is set by the distance at which talk suffers a delay of two subjective seconds.
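
A back-of-envelope check of those figures (a sketch using only the numbers quoted above plus light-speed signaling; nothing here comes from Henson’s article beyond those numbers):

```python
SPEED_OF_LIGHT = 3.0e8     # m/s
SPEEDUP = 1.0e6            # ems run ~1 million times faster than humans
SPHERE_DIAMETER_M = 600.0  # ~300m radius
POWER_W = 1.0e12           # ~1TW per sphere
NUM_EMS = 3.0e7            # ~30 million ems per sphere

# One-way signal delay across the sphere, in real and subjective seconds.
real_delay_s = SPHERE_DIAMETER_M / SPEED_OF_LIGHT
print(real_delay_s * SPEEDUP)  # ~2 subjective seconds, matching the stated limit

# Average power budget per em.
print(POWER_W / NUM_EMS)       # ~33 kW per em
```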

Alas those are pretty unreasonable assumptions. Ems don’t reproduce via sex, and would be selected for not devoting lots of energy to sex. Yes, sex is buried deep in us, so ems would still devote some energy to it. But not so much as to make sex the overwhelming factor that sets em speeds. Not given em econ competitive pressures and the huge selection factors possible. I’m sure it is sexy today to spend money like a billionaire, but most people don’t because they can’t afford to. Since running a million times faster should cost a million times more, ems might not be able to afford that either.

Also, the scale at which we can talk without delay has just not been that important historically in setting our city and culture scales. We had integrated cultures even when talking suffered weeks of delay, we now have many cultures even though we can all talk without much delay, and city scales have been set more by how far we can commute in an hour than by communication delays. So while ems might well have a unit of organization corresponding to their easy-talk scale, important interactions should also exist at larger scales.

Those promised quotes from Henson’s article: Continue reading "Henson On Ems" »

Turbulence Contrarians

A few months ago I came across an intriguing contrarian theory:

Hydrogravitational-dynamics (HGD) cosmology … predicts … Earth-mass planets fragmented from plasma at 300 Kyr [after the big bang]. Stars promptly formed from mergers of these gas planets, and chemicals C, N, O, Fe etc. were created by the stars and their supernovae. Seeded gas planets reduced the oxides to hot water oceans [at 2 Myr], … [which] hosted the first organic chemistry and the first life, distributed to the 10^80 planets of the cosmological big bang by comets. … The dark matter of galaxies is mostly primordial planets in proto globular star cluster clumps, 30,000,000 planets per star (not 8!). (more)

Digging further, I found that these contrarians have related views on the puzzlingly high levels of mixing found in oceans, atmospheres, and stars. For example, some invoke fish swimming to explain otherwise puzzlingly high levels of ocean water mixing. These turbulence contrarians say that most theorists neglect an important long tail of rare bursts of intense turbulence, each followed by long-lasting “contrails.” These rare bursts not only mix oceans and atmospheres, they also supposedly create a more rapid clumping of matter in the early universe, leading to more, and earlier, nomad planets (planets not tied to stars), which could then lead to early life and its rapid spread.

I didn’t understand turbulence well enough to judge these theories, so I set it all aside. But over the last few months I’ve noticed many reports about puzzling numbers and locations of planets:

What has puzzled observers and theorists so far is the high proportion of planets — roughly one-third to one-half — that are bigger than Earth but smaller than Neptune. … Furthermore, most of them are in tight orbits around their host star, precisely where the modellers say they shouldn’t be. (more)

Last year, researchers detected about a dozen nomad planets, using a technique called gravitational microlensing, which looks for stars whose light is momentarily refocused by the gravity of passing planets. The research produced evidence that roughly two nomads exist for every typical, so-called main-sequence star in our galaxy. The new study estimates that nomads may be up to 50,000 times more common than that. (more)

This new study was theoretical. It fit a power law to the distribution of nomad planet microlensing observations, and used that best-fit power law to predict ~60 Pluto-sized or larger nomad planets per star. When projected down to the comet scale, this power law actually matches known bounds on comet density. The 95% confidence-level upper bound on the power law parameter gives 100,000 such wandering Plutos or larger per star.
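
To illustrate the kind of extrapolation involved (a sketch only: the slope below is a hypothetical placeholder, not the study’s fitted value, and the masses are rough):

```python
def nomads_above_mass(mass_kg: float, alpha: float,
                      plutos_per_star: float, pluto_mass_kg: float = 1.3e22) -> float:
    # Cumulative power law N(> m) per star, normalized so that the count
    # above Pluto's mass equals the chosen Plutos-per-star figure.
    return plutos_per_star * (mass_kg / pluto_mass_kg) ** (-alpha)

ALPHA = 1.3             # hypothetical slope, for illustration only
PLUTOS_PER_STAR = 60    # the study's best-fit count of Pluto-or-larger nomads per star
COMET_MASS_KG = 1.0e13  # a rough comet-nucleus mass

print(nomads_above_mass(1.3e22, ALPHA, PLUTOS_PER_STAR))         # 60.0, by construction
print(nomads_above_mass(COMET_MASS_KG, ALPHA, PLUTOS_PER_STAR))  # comet-scale extrapolation
```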

I take all this as weak support for something in the direction of these contrarian theories – there are more nomad planets than theorists expected, and some of that may come from neglect of early universe turbulence. But thirty million nomad Plutos per star still seems pretty damn unlikely.

FYI, here is part of an email I sent the authors in mid December, as yet unanswered: Continue reading "Turbulence Contrarians" »

Impossible Is Real

Fermi famously argued that either aliens aren’t out there, or they can’t or don’t want to get here, since we see none around us. An undated Stephen Hawking lecture makes a similar argument about time travel:

If sometime in the future, we learn to travel in time, why hasn’t someone come back from the future, to tell us how to do it. Even if there were sound reasons for keeping us in ignorance, human nature being what it is, it is difficult to believe that someone wouldn’t show off, and tell us poor benighted peasants, the secret of time travel. … If governments were hiding something, they are doing a pretty poor job of extracting useful information from the aliens. … Once you admit that some are mistakes, or hallucinations, isn’t it more probable that they all are, than that we are being visited by people from the future, or the other side of the galaxy? If they really want to colonize the Earth, or warn us of some danger, they are being pretty ineffective.

Many people seem quite resistant to the idea that fundamental limits might apply to our descendants, limits that continue even after trillions of years of advancement. But if we do have vast hordes of descendants over trillions of years, almost none of them will have both the ability and the inclination to travel back in time to visit us now, because almost none are visiting. Some things really are impossible.

Physics vs. Economics

At my prodding, Sean Carroll considered the differing public treatment of physicists and economists:

In the public imagination, natural scientists have figured out a lot more reliable and non-obvious things about the world, compared to what non-experts would guess, than social scientists have. The insights of quantum mechanics and relativity are not things that most of us can even think sensibly about without quite a bit of background study. Social scientists, meanwhile, talk about things most people are relatively familiar with.

Hey, economists can talk obscure technical jargon just as easily as physicists. We don’t actually do that so much in public, because the public respects us less. Talking more technically wouldn’t make the public respect us more. Continue reading "Physics vs. Economics" »

Still Scandalous Heat

Me in ’09:

Physicists have long considered [thermodynamics] the physics area least likely to be overturned by future discoveries, in part because they understand it so well via “statistical mechanics.” Alas, not only are we far from understanding thermodynamics, the situation is much worse than most everyone admits! (more; related posts)

Many hope the theory of inflation can solve this problem of our universe’s surprisingly low-entropy past, by predicting that a universe like ours would arise naturally out of “chaos.” But in the April Scientific American, Paul Steinhardt, a major contributor to inflationary theory, says:

Something peculiar has happened to inflationary theory in the 30 years since Guth introduced it. As the case for inflation has grown stronger, so has the case against. … Not only is bad inflation more likely than good inflation, but no inflation is more likely than either. University of Oxford physicist Roger Penrose first made this point in the 1980s. … Obtaining a flat universe without inflation is much more likely than with inflation — by a factor of 10 to the googol (10^100) power! …

Many leading theorists argued that the problems with inflation are mere teething pains and should not shake our confidence in the basic idea. Others (including me) contended that the problems cut to the core of the theory, and it needs a major fix or must be replaced. (more)

Sean Carroll’s latest post reaffirms the point:

Imagine that you want to wait long enough to see something like the Big Bang fluctuate randomly out of empty space. How will it actually transpire? It will not be a sudden WHAM! in which nothingness turns into the Big Bang. Rather, it will be just like the observed history of our universe — just played backward. A collection of long-wavelength photons will gradually come together; radiation will focus on certain locations in space to create white holes; those white holes will spit out gas and dust that will form into stars and planets; radiation will focus on the stars, which will break down heavy elements into lighter ones; eventually all the matter will disperse as it contracts and smooths out to create a giant Big Crunch. Along the way people will un-die, grow younger, and be un-born; omelets will convert into eggs; artists will painstakingly remove paint from their canvases onto brushes. Now you might think: that’s really unlikely. And so it is! But that’s because fluctuating into the Big Bang is tremendously unlikely. (more)

We are not remotely close to having a reasonable account for the incredibly low entropy we seem to see in our past.

Are Spirits Dark?

We see stars and galaxies moving in ways they should not; … we deduce the existence of hitherto unobserved substances, provisionally called dark matter and dark energy. … [Dark matter] outweighs ordinary matter by a factor of 6 to 1. Galaxies and galaxy clusters are embedded in giant balls, or “halos,” of dark matter. … It has to consist of particles that scarcely interact with ordinary matter. …

Could there be a whole sector of hidden particles? Could there be a hidden world that is an exact copy of ours, containing hidden versions of electrons and protons, which combine to form hidden atoms and molecules, which combine to form hidden planets, hidden stars and even hidden people? …

Hidden worlds cannot be an exact copy of our visible world. … Halos would have flattened out to form disks like that of the Milky Way. … [They] would have affected cosmic expansion, altering the synthesis of hydrogen and helium in the early universe. … That said, the dark world might indeed be a complicated web of particles and forces. … Dark matter may be accompanied by … a hidden version of electromagnetism, implying that dark matter may emit and reflect hidden light. … The observation that small galaxies are systematically rounder than their larger cousins would be a telltale sign of dark matter interacting through new forces. …

The theoretical case for a complex dark world is now so compelling that many researchers would find it more surprising if dark matter turned out to be nothing more than an undifferentiated swarm of [weakly interacting massive particles]. After all, visible matter comprises a rich spectrum of particles with multiple interactions determined by beautiful underlying symmetry principles, and nothing suggests that dark matter and dark energy should be any different. (more)

Many people have a strong intuition that around us there are “spirits”, i.e., unseen intelligences who are usually hidden, but who sometimes touch our lives and world. The hypothesis that these spirits are made of ordinary matter and big enough to see is extremely hard to square with common observations. And the hypotheses that they are made of ordinary matter but too small to see, or usually hiding in space or deep underground but popping into our areas on occasion, are also pretty hard to square with expert observations. We keep getting better at seeing things, and see little evidence of anything remotely similar. Intelligences must eat something, defecate something, have evolved from something, etc., all of which leaves traces.

But physics today does offer one plausible place for a spirit hypothesis, in “dark matter.” We know that our kind of matter (electrons, protons, etc.) makes up less than 5% of the mass of the universe around us. The rest is a mysterious matter that interacts only very weakly with our kind of matter, but could interact more strongly with itself, and in complex ways. And while we know most of this dark matter cannot be made of heavy things that clump tightly like our matter does, a substantial fraction of it (say ~1%) could.

So there could well be complex intelligences made out of dark matter, and there might also be ways for them to rarely interact with our world, though such interaction would probably require them to exert great and careful efforts. And furthermore, since dark matter is a high priority research topic, if there is complex clumping dark matter we’ll probably know about it within a half century. Perhaps even a decade.

We thus face a unique chance for folks with strong intuitions for or against spirits to make testable predictions, and even put their money where their mouth is. Those who think spirits likely should think substantial clumping dark matter is likely, and there should also be many who think neither is likely. I’ll go on record saying I doubt any substantial fraction of dark matter can support complex structures conducive to the evolution of life. What say you?
