Tag Archives: Engineering

The Planiverse

I recently praised Planiverse as peak hard science fiction. But as I hadn’t read it in decades, I thought maybe I should reread it to see if it really lived up to my high praise.

The basic idea is that a computer prof and his students in our universe create a simulated 2D universe, which then somehow becomes a way to view and talk to one particular person in a real 2D universe. This person is contacted just as they begin a mystical quest across their planet’s one continent, which lets the reader see many aspects of life there. Note there isn’t a page-turning plot or interesting character development; the story is mainly an excuse to describe its world.

The book seems crazy wrong on how its mystical quest ends, and on its assumed connection to a computer simulation in our universe. But I presume that the author would admit to those errors as the cost of telling his story. However, the book does very well on physics, chemistry, astronomy, geology, and low level engineering. That is, on noticing how such things change as one moves from our 3D world to this 2D world, including via many fascinating diagrams. In fact this book does far better than most “hard” science fiction. Which isn’t so surprising as it is the result of a long collaboration between dozens of scientists.

But alas no social scientists seem to have been included, as the book seems laughably wrong there. Let me explain.

On Earth, farming started when humans had a world population of ten million, and industry when that population was fifty times larger. Yet even with a big fraction of all those people helping to innovate, it took several centuries to go from steam engines to computers. Compared to that, progress in this 2D world seems crazy fast relative to its population. There people live about 130 years, and our hero rides in a boat, balloon, and plane, meets the guy who invented the steam engine, meets another guy who invented a keyboard-operated computer, and hears about a space station to which rockets deliver stuff every two weeks.

Yet the entire planet has only 25,000 people, the biggest city has 6000 people, and the biggest research city has 1000 people supporting 50 scientists. Info is only written in books, which have a similar number of pages as ours but only one short sentence per page. Each building has fewer than ten rooms, and each room can fit only a couple of people standing up and only a handful of books or other items. In terms of the space to store stuff, their houses make our “tiny houses” look like warehouses by comparison. (Their entire planet has fewer book copies than did our ancient Library at Alexandria.)

There are only 20 steam engines on their planet, and only one tiny factory that makes them. Only one tiny factory makes steel. In fact most every kind of thing is made by a single small factory of that type, which produces only a modest number of units of whatever it makes. Most machines shown have only a tiny number of parts.

Their 2D planet has a 1D surface, with one continent divided into two halves by one mountain peak. The two ends of that continent are two shores, and on each shore the fishing industry consists of ~6 boats, each of which fits two people and an even smaller mass of fish. I have a hard time believing that enough fish would drift near enough to the shore to fill even these boats once a day.

As the planet surface is 1D, everyone must walk over or under everything and everyone else, including every rock and plant, in order to travel any nontrivial distance. So our hero has to basically go near everyone and everything in his journey from one shore to the mountain peak. Homes are buried underground, and must close their top doors against the rivers that periodically wash over them.

So in sum, the first problem with Planiverse is that it has far too few people to support an industrial economy, especially one developing at the rate claimed for it. Each industry is too small to support much in the way of learning, scale economies, or a division of labor. It is all just too small.

So why not just assume a much larger world? Because then transport costs get crazy big. If there’s only one factory that makes a given kind of thing, then to get one to everyone, each item has to be moved on average past half of everything and everyone, a cost that grows linearly with how many things and people there are. Specialization and transportation are in conflict.
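
To see how fast that cost grows, here is a minimal sketch in Python, with made-up home counts and spacing of my own, of the average haul when a single factory at the best possible location must serve every home along a 1D line:

```python
# A minimal sketch (my own illustration, not from the book) of how average
# shipping distance grows in a 1D world served by a single factory.

def mean_shipping_distance(n_homes: int, spacing: float = 1.0) -> float:
    """Average distance from a factory at the midpoint of a line of evenly
    spaced homes to each home; this grows linearly with n_homes."""
    positions = [i * spacing for i in range(n_homes)]
    factory = positions[n_homes // 2]  # best case: factory in the middle
    return sum(abs(p - factory) for p in positions) / n_homes

for n in (25_000, 250_000, 2_500_000):
    print(f"{n:>9,} homes -> mean haul ~ {mean_shipping_distance(n):>9,.0f} units")
# Ten times as many homes means roughly ten times the average haul per item.
```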

A second, lesser problem is that the systems shown seem too small and simple to actually function. Two dimensions just don’t seem to offer enough room to hold all needed subsystems, nor can they support as much modularity in subsystem design. Yet modularity is central to system design in our world. Let me explain.

In our 3D world, systems such as cells, organisms, machines, buildings, and cities consist of subsystems, each of which achieves a different function. For example, each of our buildings may have at least 17 separate subsystems. These deal with: structural support, fresh air, temperature control, sunlight, artificial light, water, sewage, gas, trash, security surveillance, electricity, internet, ambient sound, mail transport, human activities, and human transport. Most such subsystems have a connected volume dedicated to that function, a volume that reaches close to every point in the building. For example, the electrical power system has connected wires that go to near every part of the building, and also connect to an outside power source.

In 2D, however, at most two subsystems can have connected volumes that reach near every point. To have more subsystem volumes, you have to break them up, alternating control over key connecting volumes. For example, in a flat array of streets, you can’t have arrays of north-south streets and east-west streets without having intersections that alternate, halting the flow of one direction of streets to allow flow in the other direction.

If you wanted to also have two more arrays of streets, going NW-SE and NE-SW, you’d need over twice as many intersections, or each intersection would need twice as many roads going in and out of it. With more subsystems you’d need even more numerous or conflicting intersections, making such subsystems even more limited and dependent on each other.
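
As a rough count of my own (not from the book): each pair of street directions needs its own set of alternating crossings, so the number of pairwise conflicts grows roughly quadratically with the number of subsystem arrays:

```python
from math import comb

# Each pair of direction arrays must share alternating intersections,
# so the number of pairwise conflicts grows as k choose 2.
for k in (2, 4, 8, 16):
    print(f"{k:>2} direction arrays -> {comb(k, 2):>3} pairs that must cross")
# 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120
```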

Planiverse presents some designs with a few such subsystem intersections, such as “zipper” organs inside organisms that allow volumes to alternate between being used for structural support and for transporting fluids, and a similar mechanism in buildings. It also shows how switches can be used to let signal wires cross each other. But it doesn’t really take seriously the difficulty of having 16 or more subsystem volumes all of which need to cross each other to function. The designs shown only describe a few subsystems.

If I look at the organisms, machines, buildings, and cities in my world, most of them just have far more parts with much more detail than I see in Planiverse design sketches. So I think that in a real 2D world these would all just have to be a lot more intricate and complicated, a complexity that would be much harder to manage because of all these intersection-induced subsystem dependencies. I’m not saying that life or civilization there is impossible, but we’d need to be looking at far larger and more complicated designs.

Thinking about this did make me consider how one might minimize such design complexity. And one robust solution is: packets. For example, in Planiverse electricity is moved not via wires but via batteries, which can use a general transport system that moves many other kinds of objects. And instead of air pipes they use air bottles. So the more kinds of subsystems that can be implemented via packets, all carried by the same generic transport system, the less you have to worry about subsystem intersections. Packets are what allow many kinds of signal systems to all share the same internet communication network. Even compressive structural support can in principle be implemented via mass packets flying back and forth.
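
Here is a minimal sketch of the packet idea, with hypothetical names of my own, showing how one generic transport loop could serve several subsystems at once by carrying typed packets (batteries for power, bottles for air, letters for mail):

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Packet:
    kind: str       # e.g. "battery", "air_bottle", "letter"
    payload: float  # amount of charge, air, etc.
    dest: str       # destination room

class GenericTransport:
    """One shared conveyor serving many subsystems, so those subsystems
    never need their own dedicated, mutually crossing volumes."""
    def __init__(self):
        self.queue = []
        self.delivered = defaultdict(list)

    def send(self, packet: Packet):
        self.queue.append(packet)

    def run(self):
        while self.queue:
            p = self.queue.pop(0)
            self.delivered[(p.dest, p.kind)].append(p.payload)

net = GenericTransport()
net.send(Packet("battery", 12.0, "kitchen"))   # power as packets
net.send(Packet("air_bottle", 5.0, "cellar"))  # fresh air as packets
net.send(Packet("letter", 1.0, "study"))       # mail as packets
net.run()
print(dict(net.delivered))
```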

In 1KD, i.e. a thousand dimensions, there is plenty of volume for different subsystems to each have their own connected volume. The problem there is that it is crazy expensive to put walls around such volumes. Each subsystem might have its own set of wires along which signals and materials are moved. But then the problem is to keep these wires from floating away and bumping into each other. It seems better to have fewer shared systems of wires, with each subsystem using its own set of packets moving along those wires. Thus outside of our 3D world, the key to designing systems with many different kinds of subsystems seems to be packets.

In low D, one pushes different kinds of packets through tubes, while in high D, one drags different kinds of packets along attached to wires. Packets moving along wires win for 1KD. Though as of yet I have no idea how to attach packets so they can move along a structure of wires in 1KD. Can anyone figure that out, please?


Monster Pumps

Yesterday’s Science has a long paper on an exciting new scaling law. For a century we’ve known that larger organisms have lower metabolisms, and thus lower growth rates. Metabolism goes as size to the power of 3/4 over at least twenty orders of magnitude:

[Figure: metabolism vs. body size, scaling as the 3/4 power across ~20 orders of magnitude]

So our largest organisms have a per-mass metabolism one hundred thousand times lower than our smallest organisms.
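
That hundred-thousand-fold figure follows directly from the 3/4 power; a quick check, taking the stated twenty orders of magnitude in size:

```python
# Per-mass metabolism scales as mass**(3/4) / mass = mass**(-1/4).
mass_ratio = 1e20                      # largest vs. smallest organisms
per_mass_ratio = mass_ratio ** (-1/4)  # ratio of per-mass metabolisms
print(per_mass_ratio)                  # 1e-05: one hundred thousand times lower
```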

The new finding is that local metabolism also goes as local biomass density to the power of roughly 3/4, over at least three orders of magnitude. This implies that life in dense areas like jungles is just slower and lazier on average than is life in sparse areas like deserts. And this implies that the ratio of predator to prey biomass is smaller in jungles compared to deserts.

When I researched how to cool large em cities I found that our best cooling techs scale quite nicely, and so very big cities need only pay a small premium for cooling compared to small cities. However, I’d been puzzled about why biological organisms seem to pay much higher premiums to be large. This new paper inspired me to dig into the issue.

What I found is that human engineers have figured ways to scale large fluid distribution systems that biology has just never figured out. For example, the hearts that pump blood through animals are periodic pumps, and such pumps have the problem that the pulses they send through the blood stream can reflect back from joints where blood vessels split into smaller vessels. There are ways to design joints to eliminate this, but those solutions create a total volume of blood vessels that doesn’t scale well. Another problem is that blood vessels taking blood to and from the heart are often near enough to each other to leak heat, which can also create a bad scaling problem.

The net result is that big organisms on Earth are just noticeably sluggish compared to small ones. But big organisms don’t have to be sluggish, that is just an accident of the engineering failures of Earth biology. If there is a planet out there where biology has figured out how to efficiently scale its blood vessels, such as by using continuous pumps, the organisms on that planet will have fewer barriers to growing large and active. Efficiently designed large animals on Earth could easily have metabolisms that are thousands of times faster than in existing animals. So, if you don’t already have enough reasons to be scared of alien monsters, consider that they might have far faster metabolisms, and also be very large.

This seems yet another reason to think that biology will soon be over. Human culture is inventing so many powerful advances that biology never found, innovations that are far easier to integrate into the human economy than into biological designs. Descendants that integrate well into the human economy will just outcompete biology.

I also spent a little time thinking about how one might explain the dependence of metabolism on biomass density. I found I could explain it by assuming that the more biomass there is in some area, the less energy each unit of biomass gets from the sun. Specifically, I assume that the energy collected from the sun by the biomass in some area has a power law dependence on the biomass in that area. If biomass were very efficiently arranged into thin solar collectors then that power would be one. But since we expect some biomass to block the view of other biomass, a problem that gets worse with more biomass, the power is plausibly less than one. Let’s call this power a; it relates biomass density B to energy collected per area E, as in E = cB^a.

There are two plausible scenarios for converting energy into new biomass. When the main resource needed to make new biomass via metabolism is just energy to create molecules that embody more energy in their arrangement, then M = cB^(a-1), where M is the rate of production of new biomass relative to old biomass. When new biomass doesn’t need much energy, but it does need thermodynamically reversible machinery to rearrange molecules, then M = cB^((a-1)/2). These two scenarios reproduce the observed 3/4 power scaling law when a = 3/4 and a = 1/2 respectively. When making new biomass requires both simple energy and reversible machinery, the required power a is somewhere between 1/2 and 3/4.
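
A quick check of those exponents, under the post’s assumptions: total metabolism scales as B^(3/4), so per-biomass production M scales as B^(-1/4):

```python
# Scenario 1: M ~ B**(a - 1)       ->  a - 1 = -1/4       ->  a = 3/4
# Scenario 2: M ~ B**((a - 1)/2)   ->  (a - 1)/2 = -1/4   ->  a = 1/2
print(1 + (-1/4))      # 0.75, the energy-limited case
print(1 + 2 * (-1/4))  # 0.5,  the reversible-machinery case
```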

Added 14Sep: On reflection and further study, it seems that biologists just do not have a good theory for the observed 3/4 power. In addition, the power deviates substantially from 3/4 within smaller datasets.


Signal Mappers Decouple

Andrew Sullivan notes that Tim Lee argues that ems (whole brain emulations) just won’t work:

There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make accurate long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)

Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math than does natural selection to assist them. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.

A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other “extra” physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.

Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.

We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.

The brain still functions reasonably well even when flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes, people on “drugs” don’t function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

Remember, my main claim is that whole brain emulation will let machines substitute for humans across the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don’t need exact replicas.

Added 7p: Tim Lee responds:

Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …

Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.

My claim is that, in order to create economically-sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simplified description in signal processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.
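
As one illustration of what a decent signal-processing model of a cell type might look like, here is a standard leaky integrate-and-fire neuron sketch; the parameter values are illustrative assumptions of mine, and a real neuron has vastly more physical degrees of freedom, but only this input-to-spike mapping would matter for emulation:

```python
def leaky_integrate_and_fire(input_current, dt=1e-3, tau=0.02,
                             v_rest=-0.065, v_thresh=-0.050,
                             v_reset=-0.065, resistance=1e8):
    """Map an input current trace (amps per time step) to output spike
    times (seconds): the membrane voltage leaks toward rest, integrates
    the input, and emits a spike whenever it crosses threshold."""
    v, spikes = v_rest, []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 0.2 nA input for one second yields a regular spike train.
print(leaky_integrate_and_fire([0.2e-9] * 1000))
```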
