Tag Archives: Physics

The Planiverse

I recently praised Planiverse as peak hard science fiction. But as I hadn’t read it in decades, I thought maybe I should reread it to see if it really lived up to my high praise.

The basic idea is that a computer prof and his students in our universe create a simulated 2D universe, which then somehow becomes a way to view and talk to one particular person in a real 2D universe. This person is contacted just as they begin a mystical quest across their planet’s one continent, which lets the reader see many aspects of life there. Note there isn’t a page-turning plot nor interesting character development; the story is mainly an excuse to describe its world.

The book seems crazy wrong on how its mystical quest ends, and on its assumed connection to a computer simulation in our universe. But I presume that the author would admit to those errors as the cost of telling his story. However, the book does very well on physics, chemistry, astronomy, geology, and low level engineering. That is, on noticing how such things change as one moves from our 3D world to this 2D world, including via many fascinating diagrams. In fact this book does far better than most “hard” science fiction. Which isn’t so surprising as it is the result of a long collaboration between dozens of scientists.

But alas no social scientists seem to have been included, as the book seems laughably wrong there. Let me explain.

On Earth, farming started when humans had a world population of ten million, and industry when that population was fifty times larger. Yet even with a big fraction of all those people helping to innovate, it took several centuries to go from steam engines to computers. Compared to that, progress in this 2D world seems crazy fast relative to its population. There people live about 130 years, and our hero rides in a boat, balloon, and plane, meets the guy who invented the steam engine, meets another guy who invented a keyboard-operated computer, and hears about a space station to which rockets deliver stuff every two weeks.

Yet the entire planet has only 25,000 people, the biggest city has 6000 people, and the biggest research city has 1000 people supporting 50 scientists. Info is only written in books, which have a similar number of pages as ours but only one short sentence per page. Each building has less than ten rooms, and each room can fit only a couple of people standing up, and only a handful of books or other items. In terms of the space to store stuff, their houses make our “tiny houses” look like warehouses by comparison. (Their entire planet has fewer book copies than did our ancient Library at Alexandria.)

There are only 20 steam engines on their planet, and only one tiny factory that makes them. Only one tiny factory makes steel. In fact most every kind of thing is made at a single unique small factory of that type, which produces only a modest number of units of whatever it makes. Most machines shown have only a tiny number of parts.

Their 2D planet has a 1D surface, with one continent divided into two halves by one mountain peak. The two ends of that continent are two shores, and on each shore the fishing industry consists of ~6 boats, each of which fits two people and an even smaller mass of fish. I have a hard time believing that enough fish would drift near enough to the shore to fill even these boats once a day.

As the planet surface is 1D, everyone must walk over or under everything and everyone else in order to walk any nontrivial distance. Including every rock and plant. So our hero has to basically go near everyone and everything in his journey from one shore to the mountain peak. Homes are buried underground, and must close their top door against the rivers that periodically wash over them.

So in sum, the first problem with Planiverse is that it has far too few people to support an industrial economy, especially one developing at the rate claimed here. Each industry is too small to support much in the way of learning, scale economies, or a division of labor. It is all just too small.

So why not just assume a much larger world? Because then transport costs get crazy big. If there’s only one factory that makes a kind of thing, then to get one of it to everyone, each item has to be moved on average past half of everything and everyone, a cost that grows linearly with how many things and people there are. Specialization and transportation are in conflict.
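To make that arithmetic concrete, here is a toy version, under my own simplifying assumption of N households spread evenly along a 1D surface of length N*d, with one factory per good type located at the midpoint: the average haul per delivered item is about N*d/4, so the total haul for one good type is about N^2*d/4, and the transport cost per household for each good grows in proportion to N.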

A second lesser problem is that the systems shown seem too small and simple to actually function. Two dimensions just don’t seem to offer enough room to hold all needed subsystems, nor can they support as much modularity in subsystem design. Yet modularity is central to system design in our world. Let me explain.

In our 3D world, systems such as cells, organisms, machines, buildings, and cities consist of subsystems, each of which achieves a different function. For example, each of our buildings may have at least 17 separate subsystems. These include: structural support, fresh air, temperature control, sunlight, artificial light, water, sewage, gas, trash, security surveillance, electricity, internet, ambient sound, mail transport, human activities, and human transport. Most such subsystems have a connected volume dedicated to that function, a volume that reaches close to every point in the building. For example, the electrical power system has connected wires that go to near every part of the building, and also connect to an outside power source.

In 2D, however, a region can contain at most two subsystems whose connected volumes go near every point. To have more subsystem volumes, you have to break them up, alternating control over key connecting volumes. For example, in a flat array of streets, you can’t have arrays of north-south streets and east-west streets without having intersections that alternate, halting flow in one direction to allow flow in the other.

If you wanted to also have two more arrays of streets, going NW-SE and NE-SW, you’d need over twice as many intersections, or each intersection would need twice as many roads going in and out of it. With more subsystems you’d need even more numerous or conflicting intersections, making such subsystems even more limited and dependent on each other.

Planiverse presents some designs with a few such subsystem intersections, such as “zipper” organs inside organisms that allow volumes to alternate between being used for structural support and for transporting fluids, and a similar mechanism in buildings. It also shows how switches can be used to let signal wires cross each other. But it doesn’t really take seriously the difficulty of having 16 or more subsystem volumes all of which need to cross each other to function. The designs shown only describe a few subsystems.

If I look at the organisms, machines, buildings, and cities in my world, most of them just have far more parts with much more detail than I see in Planiverse design sketches. So I think that in a real 2D world these would all just have to be a lot more intricate and complicated, a complexity that would be much harder to manage because of all these intersection-induced subsystem dependencies. I’m not saying that life or civilization there is impossible, but we’d need to be looking at far larger and more complicated designs.

Thinking about this did make me consider how one might minimize such design complexity. And one robust solution is: packets. For example, in Planiverse electricity is moved not via wires but via batteries, which can use a general transport system that moves many other kinds of objects. And instead of air pipes they use air bottles. So the more kinds of subsystems that can be implemented via packets that are all transported via the same generic transport system, the less you have to worry about subsystem intersections. Packets are what allow many kinds of signal systems to all share the same internet communication network. Even compression structural support can in principle be implemented via mass packets flying back and forth.

In 1KD dimensions, there is plenty of volume for different subsystems to each have their own connected volume. The problem there is that it is crazy expensive to put walls around such volumes. Each subsystem might have its own set of wires along which signals and materials are moved. But then the problem is to keep these wires from floating away and bumping into each other. It seems better to have fewer shared systems of wires, with each subsystem using its own set of packets moving along those wires. Thus outside of our 3D world, the key to designing systems with many different kinds of subsystems seems to be packets.

In low D, one pushes different kinds of packets through tubes, while in high D, one drags different kinds of packets along attached to wires. So packets moving along wires win in 1KD. Though as of yet I have no idea how to attach packets so that they can move along a structure of wires in 1KD. Can anyone figure that out please?


Life in 1KD

Years ago I read Flatland and Planiverse, stories set in a two-dimensional universe. To me these are the epitome of “hard science fiction”, wherein one makes one (or a few) key contrary assumptions, and then works out their physical and social consequences. I’ve tried to do similarly in my work on the Age of Em and the Hardscrapple Frontier.

Decades ago I thought: why not flip the dimension axis, and consider life in a thousand spatial dimensions? I wrote up some notes then, and last Thursday I was reminded of Flatland, which inspired me to reconsider the issue. Though I couldn’t find much prior work on what life is like in this universe, I feel like I’ve been able to quickly guess many plausible implications in just a few days.

But rather than work on this in secret for more months or years, perhaps with a few collaborators, I’d rather show everyone what I have now, in the hope of inspiring others to contribute. This seems the sort of project on which we can more easily work together, as we less need to judge the individual quality of contributors; we can instead just ask each to “prove” their claims via citations, sims, or math.

Here is what I have so far. I focus on life made out of atoms, but now in a not-curved unlimited space of dimension D=1024 (=2^10), plus one time dimension. I assume that some combination of a big bang and hot stars once created hot dense plasmas with equal numbers of electrons and protons, and with protons clumped into nuclei of varying sizes. As the universe or star regions expanded and cooled, nuclei and electrons bound into atoms, and then atoms into molecules, after which those clumped into liquids or solids. Molecules and compounds first accreted atoms, then merged with each other, and finally perhaps added internal bonds.

A cubic array of atoms of length L with as many surface as interior atoms satisfies (L/(L-2))^D = 2, which for D = 1024 gives L = 2956. Such a cube has (2956)^1024 atoms in total. As I hereby define 2^(2^10) to be “crazy huge” and 2^(-2^10) to be “crazy tiny”, this is a more than crazy huge array. (“Crazy huge” is ~100K times a “centillion”. “Astronomical” numbers are tiny by comparison to these.)
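A quick numerical check of that side length, using nothing beyond the equation just stated (a Python sketch):

```python
# Solve (L/(L-2))^D = 2 for L, the side length at which a cubic array
# has as many surface atoms as interior atoms.
from math import log

D = 1024
L = 2 * 2 ** (1 / D) / (2 ** (1 / D) - 1)
print(L)                   # ~2955.6, i.e. the L ~ 2956 quoted above
print(2 * D / log(2) + 1)  # handy approximation: L ~ 2D/ln(2) + 1
```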

We thus conclude that solids or liquids substantially smaller than crazy huge have almost no interiors; they are almost all surface. If they are coupled strongly enough to a surrounding volume of uniform temperature or pressure, then they also have uniform parameters like that. Thus not-crazy-huge objects can’t have separated pipes or cavities. Stars with differing internal temperatures must also be extra crazy huge.

The volume V(r,D) of a sphere of radius r in D dimensions is V = r^D pi^(D/2) / (D/2)!. For dimensions D = (1,2,3,8,24), the densest packing of spheres of unit radius is known to be respectively (0.5,0.28,0.18,0.063,1) spheres per unit volume. The largest D for which this value is known is 24, where the sphere volume fraction (i.e., fraction of volume occupied by spheres) is V(1,24) ~= 1/518. If we assume that for D=1024 the densest packing is also no more than one unit sphere per unit volume, then the sphere volume fraction there is no more than V(1,1024) = 10^-912. So even when atoms are packed as closely as possible, they fill only a crazy tiny fraction of the volume.
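Those volume numbers are easy to check with log-gamma arithmetic; here is a minimal Python sketch (the function name is mine):

```python
# log10 of the D-ball volume V(r, D) = r^D * pi^(D/2) / Gamma(D/2 + 1)
from math import pi, lgamma, log10, log

def log10_ball_volume(r, D):
    return D * log10(r) + (D / 2) * log10(pi) - lgamma(D / 2 + 1) / log(10)

print(10 ** log10_ball_volume(1, 24))  # ~0.00193, i.e. ~1/518 for D = 24
print(log10_ball_volume(1, 1024))      # ~ -912, so V(1,1024) ~ 10^-912
```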

If the mean-free path in a gas of atoms of radius r is the gas volume per atom divided by atom collision cross-section V(2r,D-1), and if the maximum packing density for D=1024 is one atom of unit radius per unit volume, then the mean free path is 10^602.94. It seems that high dimensional gases have basically no internal interactions. I worry that this means that the big bang doesn’t actually cause nuclei, atoms, and molecules to form. But I’ll assume they do form as otherwise we have no story to tell.
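The mean-free-path figure follows from the same volume formula, under the assumptions just stated (unit atom radius, one atom per unit volume, cross-section V(2r, D-1)); a sketch:

```python
from math import pi, lgamma, log10, log

def log10_ball_volume(r, D):
    # log10 of V(r, D) = r^D * pi^(D/2) / Gamma(D/2 + 1)
    return D * log10(r) + (D / 2) * log10(pi) - lgamma(D / 2 + 1) / log(10)

D = 1024
log10_cross_section = log10_ball_volume(2, D - 1)  # ~ -602.94
log10_mean_free_path = -log10_cross_section        # volume per atom = 1
print(log10_mean_free_path)                        # ~602.94, as claimed
```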

Higher dimensions allow far more direction and polarization degrees of freedom for photons. The generalized Stefan-Boltzmann law, which gives the power radiated by a black body at temperature T, has product terms T^(D+1), (2 pi^0.5)^(D-1), and Gamma(D/2), all of which make atoms couple much more strongly to photons. Thus it seems high D thermal coupling is mainly via photons and phonons, not via gas.

Bonds between atoms result from different ways to cram electrons closer to atomic nuclei. In our world, ionic bonds come from moving electrons from higher energy orbital shells at one atom into lower energy shells at other atoms. This can be worth the cost of giving each atom a net charge, which then pulls the atoms together. Covalent bonds are instead due to electrons finding configurations in the space between two atoms that allow them to simultaneously sit in low shells of both atoms. Metallic bonds are covalent bonds spread across a large regular array of atoms.

Atoms seem to be possible in higher dimensions. Electrons can have more degrees of spin, and there are far more orbitals all at the lowest energy level around nuclei. Thus nuclei would need to have very large numbers of protons to fill up all the lowest energy levels. I assume that nuclei are smaller than this limit. Thus different types of atoms become much more similar to each other than they are in our D=3 universe. There isn’t a higher shell one can empty out to make an ionic bond, and all of the covalent bonds have the same simple spatial form.

The number B of covalent bonds possible per atom should be < ~3*D, and B < ~D-10 creates a huge space of possible relative rotations of bonds. Also, in high dimensions the angles between random vectors are nearly right angles. Furthermore, irregularly-shaped mostly-surface materials don’t seem to have much scope for metallic bonds. Thus in high dimensions most atom bonding comes from nearly right angle covalent bonds. Which, if they form via random accretion, creates molecules in the shape of spatial random walks of bonds in 1024 dimensions.

It is hard to imagine making life and complex machines without making rigid structures. But rigid structures require short loops in the network of bonds, and for high D these seem unlikely to form due to random meetings of atoms in a gas or liquid; other random atoms would bond at a site long before nearby connected atoms got around to trying.

If a network of molecular bonds between N atoms has no loops, then it is a tree, and thus has N-1 bonds, giving less than two bonds per atom on average. But for B >> 2, this requires almost all potential bonds to be unrealized. Thus if most atoms in molecules have B >> 2 and most potential bonds are realized, those molecules can’t be trees, and so must have many loops. So in this case we can conclude that molecular bond loops are typically quite long. (How long?) Also, the most distinctive types of atoms are those with B = 1,2, as enough of these can switch molecules between being small and very large.

Molecules with only long loops allow a lot of wiggling and reshaping along short stretches, and resist deformations only on relatively large scales. And when many atoms with B < D-2 are close to each other, most neighboring atoms will not be bonded, and can thus slide easily past each other. Thus on the smallest scales natural objects should be liquids, not solids nor metals. And in a uniform density fluid of atoms that randomly forms local bonds as it cools, the connectivity should be global, extending across the entire expanded-and-cooled-together region.

Perhaps short molecular loops might be produced by life-like processes wherein some rare initial loops catalyze the formation of other matching loops. However, as it seems harder to form higher dimensional versions, perhaps life structures are usually low dimensional, and so must struggle to maintain the relative orientation of the “planes” of their different life parts. Life made this way might envy our ease of creating bond loops in low spatial dimensions; did they create our universe as their life utopia?

We have yet to imagine how to construct non-crazy-huge machines and signal processing devices in such a universe. What are simple ways to make wires, gates, levers, joints, rotors, muscles, etc.? Could the very high D space of molecule vibrations be used to good effect? Copying the devices in our universe by extending them in all dimensions is possible but often results in crazy huge objects. Nor do we know what would be the main sources of negentropy. Perhaps gravity clumping, or non-interacting materials that drift out of equilibrium as the universe expands?

The dynamics of a uniformly expanding universe is described by a scale factor a(t), which says how far things have spread apart at each time. For a matter-dominated universe a(t) goes as t^(2/(D-1)), and for a radiation-dominated universe a(t) goes as t^(2D/((D-1)(D+1))). For matter, density goes as a(t)^-D, while for radiation it goes as a(t)^-(D+1). In both cases, we have density falling as t^(-2D/(D-1)), which is roughly t^-2 for large D. Thus as a high D universe expands, its density falls in time much like it does in low D, but its distances increase far more slowly. There is little expansion-based redshift in high D.
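As a quick consistency check, using only the exponents just quoted: for matter, density goes as a^-D, i.e. as (t^(2/(D-1)))^-D = t^(-2D/(D-1)); for radiation, density goes as a^-(D+1), i.e. as (t^(2D/((D-1)(D+1))))^-(D+1) = t^(-2D/(D-1)). So the two cases do fall at the same rate.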

When an expanding region cools enough for molecules to connect across long distances, its further expansion will tend to pull molecular configurations from initially random walks in space more toward long straight lines between key long-loop junctures. This makes it easier for phonons to travel along these molecules, as bond angles are no longer nearly right angles. For the universe, this added tension is not enough to kick it into an exponentially expanding mode; instead the expansion power law changes slightly. Eventually the tension gets large enough to break the atomic bonds, but this takes a long time as widths change only slowly with volumes in high D. (What are typical diameters of the remaining broken molecules?)

As the universe ages, the volume and amount of stuff that one could potentially see from any one vantage point increases very rapidly, like t^(D-1). However, the density or intensity of any emissions that one might intercept also falls very fast as distance d via d^-(D-1), making it hard to see anything very far. In high dimensions it is extremely hard to have a comprehensive view of everything in all directions, and also very hard to see very far in any one direction, even if you focus all of your attention there.

When two powers have a physical fight in this universe, their main problem seems to be figuring out their relative locations and orientations. It might be easy to send a missile to hit any particular location, and nearly impossible for the target to see such a missile coming or to block its arrival. But any extended object probably does not know very well the locations or orientations of its many parts, nor is it even well informed about most of the other objects which it directly touches. It knows far less about objects even a few atoms’ width away in any direction. So learning the locations of enemies could be quite hard.

Finding good ways to learn locations and orientations, and to fill and update maps of what is where, would be major civilization achievements. As would accessing new sources of negentropy. Civilizations should also be able to expand in space at a very rapid t^(D-1) speed.

A high D universe of trivial topology and any decent age encompasses crazy huge volumes and numbers of atoms. The origin of life becomes much less puzzling in such a universe, given the crazy huge number of random trials that can occur. It should also be easy to move a short distance and then quickly encounter many huge things about which one had very little information, things one has neither seen nor heard about via one’s network of news and talk. This creates great scope not only for adventure stories, but also for actual personal adventure.

I’ve only scratched the surface here of all the questions one could ask, and some of my answers are probably wrong. Even so, I hope I’ve whetted your appetite for more. If so, please, figure something out about life in 1KD and tell the rest of us, to help this universe come more sharply into view. In principle our standard theories already contain the answers, if only we can think them through.

Thanks to Anders Sandberg and Daniel Martin for comments.

Added 1Feb: One big source of negentropy for life to consume is all of the potential bonds not made into actual bonds on surface atoms. Life could try to carefully assemble atoms into larger dimensional structures with fewer surface atoms.

Added 2Feb: In low D repulsive forces can be used to control things, but in high D it seems that only attractive forces are of much use.


What Holds Up A North Pole of Dust?

I recently came across this news item:

Factoring in gravitomagnetism could do away with dark matter

By disregarding general relativistic corrections to Newtonian gravity arising from mass currents, … Ludwig asserts [standard] models also miss significant modifications to [galaxy] rotational curves … because of an effect in general relativity not present in Newton’s theory of gravity — frame-dragging … Ludwig presents a new model for the rotational curves of galaxies which is in agreement with previous efforts involving general relativity. … even though the effects of gravitomagnetic fields are weak, factoring them into models alleviates the difference between theories of gravity and observed rotational curves — eliminating the need for dark matter.

My initial reaction was skeptical snark. Yes, gravity has a magnetism, just as does electricity, and yes magnetism can push on stuff in a way that mimics the effects of dark matter. But I knew that this is an old hope, usually dropped after people do a standard quick calculation and see that its effect looks really weak, given the usual speeds of stars rotating in a galaxy.

But then a few days later I actually read the paper, and found myself impressed and persuaded. When I tweeted this fact, I got a lot of indignant pushback. Many said there’s no point to a paper that explains galaxy rotation curves, if it doesn’t also explain all the other data said to support dark matter. Many publicly said that the paper is almost surely wrong, because of the usual quick calculation. For example Garrett Lisi posted this:

Yet I could prod few of these denouncers to actually read the paper. (And most who did seemed to fail some basic comprehension tests.) Some even said I have too few physics journal publications (only 3) to speak publicly on the topic; I should leave that to those who refused to read the paper.

But the whole point of news and research is to be surprised, to learn things you didn’t expect. Why even have news or research if you will only allow them to confirm what you expect? The author, Gerson Ludwig, is well aware of the usual expectations, and published his finding that they are wrong in a good peer reviewed journal. Furthermore, Ludwig is part of a research tradition of at least 5 papers I’ve found (1 2 3 4 5), all of which say there’s much less need to invoke dark matter to explain galaxy rotation curves if one does calculations closer to full general relativity. If even after that everyone is going to reject the idea based on priors and a quick heuristic calculation, why do research?

So I decided to dig into this paper, to see if I couldn’t either find its mistake or explain its reasoning better. Bottom line: I found a big very questionable assumption made not only by Ludwig, but also by the other 4 papers. See if you can spot it before I tell you.

For planets orbiting stars, or moons orbiting planets, it is widely accepted that simple Newtonian gravity is an excellent approximation. But when this approach was used to study orbits of stars around galaxies, it was found to badly predict their orbital speeds (i.e., “rotation curves”). To explain this puzzle, many posit a lot more “dark matter” than what is easily seen, distributed quite differently than the stuff we easily see.

Even though the usual quick calculation suggests it won’t make a difference, a number of authors have tried to calculate these rotation curves using something closer to (but still far from) full general relativity (GR). And all of those (that I’ve found) claim that it makes a big difference, enough to solve the puzzle. For which they are also widely criticized, because priors and usual quick calculation.

Ludwig tried a standard approximation to GR that is closer than Newton, but still linear, one which we understand well as it is very like Maxwell’s equations for electromagnetism:
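(The equations themselves appeared as an image in the original post. Schematically they mirror Maxwell’s equations, with mass density and mass currents as sources, e.g. div E = -4 pi G rho, plus a Lorentz-like force per unit mass of roughly E + v x B; the exact constants, signs, and factor conventions are Ludwig’s and are not reproduced here.)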

Here E is gravity’s “electric” field, which pulls still stuff toward each other, and B is gravity’s “magnetic” field, which in addition pushes apart stuff that is moving in parallel.

As star-star collisions are very rare, Ludwig assumes a time-invariant (i.e., “equilibrium”) rotationally-symmetric system of zero-pressure (p=0) dust that only moves in the azimuthal direction. (That is “around” the galaxy, in a direction perpendicular to the radial and vertical dimensions.) This implies that E and B have only radial components E_R, B_R and vertical components E_Z, B_Z, and also that:
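(These balance equations also appeared as images in the original post. Schematically, for dust moving only azimuthally at speed v, they are a centripetal radial balance of roughly -v^2/R = E_R + v B_Z, and an axial balance of roughly 0 = E_Z - v B_R; the exact signs and factors are Ludwig’s and are not reproduced here.)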

The first (radial balance) equation says that magnetism is only a big effect on star motions in galaxies if vB becomes comparable in magnitude to E, while the second (axial balance) equation says that E and vB are in fact comparable in magnitude! Yes these are talking about different (R vs. Z) components of these vectors, but over the whole galaxy these components are connected in ways that ensure that large values of one component in one place imply large values of the other component in other related places. For example, here is a calculated B field around a spinning uniform mass sphere:

 

Thus Ludwig finds gravitomagnetism to be always important for equilibrium rotating gravitating dust! Using his model, he does a decent job of predicting rotation curves for three galaxies, using only mass distributions estimated from the light we see, though he allows some corrections and fits the ratio of mass to light to each galaxy.

So how could the usual quick calculation go so wrong here? Well, consider a point as indicated by the big red arrow here near the “North Pole” of this galaxy.

Gravity’s E should be pulling it downward, toward the center, but according to the only-azimuthal-motion assumption it is not falling down. Yet according to Equation 2.2 above, if the pressure is zero then the only other force left to hold it up is v x B. So of course these assumptions must imply a large magnetic field B, with an influence comparable to E.

But is this right, and if not, which of Ludwig’s assumptions is wrong, or at least highly questionable? I say it is his assumption of zero pressure, an assumption also made by all of the other related papers I found. Ludwig justifies his zero-pressure assumption by saying that stars almost never collide. But the concept of pressure just doesn’t depend much on collision rates!

Consider that astronomers usually say that what “holds up” stars near the red arrow is momentum. Previously, their velocities started high closer to the center, and declined as they climbed the gravitational potential to reach near that red arrow. They have recently or will soon stop rising and begin to fall back toward the center.

One can tell exactly this same story about atoms in an atmosphere. Even if they never collided, their average density and velocity would still change just the same with altitude as they flew up from the ground. Atmospheric “pressure” declines with altitude because the number of atoms that pass through any given area (and how fast and massive they are) declines, not because they actually collide. Pressure tells of the momentum transfer that would happen if the objects moving through a plane were instead to bounce off that plane; but they don’t actually have to bounce for there to be pressure.

So similarly the usual picture of galaxies “held up” by momentum is actually a picture of a non-zero pressure, a pressure highest near the center and declining away from it, and a pressure strong enough to counter gravity and “hold up” the average density of stars near the North pole. So the pressure is not near zero, even though collisions are very rare.

What about Ludwig’s empirical fits? Well he never compares them to models using non-zero pressure, so his fits don’t tell us which better explains rotation curves, pressure or gravitomagnetism. Same for all the other papers in his area.

So yes, the skeptics were right; Ludwig’s analysis contains a big questionable assumption, and so their quick calculation doesn’t obviously mislead here. And yes if you are busy and this is not your area it makes sense to just ignore his paper if you think it unlikely to be right. But if you are going to publicly denounce it as mistaken, especially on the basis of your high level of physics authority, it is more helpful if you do what I’ve tried to do, namely try to find and publicize its error, or publicly admit if you can’t. That’s how research moves forward.

Added 16Mar: These authors got at least five publications out of their mistakes, and no serious academic journal would consider publishing my rebuttal, as those publications are full of complex math and technical work, while my blog post doesn’t show much technical prowess. Which shows a well-known big bias in academia (econ too, not just physics): why bother to learn the concepts deeply if you can get many more publications and prestige via more manipulation of symbols you insufficiently understand? Why bother to look for such errors in others’ work if you can’t get publication credit for it? Even if you do make a conceptual mistake, your referees aren’t likely to notice, and even if someone publicly shows the mistake, that will likely be someone/someplace with too little academic prestige to count, or even be noticed.

 


We Will Never Learn More On Consciousness

Some complained that I didn’t include a question on consciousness in my list of big questions. My reason is that I can’t see how we will ever know more than we do now. There’s nothing to learn:

Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. (More)

Your brain is made out of quite ordinary physical materials, driven by ordinary physical processes that we understand very well at near-atomic levels of organization. It is only processes at higher levels of organization that we haven’t traced out in detail. We will eventually be able to trace in great detail and at all levels the causes of what makes you, or an em, or any variation on either, inclined to passionately claim, and believe, that you really do feel. And that will let us predict well what changes to you, or anything, might induce you, or it, to claim or believe something different.

But if you insist that none of that can possibly verify that you, or an em, actually do feel, then it can’t add any info on that issue. Yes, maybe you have intuitions inside you that often tell you if you think something that you see in front of you really feels. But such intuitions are already available to you now. Just imagine various things you might see, note your intuitions about each, compare those to others’ intuitions, and then draw your conclusions about consciousness. After all, we already have a pretty good idea of all the things we will eventually be able to see.

Okay, yes, you are probably in denial about how much the intuitions of others would influence yours, and about how strong would be the social pressures on your intuitions to accept that ems feel, if in fact you lived in a world where ems dominated. I predict that in such a situation most would accept that ems feel. Not because new info has been offered, but because of familiar social pressures. And yes, we can learn more about how our intuitions respond to such pressures. But that won’t give us any more info on the truth of what “really” feels.


Progeny Probs: Souls, Ems, Quantum

Consider three kinds of ancestry trees: 1) souls of some odd human mothers, 2) ems and their copies, and 3) splitting quantum worlds. In each kind of tree, agents can ask themselves, “Which future version of me will I become?”

SOULS  First, let’s start with some odd human mothers. A single uber-mother can give rise to a large tree of descendants via the mother relation. Each branch in the tree is a single person. The leaves of this tree are branches that lead to no more branches. In this case, leaves are either men, or they are women who never had children. When a mother looks back on her history, she sees a single chain of branches from the uber-mother root of the tree to her. All of those branches are mothers who had at least one child.

Now here is the odd part: imagine that some mothers see their personal historical chain as describing a singular soul being passed down through the generations. They believe that souls can be transferred but not created, and so that when a mother has more than one child, at most one of those children gets a soul.

Yes, this is an odd perspective to have regarding souls, but bear with me. Such an odd mother might wonder which one of her children will inherit her soul. Her beliefs about the answer to this question, and about other facts about this child, might be expressed in a subjective probability distribution. I will call such a distribution a “progeny prob”.

EMS  Second, let’s consider ems, the subject of my book The Age of Em: Work, Love, and Life when Robots Rule the Earth. Ems don’t yet exist, but they might in the future. Each em is an emulation of a particular human brain, and it acts just like that human would in the same subjective situation, even though it actually runs on an artificial computer. Each em is part of an ancestry tree that starts with a root that resulted from scanning a particular human brain.

This em tree branches when copies are made of individual ems, and the leaves of this tree are copies that are erased. Ems vary in many ways, such as in how much wealth they own, how fast their minds run relative to humans, and how long they live before they end or next split into copies. Split events also differ, such as re how many copies are made, what social role each copy is planned to fill, and which copies get what part of the original’s wealth or friends.

An em who looks toward its next future split, and foresees a resulting set of copies, may ask themselves “Which one of those copies will I be?” Of course they will actually become all of those copies. But as human minds never evolved to anticipate splitting, ems may find it hard to think that way. The fact that ems remember only one chain of branches in the past can lead them to think in terms of continuing on in only one future branch. Em “progeny prob” beliefs about who they will become can also include predictions about life details of that copy, such as wealth or speed. These beliefs can also be conditional on particular plans made for this split, such as which copies plan to take which jobs.

QUANTUM  Third, let’s consider quantum states, as seen from the many worlds perspective. We start with a large system of interest, a system that can include observers like humans and ems. This system begins in some “root” quantum state, and afterward experiences many “decoherence events”, with each such event aligned to a particular key parameter, like the spatial location of a particular atom. Soon after each such decoherence event, the total system state typically becomes closely approximated by a weighted sum of component states. Each component state is associated with a different value of the key parameter. Each subsystem of such a component state, including subsystems that describe the mental states of observers, has states that match this key parameter value. For example, if these observers “measured” the location of an atom, then each observer would have a mental state corresponding to their having observed the same particular location.


Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of that if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.

Their paper was published in JBIS a few months later, their theory now has its own wikipedia page, and they have attracted at least 15 news articles (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15). Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and myself, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (∼10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at any time to correct computing-generated bit errors at the constant ideal rate of one bit of negentropy per one bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use at any time without any penalty for early withdrawal.

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity to watch for and be ready to respond to attacks.


Perpetual Motion Via Negative Matter?

One of the most important things we will ever learn about the universe is just how big it is, practically, for our purposes. In the last century we’ve learned that it is far larger than we knew, in a great many ways. At the moment we are pretty sure that it is about 13 billion years old, and that it seems much larger in spatial directions. We have decent estimates for both the total space-time volume we can ever see, and all that we can ever influence.

For each of these volumes, we also have decent estimates of the amount of ordinary matter they contain, how much entropy that now contains, and how much entropy it could create via nuclear reactions. We also have decent estimates of the amount of non-ordinary matter, and of the much larger amount of entropy that matter of all types could produce if collected into black holes.

In addition, we have plausible estimates of how (VERY) long it will take to actually use all that potential entropy. If you recall, matter and volume is what we need to make stuff, and potential entropy, beyond current actual entropy (also known as “negentropy”), is the key resource needed to drive this stuff in desired directions. This includes both biological life and artificial machinery.

Probably the thing we most care about doing with all that stuff in the universe is creating and sustaining minds like ours. We know that this can be done via bodies and brains like ours, but it seems that far more minds could be supported via artificial computer hardware. However, we are pretty uncertain about how much computing power it takes (when done right) to support a mind like ours, and also about how much matter, volume, and entropy it takes (when done right) to produce any given amount of computing power.

For example, in computing theory we don’t even know if P=NP. We think this claim is false, but if true it seems that we can produce vastly more useful computation with any given amount of computing power, which probably means sustaining a lot more minds. Though I know of no concrete estimate of how many more.

It might seem that at least our physics estimates of available potential entropy are less uncertain than this, but I was recently reminded that we actually aren’t even sure that this amount is finite. That is, it might be that our universe has no upper limit to entropy. In which case, one could keep running physical processes (like computers) that increase entropy forever, creating proverbial “perpetual motion machines”. Some say that such machines are in conflict with thermodynamics, but that is only true if there’s a maximum entropy.

Yes, there’s a sense in which a spatially infinite universe has infinite entropy, but that’s not useful for running any one machine. Yes, if it were possible to perpetually create “baby universes”, then one might perpetually run a machine that can fit each time into the entrance from one universe into its descendant universe. But that may be a pretty severe machine size limit, and we don’t actually know that baby universes are possible. No, what I have in mind here is the possibility of negative mass, which might allow unbounded entropy even in a finite region of ordinary space-time.

Within the basic equations of Newtonian physics lie the potential for an exotic kind of matter: negative mass. Just let the mass of some particles be negative, and you’ll see that gravitationally the negative masses push away from each other, but are drawn toward the positive masses, which are drawn toward each other. Other forces can exist too, and in terms of dynamics, it’s all perfectly consistent.

Now today we formally attribute the Casimir effect to spatial regions filled with negative mass/energy, and we sometimes formally treat the absence of a material as another material (think of bubbles in water), and these often formally have negative mass. But other than these, we’ve so far not seen any material up close that acts locally like it has negative mass, and this has been a fine reason to ignore the possibility.

However, we’ve known for a while now that over 95% of the universe seems to be made of unknown stuff that we’ve never seen interact with any of the stuff around us, except via long distance gravity interactions. And most of that stuff seems to be a “dark energy” which can be thought of as having a negative mass/energy density. So negative mass particles seem a reasonable candidate to consider for this strange stuff. And the reason I thought about this possibility recently is that I came across this article by Jamie Farnes, and associated commentary. Farnes suggests negative mass particles may fill voids between galaxies, and crowd around galaxies compacting them, simultaneously explaining galaxy rotation curves and accelerating cosmic expansion.

Apparently, Einstein considered invoking negative mass particles to explain (what he thought was) the observed lack of cosmic expansion, before he switched to a more abstract explanation, which he dropped after cosmic expansion was observed. Some say that Farnes’s attempt to integrate negative mass into general relativity and quantum particle physics fails, and I have no opinion on that. Here I’ll just focus on simpler physics considerations, and presume that there must be some reasonable way to extend the concept of negative mass particles in those directions.

One of the first things one usually learns about negative mass is what happens in the simple scenario wherein two particles with exactly equal and opposite masses start off exactly at rest relative to one another, and have any force between them. In this scenario, these two particles accelerate together in the same direction, staying at the same relative distance, forevermore. This produces arbitrarily large velocities in simple Newtonian physics, and arbitrarily large absolute masses in relativistic physics. This seems a crazy result, and it probably put me off the negative mass idea when I first heard about it.
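Here is a minimal Newtonian sketch of that runaway pair in one dimension (my own toy setup, not from Farnes’s paper): one particle of mass +m and one of mass -m, starting at rest, interacting only via gravity.

```python
# Runaway pair: the positive mass flees the negative mass, which chases it;
# both get the same acceleration each step, so their separation never changes.
G, m, dt = 1.0, 1.0, 0.001
x_pos, v_pos = 0.0, 0.0    # positive-mass particle
x_neg, v_neg = -1.0, 0.0   # negative-mass particle, starting just behind it

for step in range(10_000):
    r = x_pos - x_neg
    # each particle's acceleration depends only on the *other* particle's mass
    a_pos = G * (-m) * (x_neg - x_pos) / abs(r) ** 3  # pushed away from -m
    a_neg = G * (+m) * (x_pos - x_neg) / abs(r) ** 3  # pulled toward +m
    v_pos += a_pos * dt
    v_neg += a_neg * dt
    x_pos += v_pos * dt
    x_neg += v_neg * dt

print(x_pos - x_neg, v_pos)  # separation stays 1.0 while the speed keeps growing
```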

But this turns out to be an extremely unusual scenario for negative mass particles. Farnes did many computer simulations with thousands of gravitationally interacting negative and positive mass particles of exactly equal mass magnitudes. These simulations consistently “reach dynamic equilibrium” and “no runaway particles were detected”. So as a matter of practice, runaway seems quite rare, at least via gravity.

A related worry is that if there were a substantial coupling associated with making pairs of positive and negative mass particles that together satisfy the relevant conservation laws, such pairs would be created often, leading to a rapid and apparently unending expansion in total particle number. But the whole idea of dark stuff is that it only couples very weakly to ordinary matter. So if we are to explain dark stuff via negative mass particles, we can and should postulate no strong couplings that allow easy creation of pairs of positive and negative mass particles.

However, even if the postulate of negative mass particles were consistent with all of our observations of a stable pretty-empty universe (and of course that’s still a big if), the runaway mass pair scenario does at least weakly suggest that entropy may have no upper bound when negative masses are included. The stability we observe only suggests that current equilibrium is “metastable” in the sense of not quickly changing.

Metastability is already known to hold for black holes; merging available matter into a few huge black holes could vastly increase entropy, but that only happens naturally at a very slow rate. By making it happen faster, our descendants might greatly increase their currently available potential entropy. Similarly, our descendants might gain even more potential entropy by inducing interactions between mass and negative mass that would naturally be very rare.

That is, we don’t even know if potential entropy is finite, even within a finite volume. Learning that will be very big news, for good or bad.


The Aristillus Series

There’s a contradiction at the heart of science fiction. Science fiction tends to celebrate the engineers and other techies who are its main fans. But there are two conflicting ways to do this. One is to fill a story with credible technical details, details that matter to the plot, and celebrate characters who manage this detail well. The other approach is to present tech as the main cause of an impressive future world, and of big pivotal events in that world.

The conflict comes from it being hard to give credible technical details about an impressive future world, as we don’t know much about future tech. One can give lots of detail about current tech, but people aren’t very impressed with the world they live in (though they should be). Or one can make up detail about future tech, but that detail isn’t very credible.

A clever way to mitigate this conflict is to introduce one dramatic new tech, and then leave all other tech the same. (Vinge gave a classic example.) Here, readers can be impressed by how big a difference one new tech could make, and yet still revel in heroes who win in part by mastering familiar tech detail. Also, people like me who like to think about the social implications of tech can enjoy a relatively manageable task: guess how one big new tech would change an otherwise familiar world.

I recently enjoyed the science fiction book pair The Aristillus Series: Powers of the Earth, and Causes of Separation, by Travis J I Corcoran (@MorlockP), funded in part via Kickstarter, because it in part followed this strategy. Also, it depicts betting markets as playing a small part in spreading info about war details. In addition, while most novels push some sort of unrealistic moral theme, the theme here is at least relatively congenial to me: nice libertarians seek independence from a mean over-regulated Earth:

Earth in 2064 is politically corrupt and in economic decline. The Long Depression has dragged on for 56 years, and the Bureau of Sustainable Research is making sure that no new technologies disrupt the planned economy. Ten years ago a band of malcontents, dreamers, and libertarian radicals used a privately developed anti-gravity drive to equip obsolete and rusting sea-going cargo ships – and flew them to the moon. There, using real world tunnel-boring-machines and earth-moving equipment, they’ve built their own retreat.

The one big new tech here is anti-gravity, made cheaply from ordinary materials and constructible by ordinary people with common tools. One team figures it out, and for a long time no other team has any idea how to do it, or any remotely similar tech, and no one tries to improve it; it just is.

Attaching antigrav devices to simple refitted ocean-going ships, our heroes travel to the moon, set up a colony, and create a smuggling ring to transport people and stuff there. Aside from those magic antigravity devices, these books are chock full of technical mastery of familiar tech not much beyond our level, like tunnel diggers, guns, space suits, bikes, rovers, crypto signatures, and computer software. These are shown to have awkward gritty tradeoffs, like most real tech does.

Alas, Corcoran messes this up a bit by adding two more magic techs: one superintelligent AI, and a few dozen smarter-than-human dogs. Oh and the same small group is implausibly responsible for saving all three magic techs from destruction. As with antigravity, in each case one team figures it out, no other team has any remotely similar tech, and no one tries to improve them. But these don’t actually matter that much to the story, and I can hope they will be cut if/when this is made into a movie.

The story begins roughly a decade after the moon colony started, when it has one hundred thousand or a million residents. (I heard conflicting figures at different points.) Compared to Earth folk, colonists are shown as enjoying as much product variety, and a higher standard of living. This is attributed to their lower regulation.

While Earth powers dislike the colony, they are depicted at first as being only rarely able to find and stop smugglers. But a year later, when thousands of ships try to fly to the moon all at once from thousands of secret locations around the planet, Earth powers are depicted as being able to find and shoot down 90% of them. Even though this should be harder when thousands fly at once. This change is never explained.

Even given the advantage of a freer economy, I find it pretty implausible that a colony could be built this big and fast with this level of variety and wealth, all with no funding beyond what colonists can carry. The moon is a long way from Earth, and it is a much harsher environment. For example, while colonists are said to have their own chip industry to avoid regulation embedded in Earth chips, the real chip industry has huge economies of scale that make it quite hard to serve only one million customers.

After they acquire antigrav tech, Earth powers go to war with the moon. As the Earth’s economy is roughly ten thousand times larger than the moon’s, it is a mystery why anyone thinks that the moon, without a huge tech advantage, has any chance whatsoever to win this war.

The biggest blunder, however, is that no one in the book imagines using antigrav tech on Earth. But if the cost to ship stuff to the moon using antigrav isn’t crazy high, then antigravity must make it far cheaper to ship stuff around on Earth. Antigrav could also make tall buildings cheaper, allowing much denser city centers. The profits to be gained from these applications seem far larger than from smuggling stuff to a small poor moon colony.

So even if we ignore the AI and smart dogs, this still isn’t a competent extrapolation of what happens if we add cheap antigravity to a world like ours. Which is too bad; that would be an interesting scenario to explore.

Added 5:30p: In the book, antigrav is only used to smuggle stuff to/from the moon, until it is used to send armies to the moon. But demand for smuggling should be far larger between places on Earth. In the book thousands of ordinary people are shown willing to make their own antigrav devices to migrate to the moon. But a larger number should be making such devices to smuggle stuff around on Earth.


All Is Simple Parts Interacting Simply

In physics, I got a BS in ’81, an MS in ’84, and published two peer-reviewed journal articles in ’03 & ’06. I’m not tracking the latest developments in physics very closely, but what I’m about to tell you is very old standard physics that I’m quite sure hasn’t changed. Even so, it seems to be something many people just don’t get. So let me explain it.

There is nothing that we know of that isn’t described well by physics, and everything that physicists know of is well described as many simple parts interacting simply. Parts are localized in space, have interactions localized in time, and interaction effects don’t move through space faster than the speed of light. Simple parts have internal states that can be specified with just a few bits (or qubits), and each part only interacts directly with a few other parts close in space and time. Since each interaction is only between a few bits on a few sides, it must also be simple. Furthermore, all known interactions are mutual, in the sense that the state on each side is influenced by the states of the other sides.

For example, ordinary field theories have a limited number of fields at each point in space-time, with each field having a limited number of degrees of freedom. Each field has a few simple interactions with other fields, and with its own space-time derivatives. With limited energy, this latter effect limits how fast a field changes in space and time.

As a second example, ordinary digital electronics is made mostly of simple logic units, each with only a few inputs, a few outputs, and a few bits of internal state. Typically: two inputs, one output, and zero or one bits of state. Interactions between logic units are via simple wires that force the voltage and current to be almost the same at matching ends.
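
For concreteness, here is a toy Python sketch of such simple parts (my own illustration, not anything from circuit-design practice): each unit has a couple of inputs, one output, and at most one bit of internal state, and anything more interesting comes only from wiring many such units together.

```python
def nand(a: int, b: int) -> int:
    # Two inputs, one output, no internal state.
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # A slightly fancier part, built only by wiring up simple NANDs.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

class DFlipFlop:
    # One data input per clock tick, one output, one bit of internal state.
    def __init__(self) -> None:
        self.state = 0

    def tick(self, d: int) -> int:
        self.state = d  # latch the input bit on each tick
        return self.state

ff = DFlipFlop()
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", ff.tick(xor(a, b)))
```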

As a third example, cellular automata are often taken as a clear simple metaphor for typical physical systems. Each such automaton has a discrete array of cells, each of which has a few possible states. At discrete time steps, the state of each cell is a simple standard function of the states of that cell and its neighbors at the last time step. The famous “game of life” uses a two-dimensional array with one bit per cell.
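
Here is a minimal Python sketch of that game of life rule, just to make “simple standard function of neighbors” concrete (the grid size and starting pattern are arbitrary choices of mine):

```python
def step(grid):
    # One discrete time step: each cell's new one-bit state is a simple
    # standard function of itself and its eight neighbors (edges wrap around).
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    return [[1 if live_neighbors(r, c) == 3
                  or (grid[r][c] and live_neighbors(r, c) == 2) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "glider" pattern on a small 8x8 wrap-around grid, stepped forward a few times.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
print(*["".join("#" if cell else "." for cell in row) for row in grid], sep="\n")
```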

This basic physics fact, that everything is made of simple parts interacting simply, implies that anything complex, able to represent many different possibilities, is made of many parts. And anything able to manage complex interaction relations is spread across time, constructed via many simple interactions built up over time. So if you look at a disk of a complex movie, you’ll find lots of tiny structures encoding bits. If you look at an organism that survives in a complex environment, you’ll find lots of tiny parts with many non-regular interactions.

Physicists have learned that we only ever get empirical evidence about the state of things via their interactions with other things. When such interactions create correlations between the state of one thing and the state of another, we can use that correlation, together with knowledge of one state, as evidence about the other state. If a feature or state doesn’t influence any interactions with familiar things, we could drop it from our model of the world and get all the same predictions. (Though we might include it anyway for simplicity, so that similar parts have similar features and states.)
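
A toy simulation (my own illustration, with made-up numbers) can make this concrete: a detector that interacts with a hidden bit becomes correlated with it, so seeing the detector tells us about the bit; a feature that interacts with nothing stays at its prior no matter what we observe, and so could be dropped without changing any predictions.

```python
import random

random.seed(0)
trials = 200_000
hits = hidden_hits = feature_hits = 0

for _ in range(trials):
    hidden = random.random() < 0.5   # a state we care about
    feature = random.random() < 0.5  # a state that interacts with nothing
    # Interaction: the detector copies the hidden bit 90% of the time.
    detector = hidden if random.random() < 0.9 else not hidden
    if detector:                     # condition on the detector reading "1"
        hits += 1
        hidden_hits += hidden
        feature_hits += feature

print("P(hidden=1  | detector=1) ~", round(hidden_hits / hits, 2))   # ~0.9
print("P(feature=1 | detector=1) ~", round(feature_hits / hits, 2))  # ~0.5, the prior
```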

Not only do we know that in general everything is made of simple parts interacting simply, but for pretty much everything that happens here on Earth we know those parts and interactions in great precise detail. Yes, there are still some areas of physics we don’t fully understand, but we also know that those uncertainties have almost nothing to say about ordinary events here on Earth. For humans and their immediate environments on Earth, we know exactly what all the parts are, what states they hold, and all of their simple interactions. Thermodynamics assures us that there can’t be a lot of hidden states around holding many bits that interact with familiar states.

Now it is true that when many simple parts are combined into complex arrangements, it can be very hard to calculate the detailed outcomes they produce. This isn’t because such outcomes aren’t implied by the math, but because it can be hard to calculate what the math implies. When we can find quantities that are easier to calculate, then as long as the parts and interactions we think are going on are in fact the only things going on, we usually see those quantities come out just as calculated.

Now what I’ve said so far is usually accepted as uncontroversial, at least when applied to the usual parts of our world, such as rivers, cars, mountains, laptops, or ants. But as soon as one claims that all this applies to human minds, suddenly it gets more controversial. People often state things like this:

I am sure that I’m not just a collection of physical parts interacting, because I’m aware that I feel. I know that physical parts interacting just aren’t the kinds of things that can feel by themselves. So even though I have a physical body made of parts, and there are close correlations between my feelings and the states of my body parts, there must be something more than that to me (and others like me). So there’s a deep mystery: what is this extra stuff, where does it arise, how does it change, and so on. We humans care mainly about feelings, not physical parts interacting; we want to know what out there feels so we can know what to care about.

But consider a key question: Does this other feeling stuff interact with the familiar parts of our world strongly and reliably enough to usually be the actual cause of humans making statements of feeling like this?

If yes, this is a remarkably strong interaction, making it quite surprising that physicists have missed it so far. So surprising in fact as to be frankly unbelievable. If this type of interaction were remotely as simple as all the interactions we know, then it should be quite measurable with existing equipment. Any interaction not so measurable would have to be vastly more complex and context dependent than any we’ve ever seen or considered. Thus I’d bet heavily and confidently that no one will measure such an interaction.

But if no, if this interaction isn’t strong enough to explain human claims of feeling, then we have a remarkable coincidence to explain. Somehow this extra feeling stuff exists, and humans also have a tendency to say that it exists, but these happen for entirely independent reasons. The fact that feeling stuff exists isn’t causing people to claim it exists, nor vice versa. Instead humans have some sort of weird psychological quirk that causes them to make such statements, and they would make such claims even if feeling stuff didn’t exist. But if we have a good alternate explanation for why people tend to make such statements, what need do we have of the hypothesis that feeling stuff actually exists? Such a coincidence seems too remarkable to be believed.

Thus it seems hard to square a belief in this extra feeling stuff with standard physics in either case, whether feeling stuff does or does not have strong interactions with ordinary stuff. The obvious conclusion: extra feeling stuff just doesn’t exist.

Note that even if we are only complex arrangements of interacting parts, as social creatures it makes sense for us to care in a certain sense about each others’ “feelings.” Creatures like us maintain an internal “feeling” state that tracks how well things are going for us, with high-satisfied states when things are going well and low-dissatisfied states when things are going badly. This internal state influences our behavior, and so social creatures around us want to try to infer this state, and to influence it. We may, for example, try to notice when our allies have a dissatisfied state and look for ways to help them to be more satisfied. Thus we care about others’ “feelings”, are wary of false indicators of them, and study behaviors in some detail to figure out what reliably indicates these internal states.

In the modern world we now encounter a wider range of creature-like things with feeling-related surface appearances. These include video game characters, movie characters, robots, statues, paintings, stuffed animals, and so on. And so it makes sense for us to apply our careful-study habits to ask which of these are “real” feelings, in the sense of being those where it makes sense to apply our evolved feeling-related habits. But while it makes sense to be skeptical that any particular claimed feeling is “real” in this sense, it makes much less sense to apply this skepticism to “mere” physical systems. After all, as far as we know all familiar systems, and all the systems they interact with to any important degree, are mere physical systems.

If everything around us is explained by ordinary physics, then a detailed examination of the ordinary physics of familiar systems will eventually tell us everything there is to know about the causes and consequences of our feelings. It will say how many different feelings we are capable of, what outside factors influence them, and how our words and actions depend on them.

What more is or could there be to know about feelings than this? For example, you might ask: does a system have “feelings” if it has some of the same internal states as a human, but where those states have no dependence on outside factors and no influence on the world? But questions like this seem to me less about the world and more about which concepts are the most valuable to use in this space. While crude concepts served us well in the past, as we encounter a wider range of creature-like systems than before, we will need to refine our concepts for this new world.

But, again, that seems to be more about which feelings concepts are useful in this new world, and much less about where feelings “really” are in the world. Physics can tell us all there is to say about that.

(This post is a followup to my prior post on Sean Carroll’s Big Picture.)


Once More, With Feeling

Sean Carroll’s new best-selling book The Big Picture runs the risk of preaching to the choir. To my mind, it gives a clear and effective explanation of the usual top physicists’ world view. On religion, mysticism, free will, consciousness, meaning, morality, etc. (The usual view, but an unusually readable, articulate, and careful explanation.) I don’t disagree, but then I’m very centered in this physicist view.

I read through dozens of reviews, and none of them even tried to argue against his core views! Yet I have many economist colleagues who often give me grief for presuming this usual view. And I’m pretty sure the publication of this book (or of previous similar books) won’t change their minds. Which is a sad commentary on our intellectual conversation; we mostly see different points of view marketed separately, with little conversation between proponents.

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. You would have been just as likely to say it if it were not true. What could possibly be the point of hypothesizing and forming beliefs about states about which one can never get any info?

If you have learned anything about overcoming bias, you should be very suspicious of such beliefs, and eager for points of view where you don’t have to rely on possibly-false and info-free beliefs. Carroll presents such a point of view:

There’s nothing more disheartening than someone telling you that the problem you think is most important and central isn’t really a problem at all. As poetic naturalists, that’s basically what we’ll be doing. .. Philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems. The phrase “experiencing the redness of red” is part of a higher-level vocabulary we use to talk about the emergent behavior of the underlying physical system, not something separate from the physical system.

There’s not much to it, but that’s as it should be. I agree with Carroll; there literally isn’t anything to talk about here.
