Tag Archives: Future

How To Prep For War

In my last two posts I’ve noted that while war deaths have fallen greatly since the world wars, the magnitude and duration of this fall isn’t that far out of line with previous falls over the last four centuries, falls that have always been followed by rises, as part of a regular cycle of war. I also noted that the theoretical arguments offered to explain why this trend will long continue, in a deviation from the historical pattern, seem weak. Thus there seems to be a substantial and neglected chance of a lot more war in the next century. I’m not the only one who says this; so do many war experts.

If a lot more war is coming, what should you do personally, to help yourself, your family, and your friends? (Assuming your goal is mainly to personally survive and prosper.) While we can’t say that much specifically about future war’s style, timing, or participants, we know enough to suggest some general advice.

1. Over the last century most war deaths have not been battle deaths, and the battle death share has fallen. Thus you should worry less about dying in battle, and more about other ways to die.

2. War tends to cause the most harm near where its battles happen, and near concentrations of supporting industrial and human production. This means you are more at risk if you live near the nations that participate in the war, and in those nations near dense concentrations and travel routes, that is, near major cities and roads.

3. If there are big pandemics or economic collapse, you may be better off in more isolated and economically self-sufficient places. (That doesn’t include outer space, which is quite unlikely to be economically self-sufficient anytime soon.) Of course there is a big tradeoff here, as these are the places we expect to do less well in the absence of war.

4. Most of your expected deaths may happen in scenarios where nukes are used. There’s a big literature on how to prepare for and avoid harms from nukes, so I’ll just refer you to that. Ironically, you may be more at risk from being hurt by nukes in places that have nukes to retaliate with. But you might be more at risk from being enslaved or otherwise dominated if your place doesn’t have nukes.

5. Most of our computer systems have poor security, and so are poorly protected against cyberwar. This is mainly because software firms are usually more eager to be first to market than to add security, which most customers don’t notice at first. If this situation doesn’t change much, then you should be wary of depending too much on standard connected computer systems. For essential services, rely on disconnected, non-standard, or high-security-investment systems.

6. Big wars tend to induce a lot more taxation of the rich, to pay for them. So have your dynasty invest more in having more children, relative to having fewer richer kids, or invest in assets that are hidden from tax authorities. Or bother less to invest for the long run.

7. The biggest wars so far, the world wars and the Thirty Years War, have been driven by strong ideologies, such as communism and Catholicism. So help your descendants avoid succumbing to strong ideologies, while also avoiding the appearance of publicly opposing locally popular versions. And try to stay away from places that seem more likely to succumb.

8. While old ideologies still have plenty of fire, the big new ideology on the block seems related to woke identity. While this seems to inspire sufficiently confident passions for war, it seems far from clear who would fight whom, and how, in a woke war. This scenario seems worth more thought.

Added 27 July:

9. If big governance changes and social destruction are coming, that may create opportunities for the adoption of more radical social reforms. And that can encourage us to work more on developing such reforms today.


Big War Remains Possible

A recent poll suggests that a majority of my Twitter followers think war will decline: that in the next 80 years we won’t see a 15-year period with a war death rate above the median level we’ve seen over the last four centuries.

To predict a big deviation from the simple historical trend, one needs some sort of basis in theory. Alas, the theoretical arguments that I’ve heard for war optimism seem quite inadequate. I thus suspect much wishful thinking here.

For example, some say the world economy today is too interdependent for war. But interdependent economies have long gone to war; consider the world wars in Europe, or the American Civil War. Some say that we don’t risk war because it is very destructive of complex fragile physical capital and infrastructure. But while such capital was indeed destroyed during the world wars, the places most hurt rebounded quickly, as they had good institutional and human capital.

Some note that international alliances make war less likely between alliance partners. But they make war more likely between alliances. Some suggest that better info tells us more about rivals today, and so we are less likely to misjudge rival abilities and motives. But there still seems plenty of room for errors here as “brinkmanship” is a key dynamic. Also, this doesn’t prevent powers from investing in war abilities to gain advantages via credible threats of war.

Some point to a reduced willingness by winners to gain concrete advantages via the ancient strategies of raping and enslaving losers, and demanding great tribute. But we still manage to find many other motives for war, and there are no fundamental obstacles to reviving ancient strategies; tribute is still quite feasible, as is slavery. Also, the peak war periods so far have been associated with ideology battles, and we still have plenty of those.

Some say nuclear weapons have made war harder. But that is only true between pairs of nations that both have nukes, which isn’t most nation pairs. Pairs of nations with nukes can still fight big wars, there are more such pairs today than before, over 80 years there’s plenty of time for some pair to pick a fight, and nuclear war casualties may be enormous.
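To see how little it takes for this to add up, consider a toy calculation; the pair count and per-pair annual war chance below are purely illustrative assumptions, not estimates:

```python
# Toy model: N nuclear-armed pairs, each with an independent annual
# chance p of fighting a war, over a span of Y years.
N, p, Y = 10, 0.001, 80  # illustrative assumptions only

p_no_war = (1 - p) ** (N * Y)
print(f"chance of at least one nuclear-pair war: {1 - p_no_war:.0%}")  # ~55%
```

Even a one-in-a-thousand annual chance per pair, which sounds reassuringly small, accumulates to better-than-even odds over this time span.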

I suspect that many are relying on modern propaganda about our moral superiority over our ancestors. But while we mostly count humans of the mid twentieth century as morally superior to humans from prior centuries, that was the period of peak war mortality.

I also suspect that many are drawing conclusions about war from long term trends regarding other forms of violence, as in slavery, crime, and personal relations, as well as from apparently lower public tolerance for war deaths and overall apparent disapproval and reluctance regarding war. But just before World War I we had also seen such trends:

Then, as now, Europe had lived through a long period of relative peace, … rapid progress … had given humanity a sense of shared interests that precluded war, … world leaders scarcely believed a global conflagration was possible. (more)

The world is vast, eighty years is a long time, and the number of possible global social & diplomatic scenarios over such a period is vast. So it seems crazy to base predictions of future war rates on inside-view calculations from particular current stances, deals, or inclinations. The raw historical record, and its large long-term fluctuations, should weigh heavily on our minds.


Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse each tech category, the longer the old tech should remain in wide use.
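As a toy illustration of this claim, suppose the new tech’s advantage in each environment is drawn from a normal distribution; the old tech then survives in roughly the fraction of environments where that draw comes out negative. (The normal distribution and the numbers below are illustrative assumptions, not estimates.)

```python
from statistics import NormalDist

def old_tech_share(mean_advantage: float, advantage_sd: float) -> float:
    """Fraction of environments where the old tech still wins, if the new
    tech's advantage varies normally across environments."""
    return NormalDist(mean_advantage, advantage_sd).cdf(0.0)

print(old_tech_share(1.0, 0.2))  # ~0.0000003: narrow variation, old tech vanishes
print(old_tech_share(1.0, 2.0))  # ~0.31: wide variation, old tech keeps many niches
```

The same average advantage can thus yield either near-total dominance or a durable niche for the old tech, depending only on how much that advantage varies.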

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster to build, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.


Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of that resource if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.
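For intuition on where that huge factor comes from: Landauer’s principle says that irreversibly erasing one bit costs at least kT ln 2 of free energy, so the number of erasures a fixed energy store can pay for scales as 1/T. A minimal sketch; the far-future background temperature below is an illustrative assumption chosen to match their claimed multiplier:

```python
from math import log

K_BOLTZMANN = 1.380649e-23  # J/K

def max_bit_erasures(energy_joules: float, temp_kelvin: float) -> float:
    """Landauer bound: most irreversible bit erasures a given free-energy
    budget can pay for at temperature T."""
    return energy_joules / (K_BOLTZMANN * temp_kelvin * log(2))

energy = 1.0  # e.g., one joule stored in a battery
now = max_bit_erasures(energy, 2.7)        # today's cosmic background, ~2.7 K
later = max_bit_erasures(energy, 2.7e-30)  # assumed far-future background
print(f"multiplier from waiting: {later / now:.0e}")  # 1e+30
```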

Their paper was published in JBIS a few months later, their theory now has its own Wikipedia page, and they have attracted at least 15 news articles (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15). Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and myself, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (∼10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at any time to correct computer-generated bit errors, at the constant ideal rate of one bit of negentropy per bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use at any time, without any penalty for early withdrawal.

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity, to watch for and be ready to respond to attacks.
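To state that slow-computing scaling explicitly (a standard result from the adiabatic-computing literature, quoted here without derivation): the free energy dissipated per gate operation, and hence the negentropy spent, falls inversely with the time allotted to it,

\[ \Delta S_{\text{per op}} \;\propto\; \frac{1}{\tau}, \]

so running at half speed spends roughly half the negentropy per operation, though defense and competition limit how slow one can afford to go.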


How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task.


Distant Future Tradeoffs

Over the last day on Twitter, I ran three similar polls. One asked:

Software design today faces many tradeoffs, e.g., getting more X costs less Y, or vice versa. By comparison, will distant future tradeoffs be mostly same ones, about as many but very different ones, far fewer (so usually all good features X,Y are feasible together), or far more?

Four answers were possible: mostly same tradeoffs, as many but mostly new, far fewer tradeoffs, and far more tradeoffs. The other two polls replaced “Software” with “Physical Device” and “Social Institution.”

I now see these four answers as picking out four future scenarios. A world with fewer tradeoffs is Utopian: there you can get more of everything you want, without having to give up other things. In contrast, a world with many more tradeoffs is more Complex. A world where most of the tradeoffs are like those today is Familiar. And a world where the current tradeoffs are replaced by new ones is Radical. Using these terms, here are the resulting percentages:

The polls got from 105 to 131 responses each, with an average entry percentage of 25%, so I’m willing to believe differences of 10% or more. The most obvious results here are that only a minority foresee a familiar future in any area, and answers vary greatly; there is little consensus on which scenarios are more likely.

Beyond that, the strongest pattern I see is that respondents foresee more complexity, relative to a utopian lack of tradeoffs, at higher levels of organization. Physical devices are the most utopian, social institutions are the most complex, and software sits in the middle. The other possible result I see is that respondents foresee a less familiar social future. 

I also asked:

Which shapes the world more in the long run: the search for arrangements allowing better compromises regarding many complex tradeoffs, or fights between conflicting groups/values/perspectives?

In response, 43% said search for tradeoffs while 30% said value conflicts, and 27% said hard to tell. So these people see tradeoffs as mattering a lot.  

These respondents seriously disagree with science fiction, which usually describes relatively familiar social worlds in visibly changed physical contexts (and can’t be bothered to have an opinion on software). They instead say that the social world will change the most, becoming the most complex and/or radical. Oh brave new world, that has such institutions in it!


How Does Brain Code Differ?

The Question

We humans have been writing “code” for many decades now, and as “software eats the world” we will write a lot more. In addition, we can also think of the structures within each human brain as “code”, code that will also shape the future.

Today the code in our heads (and bodies) is stuck there, but eventually we will find ways to move this code to artificial hardware. At which point we can create the world of brain emulations that is the subject of my first book, Age of Em. From that point on, these two categories of code, and their descendant variations, will have near equal access to artificial hardware, and so will compete on relatively equal terms to take on many code roles. System designers will have to choose which kind of code to use to control each particular system.

When designers choose between different types of code, they must ask themselves: which kinds of code are more cost-effective in which kinds of applications? In a competitive future world, the answer to this question may be the main factor that decides the fraction of resources devoted to running human-like minds. So to help us envision such a competitive future, we should also ask: where will different kinds of code work better? (Yes, non-competitive futures may be possible, but harder to arrange than many imagine.)

To think about which kinds of code win where, we need a basic theory that explains their key fundamental differences. You might have thought that much has been written on this, but alas I can’t find much. I do sometimes come across people who think it obvious that human brain code can’t possibly compete well anywhere, though they rarely explain their reasoning much. As this claim isn’t obvious to me, I’ve been trying to think about this key question of which kinds of code win where. In the following, I’ll outline what I’ve come up with. But I still hope someone will point me to useful analyses that I’ve missed.

In the following, I will first summarize a few simple differences between human brain code and other code, then offer a deeper account of these differences, then suggest an empirical test of this account, and finally consider what these differences suggest for which kinds of code will be more cost-effective where.


Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible. 

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong.


Perpetual Motion Via Negative Matter?

One of the most important things we will ever learn about the universe is just how big it is, practically, for our purposes. In the last century we’ve learned that it is far larger than we knew, in a great many ways. At the moment we are pretty sure that it is about 13 billion years old, and that it seems much larger in spatial directions. We have decent estimates for both the total space-time volume we can ever see, and all that we can ever influence.

For each of these volumes, we also have decent estimates of the amount of ordinary matter they contain, how much entropy that now contains, and how much entropy it could create via nuclear reactions. We also have decent estimates of the amount of non-ordinary matter, and of the much larger amount of entropy that matter of all types could produce if collected into black holes.

In addition, we have plausible estimates of how (VERY) long it will take to actually use all that potential entropy. If you recall, matter and volume are what we need to make stuff, and potential entropy beyond current actual entropy (also known as “negentropy”) is the key resource needed to drive this stuff in desired directions. This includes both biological life and artificial machinery.

Probably the thing we most care about doing with all that stuff in the universe is creating and sustaining minds like ours. We know that this can be done via bodies and brains like ours, but it seems that far more minds could be supported via artificial computer hardware. However, we are pretty uncertain about how much computing power it takes (when done right) to support a mind like ours, and also about how much matter, volume, and entropy it takes (when done right) to produce any given amount of computing power.

For example, in computing theory we don’t even know if P=NP. We think this claim is false, but if it were true, it seems that we could produce vastly more useful computation with any given amount of computing power, which probably means sustaining a lot more minds. Though I know of no concrete estimate of how many more.

It might seem that our physics estimates of available potential entropy are at least less uncertain than this, but I was recently reminded that we actually aren’t even sure that this amount is finite. That is, it might be that our universe has no upper limit to entropy. In which case, one could keep running physical processes (like computers) that increase entropy forever, creating proverbial “perpetual motion machines”. Some say that such machines are in conflict with thermodynamics, but that is only true if there’s a maximum entropy.

Yes, there’s a sense in which a spatially infinite universe has infinite entropy, but that’s not useful for running any one machine. Yes, if it were possible to perpetually create “baby universes”, then one might perpetually run a machine that can fit each time into the entrance from one universe into its descendant universe. But that may be a pretty severe machine size limit, and we don’t actually know that baby universes are possible. No, what I have in mind here is the possibility of negative mass, which might allow unbounded entropy even in a finite region of ordinary space-time.

Within the basic equations of Newtonian physics lie the potential for an exotic kind of matter: negative mass. Just let the mass of some particles be negative, and you’ll see that gravitationally the negative masses push away from each other, but are drawn toward the positive masses, which are drawn toward each other. Other forces can exist too, and in terms of dynamics, it’s all perfectly consistent.

Today we formally attribute the Casimir effect to spatial regions filled with negative mass/energy, and we sometimes formally treat the absence of a material as another material (think of bubbles in water), and these often formally have negative mass. But other than these, we’ve so far not seen any material up close that acts locally like it has negative mass, and this has been a fine reason to ignore the possibility.

However, we’ve known for a while now that over 95% of the universe seems to be made of unknown stuff that we’ve never seen interact with any of the stuff around us, except via long distance gravity interactions. And most of that stuff seems to be a “dark energy” which can be thought of as having a negative mass/energy density. So negative mass particles seem a reasonable candidate to consider for this strange stuff. And the reason I thought about this possibility recently is that I came across this article by Jamie Farnes, and associated commentary. Farnes suggests negative mass particles may fill voids between galaxies, and crowd around galaxies compacting them, simultaneously explaining galaxy rotation curves and accelerating cosmic expansion.

Apparently, Einstein considered invoking negative mass particles to explain (what he thought was) the observed lack of cosmic expansion, before he switched to a more abstract explanation, which he dropped after cosmic expansion was observed. Some say that Farnes’s attempt to integrate negative mass into general relativity and quantum particle physics fails, and I have no opinion on that. Here I’ll just focus on simpler physics considerations, and presume that there must be some reasonable way to extend the concept of negative mass particles in those directions.

One of the first things one usually learns about negative mass is what happens in the simple scenario wherein two particles with exactly equal and opposite masses start off exactly at rest relative to one another, and have any force between them. In this scenario, these two particles accelerate together in the same direction, staying at the same relative distance, forevermore. This produces arbitrarily large velocities in simple Newtonian physics, and arbitrarily large absolute masses in relativistic physics. This seems a crazy result, and it probably put me off the negative mass idea when I first heard about it.
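To make the runaway pair concrete, here is a minimal 1D numerical sketch of that scenario under Newtonian gravity; the units, with G = 1 and |m| = 1, are illustrative assumptions:

```python
# A positive mass starts at x=1, an exactly opposite negative mass at x=0.
# Each particle's gravitational acceleration depends only on the OTHER mass:
# a_i = G * m_other / r^2, directed toward the other particle.
G, m, dt = 1.0, 1.0, 0.001
x_pos, x_neg = 1.0, 0.0
v_pos = v_neg = 0.0  # start at relative rest

for _ in range(100_000):  # integrate to t = 100
    r = x_pos - x_neg
    a_pos = G * (-m) / r**2 * (-1.0)  # pushed away from the negative mass
    a_neg = G * (+m) / r**2 * (+1.0)  # pulled toward the positive mass
    v_pos += a_pos * dt; v_neg += a_neg * dt
    x_pos += v_pos * dt; x_neg += v_neg * dt

print(f"separation: {x_pos - x_neg:.3f}")  # stays at 1.000
print(f"shared velocity: {v_pos:.1f}")     # ~100, growing without bound
```

The negative mass chases the positive mass, which flees; both accelerate in the same direction at fixed separation, so their shared velocity grows without limit.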

But this turns out to be an extremely unusual scenario for negative mass particles. Farnes did many computer simulations with thousands of gravitationally interacting negative and positive mass particles of exactly equal mass magnitudes. These simulations consistently “reach dynamic equilibrium” and “no runaway particles were detected”. So as a matter of practice, runaway seems quite rare, at least via gravity.

A related worry is that if there were a substantial coupling associated with making pairs of positive and negative mass particles that together satisfy relative conservation laws, such pairs would be created often, leading to a rapid and apparently unending expansion in total particle number. But the whole idea of dark stuff is that it only couples very weakly to ordinary matter. So if we are to explain dark stuff via negative mass particles, we can and should postulate no strong couplings that allow easy creation of pairs of positive and negative mass particles.

However, even if the postulate of negative mass particles were consistent with all of our observations of a stable pretty-empty universe (and of course that’s still a big if), the runaway mass pair scenario does at least weakly suggest that entropy may have no upper bound when negative masses are included. The stability we observe only suggests that current equilibrium is “metastable” in the sense of not quickly changing.

Metastability is already known to hold for black holes; merging available matter into a few huge black holes could vastly increase entropy, but that only happens naturally at a very slow rate. By making it happen faster, our descendants might greatly increase their currently available potential entropy. Similarly, our descendants might gain even more potential entropy by inducing interactions between mass and negative mass that would naturally be very rare.
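For a sense of the black hole numbers: Bekenstein–Hawking entropy grows as the square of mass,

\[ S_{BH} \;=\; \frac{4\pi G k}{\hbar c}\,M^{2}, \]

so merging two holes of mass $M$ into one of mass $2M$ (ignoring energy radiated away) takes total entropy from $2\,S(M)$ to $4\,S(M)$, and merging dilute ordinary matter into a few huge holes gains vastly more; the natural rates of such mergers are just very slow.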

That is, we don’t even know if potential entropy is finite, even within a finite volume. Learning that will be very big news, for good or bad.


The Aristillus Series

There’s a contradiction at the heart of science fiction. Science fiction tends to celebrate the engineers and other techies who are its main fans. But there are two conflicting ways to do this. One is to fill a story with credible technical details, details that matter to the plot, and celebrate characters who manage this detail well. The other approach is to present tech as the main cause of an impressive future world, and of big pivotal events in that world.

The conflict comes from it being hard to give credible technical details about an impressive future world, as we don’t know much about future tech. One can give lots of detail about current tech, but people aren’t very impressed with the world they live in (though they should be). Or one can make up detail about future tech, but that detail isn’t very credible.

A clever way to mitigate this conflict is to introduce one dramatic new tech, and then leave all other tech the same. (Vinge gave a classic example.) Here, readers can be impressed by how big a difference one new tech could make, and yet still revel in heroes who win in part by mastering familiar tech detail. Also, people like me who like to think about the social implications of tech can enjoy a relatively manageable task: guess how one big new tech would change an otherwise familiar world.

I recently enjoyed the science fiction book pair The Aristillus Series: Powers of the Earth, and Causes of Separation, by Travis J I Corcoran (@MorlockP), funded in part via Kickstarter, because it in part followed this strategy. Also, it depicts betting markets as playing a small part in spreading info about war details. In addition, while most novels push some sort of unrealistic moral theme, the theme here is at least relatively congenial to me: nice libertarians seek independence from a mean over-regulated Earth:

Earth in 2064 is politically corrupt and in economic decline. The Long Depression has dragged on for 56 years, and the Bureau of Sustainable Research is making sure that no new technologies disrupt the planned economy. Ten years ago a band of malcontents, dreamers, and libertarian radicals used a privately developed anti-gravity drive to equip obsolete and rusting sea-going cargo ships – and flew them to the moon. There, using real world tunnel-boring-machines and earth-moving equipment, they’ve built their own retreat.

The one big new tech here is anti-gravity, made cheaply from ordinary materials and constructible by ordinary people with common tools. One team figures it out, and for a long time no other team has any idea how to do it, or any remotely similar tech, and no one tries to improve it; it just is.

Attaching antigrav devices to simple refitted ocean-going ships, our heroes travel to the moon, set up a colony, and create a smuggling ring to transport people and stuff there. Aside from those magic antigravity devices, these books are chock full of technical mastery of familiar tech not much beyond our level, like tunnel diggers, guns, space suits, bikes, rovers, crypto signatures, and computer software. These are shown to have awkward gritty tradeoffs, like most real tech does.

Alas, Corcoran messes this up a bit by adding two more magic techs: one superintelligent AI, and a few dozen smarter-than-human dogs. Oh and the same small group is implausibly responsible for saving all three magic techs from destruction. As with antigravity, in each case one team figures it out, no other team has any remotely similar tech, and no one tries to improve them. But these don’t actually matter that much to the story, and I can hope they will be cut if/when this is made into a movie.

The story begins roughly a decade after the moon colony started, when it has one hundred thousand or a million residents. (I heard conflicting figures at different points.) Compared to Earth folk, colonists are shown as enjoying as much product variety, and a higher standard of living. This is attributed to their lower regulation.

While Earth powers dislike the colony, they are depicted at first as being only rarely able to find and stop smugglers. But a year later, when thousands of ships try to fly to the moon all at once from thousands of secret locations around the planet, Earth powers are depicted as being able to find and shoot down 90% of them. Even though this should be harder when thousands fly at once. This change is never explained.

Even given the advantage of a freer economy, I find it pretty implausible that a colony could be built this big and fast with this level of variety and wealth, all with no funding beyond what colonists can carry. The moon is a long way from Earth, and it is a much harsher environment. For example, while colonists are said to have their own chip industry to avoid regulation embedded in Earth chips, the real chip industry has huge economies of scale that make it quite hard to serve only one million customers.

After they acquire antigrav tech, Earth powers go to war with the moon. As the Earth’s economy is roughly ten thousand times larger than the moon’s, without a huge tech advantage it is a mystery why anyone thinks the moon has any chance whatsoever to win this war.

The biggest blunder, however, is that no one in the book imagines using antigrav tech on Earth. But if the cost to ship stuff to the moon using antigrav isn’t crazy high, then antigravity must make it far cheaper to ship stuff around on Earth. Antigrav could also make tall buildings cheaper, allowing much denser city centers. The profits to be gained from these applications seem far larger than from smuggling stuff to a small poor moon colony.

So even if we ignore the AI and smart dogs, this still isn’t a competent extrapolation of what happens if we add cheap antigravity to a world like ours. Which is too bad; that would be an interesting scenario to explore.

Added 5:30p: In the book, antigrav is only used to smuggle stuff to/from the moon, until it is used to send armies to the moon. But demand for smuggling should be far larger between places on Earth. In the book thousands of ordinary people are seen willing to make their own antigrav devices to migrate to the moon. But a larger number should be making such devices to smuggle stuff around on Earth.
