Tag Archives: Future

Elois Ate Your Flying Car

J. Storrs Hall’s book Where Is My Flying Car?: A Memoir of Future Past told me things I didn’t know about flying cars. The book is long, and says many things about tech and the future, including some with which I disagree. But his main thesis is a contrarian one that I’ve heard many times from engineers over my lifetime. Which is good, because by putting it all in one place, I can now tell you about it, and tell you that I agree:

We have had a very long-term trend in history going back at least to the Newcomen and Savery engines of 300 years ago, a steady trend of about 7% per year growth in usable energy available to our civilization. …

One invariant in futurism before roughly 1980 was that predictions of social change overestimated, and of technological change underestimated, what actually happened. Now this invariant itself has been broken. With the notable exception of information technology, technological change has slowed and social change has mounted its crazy horse. …

In the 1970s, the centuries-long growth trend in energy (the “Henry Adams curve”) flatlined. Most of the techno-predictions from 50s and 60s SF had assumed, at least implicitly, that it would continue. The failed predictions strongly correlate to dependence on plentiful energy. American investment and innovation in transportation languished; no new developments of comparable impact have succeeded highways and airliners. …

The war on cars was handed off from beatniks to bureaucrats in the 70s. Supersonic flight was banned. Bridge building had peaked in the 1960s. … The nuclear industry found its costs jacked up by an order of magnitude and was essentially frozen in place. Interest and research in nuclear physics languished. … Green fundamentalism has become the unofficial state church of the US (and to an even greater extent Western Europe). …

In technological terms, bottom line is simple: we could very easily have flying cars today. Indeed we could have had them in 1950, but for the Depression and WWII. The proximate reason we don’t have them now is the Henry Adams curve flatline; the reasons for the flatline have taken a whole book to explore. We have let complacent nay-sayers metamorphose from pundits uttering “It can’t be done” predictions a century ago, into bureaucrats uttering “It won’t be done” prescriptions today. …

Nanotech would enable cheap home isotopic separation. Short of that, it would enable the productivity of the entire US military-industrial complex in an area the size of, say, Singapore. It’s available to anyone who has the sense to follow Feynman’s pathway and work in productive machinery instead of ivory-tower tiddley-winks. The amount of capital needed for a decent start is probably similar to a well-equipped dentist’s office.

If our pre-1970 energy use trend had continued, we’d now use ~30 times as much energy per person, mostly via nuclear power. Which is enough energy for cheap small flying cars. The raw fuel cost of nuclear power is crazy cheap; almost all the cost today is for the reactors that convert fuel into usable power, a cost that has been made and kept high via crazy regulation and liability. Like the crazy restrictive regulations that now limit innovation in cars and planes, that destroyed the small plane market, and that prevented the arrival of flying cars.

Anything that goes into a certificated airplane costs ten times what the thing would otherwise. (As a pilot and airplane owner, I have personal experience of this.) It’s a lot like the high cost of human medical drugs compared with the very same drugs for veterinary use.… Building of airports remains so regulated (not just by the FAA) that only one major new one (KDEN) has been built [since 1990]. …

It seems virtually certain that if we had had [recent] cultural and regulatory environment … from, say, 1910, the development of universal private automobiles would have been suppressed. … By the end of the 70s there was virtually nothing about a car that was not dictated by regulation.

With nuclear power, we’d have had far more space activity by now. Without it, most innovation in energy intensive things has gone into energy efficiency, and into smaller ecological footprints. Which has cut growth and prevented many things. The crazy regulation that killed nuclear energy is quite unjustified, not only because according to standard estimates nuclear causes far fewer deaths, but also because standard estimates are greatly inflated via wide use of a “linear no threshold model”, regarding which there are great doubts:

Several places are known in Iran, India and Europe [with high] natural background radiation … However, there is no evidence of increased cancers or other health problems arising from these high natural levels. The millions of nuclear workers that have been monitored closely for 50 years have no higher cancer mortality than the general population but have had up to ten times the average dose. People living in Colorado and Wyoming have twice the annual dose as those in Los Angeles, but have lower cancer rates. Misasa hot springs in western Honshu, a Japan Heritage site, attracts people due to having high levels of radium, with health effects long claimed, and in a 1992 study the local residents’ cancer death rate was half the Japan average.

To explain this dramatic change of regulation and litigation, Hall says culture changed:

Western culture had essentially succeeded in supplying the needs of the physical layers of [Maslow’s] hierarchy, including the security of a well-run society; and that the shift to the Eloi [of the Well’s Time Machine story] could be thought of as people beginning to take those things—the Leave It To Beaver suburban life—for granted, and beginning to spend the bulk of their energy, efforts, and concerns on the love, esteem, and self-actualization levels. … “Make Love, Not War” slogan of the 60s … neatly sums up the Eloi shift from bravery to sensuality. …

The nuclear umbrella meant that economic, political, and moral strength of the society was no longer at a premium.

I’ll say more about explaining this cultural change in another post.


Russell’s Human Compatible

My school turned its mail system back on as we started a new semester, and a few days ago out popped Stuart Russell’s book Human Compatible (published last Oct.), with a note inside dated March 31. Here’s my review, a bit late as a result.

Let me focus first on what I see as its core thesis, and then discuss less central claims.

Russell seems to say that we still have a lot of time, and that he’s only asking for a few people to look into the problem:

The arrival of superintelligent AI is inherently unpredictable. … My timeline of, say, eighty years is considerably more conservative than that of the typical AI researcher. … If just one conceptual breakthrough were needed, …superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I’m, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one. (pp.77-78)

Scott Alexander … summed it up brilliantly: … The skeptic’s position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers,” meanwhile [take exactly the same position.] (pp.169-170)

Yet his ask is actually much larger: unless we all want to die, AI and related disciplines must soon adopt a huge and expensive change to their standard approach: we must stop optimizing using simple fixed objectives, like the way a GPS tries to minimize travel time, or a trading program tries to maximize profits. Instead we must make systems that attempt to look at all the data on what all humans have ever done to infer a complex continually-updated integrated representation of all human preferences (and meta-preferences) over everything, and use that complex representation to make all automated decisions. Modularity be damned.


Sim Argument Confidence

Nick Bostrom once argued that you must choose between three options regarding the possibility that you are now actually living in and experiencing a simulation created by future folks to explore their past: (A) it’s true, you are most likely a sim person living in a sim, either of this sort or another, (B) future folk will never be able to do this, because it just isn’t possible, or they die first, or they never get rich and able enough, or (C) future folk can do this, but they do not choose to do it much, so that most people experiencing a world like yours are real humans now, not future sim people.

This argument seems very solid to me: future folks either do it, can’t do it, or choose not to. If you ask folks to pick from these options you get a simple pattern of responses:

Here we see 40% in denial, hoping for another option, and the others about equally divided among the three options. But if you ask people to estimate the chances of each option, a different picture emerges. Lognormal distributions (which ignore the fact that chances can’t exceed 100%) are decent fits to these distributions, and here are their medians:

So when we look at the people who are most confident that each option is wrong, we see a very different picture. Their strongest confidence, by far, is that they can’t possibly be living in a sim, and their weakest confidence, by a large margin, is that the future will be able to create sims. So if we go by confidence, poll respondents’ favored answer is that the future will either die soon or never grow beyond limited abilities, or that sims are just impossible.

My answer is that the future mostly won’t choose to sim us:

I doubt I’m living in a simulation, because I doubt the future is that interested in simulating us; we spend very little time today doing any sort of simulation of typical farming or forager-era folks, for example. (More)

If our descendants become better adapted to their new environment, they are likely to evolve to become rather different from us, so that they spend much less of their income on sim-like stories and games, and what sims they do like should be overwhelmingly of creatures much like them, which we just aren’t. Furthermore, if such creatures have near subsistence income, and if a fully conscious sim creature costs nearly as much to support as future creatures cost, entertainment sims containing fully conscious folks should be rather rare. (More)

If we look at all the ways that we today try to simulate our past, such as in stories and games, our interest in sims of particular historical places and times fades quickly with our cultural distance from them, and especially with declining influence over our culture. We are especially interested in Ancient Greece, Rome, China, and Egypt, because those places were most like us and most influenced us. But even so, we consume very few stories and games about those eras. And regarding all the other ancient cultures even less connected to us, we show far less interest.

As we look back further in time, we can track the decline in both world population and in our interest in stories and games about those eras. During the farming era population declined by about a factor of two every millennium, but it seems to me that our interest in stories and games of those eras declines much faster. There’s far less than half as much interest in 500AD as in 1500AD, and that factor continues for each 1000 year step backward.

So even if future folk make many sims of their ancestors, people like us probably aren’t often included. Unless perhaps we happen to be especially interesting.


Remote Work Specializes

We seem on track to spend far more on preventing pandemic health harm than we will suffer from it, which seems like too much spending given the apparent low elasticity of harm w.r.t. prevention. But an upside is that some of this prevention effort is being invested in remote work, which is helping to develop and improve such capacities. Which matters because remote work (a.k.a. telecommuting) is my guess for the most important neglected trend over the next 30 years. (At least of trends we can foresee now.)

My recent polls put remote work at #24 out of 44 future trends, which IMHO greatly underrates it. AGI, biotech, crypto, space, and quantum computing are far overrated (due to drama & status). Automation matters but will continue steadily as it has for many decades, not causing much trend deviation. Global warming, non-carbon energy, the rise of Asia, falling fertility, and the rise of cybersecurity and privacy are important trends, but their trend deviation implications tend more to be correctly anticipated. However, I see remote work as big, as mattering more than, and as driving, trends in migration, augmented/virtual reality, and self-driving cars. And remote work implications seem neglected and unappreciated.

Remote work has been a topic of speculation for many decades, so likely somewhere out there is an author who sees it right. But I haven’t yet found that author. I’ve recently read a dozen or so recent discussions of remote work, and all of them seem to miss the main reason that remote work will be such a big deal: specialization due to agglomeration (i.e., more interaction options). The two most formal math analyses I could find actually explicitly assume that remote work, in contrast to traditional work, produces no agglomeration gains! Some other discussions do get closer to the truth.


What Future Areas Matter Most?

I made a list of 44 possibly important future areas, and just did 22 Twitter polls (with N from 379 to 1178), each time asking this question re 4 areas:

Over next 30 years, changes in which are likely to matter most?

I fit the answers to a simple model wherein respondents either pick randomly (~26% of time) or pick in proportion to each area’s (non-negative) “strength”. Here are the estimated area strengths, relative to the strongest set to 100:

Some comments:

  1. The area with the largest modeling error is migration, so politics may be messing that up.
  2. Governance mechanisms looks surprisingly strong, especially relative to its media attention.
  3. The top 7 areas hold half the total strength, and there’s a big drop to #8. ~20% is in automation, AGI, and self-driving cars.
  4. 19 areas have strengths lying within about the same factor of two. So many things seem important.
  5. Relative to these strength ratings, it seems to me that media focus is only roughly correlated. Media seems disproportionately focused on areas involving more direct social conflict.
  6. Areas add roughly linearly. For example, biotech arguably includes life extension, meat, materials, and pandemics, and its strength is near their strength sum.
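The fit described above can be sketched as follows. Everything here (the polls, strengths, and noise-free responses) is a made-up illustration of the model, not the real poll data, and the ~26% random-pick rate is plugged in as an assumption rather than estimated.

```python
# Minimal sketch of the poll model described above: with probability eps a
# respondent picks one of the 4 offered areas uniformly at random; otherwise
# they pick in proportion to the areas' non-negative "strengths".  All data
# here (polls, strengths, eps) are made-up illustrations, not the real polls.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_areas = 8
true_s = rng.uniform(1.0, 10.0, n_areas)   # hypothetical true strengths
eps = 0.26                                 # assumed random-pick probability

def shares(s, poll):
    """Predicted vote shares for the 4 areas in one poll."""
    sub = s[poll]
    return eps / 4 + (1 - eps) * sub / sub.sum()

polls = [rng.choice(n_areas, size=4, replace=False) for _ in range(30)]
observed = [shares(true_s, p) for p in polls]   # noise-free, for illustration

def loss(log_s):                   # optimize in log space to keep s positive
    s = np.exp(log_s)
    return sum(((shares(s, p) - o) ** 2).sum()
               for p, o in zip(polls, observed))

fit = minimize(loss, np.zeros(n_areas), method="L-BFGS-B")
est = np.exp(fit.x)
est *= 100.0 / est.max()           # rescale so the strongest area scores 100
```

Note that strengths are only identified up to an overall scale, since predicted shares depend only on strength ratios within each poll; hence the rescaling so the strongest area reads 100, as in the post.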

Future Timeline, in Econ Growth Units

Polls on the future often ask by what date one expects to see some event X. That approach, however, is sensitive to expectations on overall rates of progress. If you expect progress to speed up a lot, but aren’t quite sure when that will start, your answers for quite different post-speed-up events should all cluster around the date at which you expect the speed-up to start.

To avoid this problem, I just did 20 related Twitter polls on the distant future, all using econ growth factors as the timeline unit: “By how much more will world economy grow between now and the 1st time when X”.

POLLS ON FUTURE (please retweet)

World economy (& tech ability) increased by ~10x between each: 3700BC, 800BC, 1700, 1895, 1966, 2018. In each poll, assume more growth, & give best (median) guess of how much more grow by then.

Note that I’ve required a key assumption: growth continues indefinitely.

The four possible growth factor answers for each poll were <100, 100-10K, 10K-1M, and “>1M or never”. If the average growth rate from 1966 to 2018 continues into the future, then these factor milestones of 100, 10K, and 1M will be reached in the years 2122, 2226, and 2330. That is, the world economy has lately been growing by roughly a factor of 100 every 104 years.
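Under that constant-growth assumption, converting a growth factor into a calendar year is a one-liner; a minimal sketch:

```python
# Convert a further world-economy growth factor into a calendar year, assuming
# the recent constant rate continues: ~10x growth per 52 years (1966 to 2018),
# i.e. ~100x per 104 years.
import math

def year_of_growth_factor(factor, base_year=2018, tenfold_years=52):
    """Year when the given further growth factor is reached."""
    return base_year + tenfold_years * math.log10(factor)

print(year_of_growth_factor(100))   # → 2122.0
print(year_of_growth_factor(1e4))   # → 2226.0
print(year_of_growth_factor(1e6))   # → 2330.0
```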

I’ve found that lognormals often fit well to poll response distributions over positive numbers that vary by many orders of magnitude. So I’ve fit these poll responses to a lognormal distribution, plus a chance that the event never happens. Here are the poll % answers, % chance it never happens, and median dates (if it happens) assuming constant growth. (Polls had 95 to 175 responses each.)
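The fitting procedure described above can be sketched roughly as follows, with synthetic response fractions standing in for the real poll data; it assumes answers are lognormal in growth factor (normal in log10 units), plus a point mass on “never” that lands in the top answer bin.

```python
# Sketch of fitting a lognormal-plus-"never" model to 4-bin poll answers, as
# described above.  Bin edges follow the poll options (<100, 100-10K, 10K-1M,
# ">1M or never"); the "observed" fractions are synthetic, not real poll data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

edges = np.array([2.0, 4.0, 6.0])        # log10 of 100, 10K, 1M

def bin_probs(mu, sigma, p_never):
    cdf = norm.cdf(edges, mu, sigma)
    happens = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    probs = (1.0 - p_never) * happens
    probs[-1] += p_never                 # "never" answers land in the top bin
    return probs

observed = bin_probs(3.0, 1.5, 0.20)     # synthetic data, known true params

def loss(params):
    mu, log_sigma, logit_never = params
    p_never = 1.0 / (1.0 + np.exp(-logit_never))
    return ((bin_probs(mu, np.exp(log_sigma), p_never) - observed) ** 2).sum()

fit = minimize(loss, [2.0, 0.0, -1.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 10000})
mu_hat = fit.x[0]
sigma_hat = np.exp(fit.x[1])
p_never_hat = 1.0 / (1.0 + np.exp(-fit.x[2]))
median_factor = 10 ** mu_hat             # median growth factor, if it happens
```

With four bins summing to one there are three independent observed fractions and three free parameters, so a clean fit can recover all of them.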

Many of these estimates seem reasonable, or at least not crazy. On the whole I’d put this up against any other future timeline I know of. But I do have some complaints. For example, 21 years seems way too short for when <10% of human protein comes from animals. And 35 years until <20% of energy comes from fossil fuels seems more possible, but still rather ambitious.

I also find it implausible that median estimates for these four events cluster so closely: ems appearing, frozen humans being revived, AI earning 9x as much as humans, and AI earning 9x as much as humans+ems. They all fall in the same ~2x growth factor range (factors 670-1350), and thus all appear in the same constant-growth 16 year period, 2165-2181. As if these are very similar problems, or even the same problem, and as if they reject what seems obvious to me: it is much harder for AI to compete cost-effectively with ems than with humans. (Note also that these are far later dates than often touted in AI forecasts.)

My main complaint, however, is of overly high chances that things never happen. Such high chances make sense if you think something might actually be completely impossible. For example, a 46% chance of never finding aliens makes sense if aliens just aren’t there to be found. A 25% chance that human lifespan never goes over 1000 years might result if that is biologically impossible, and an 11% chance of no colony to another star could fit with such travel being physically impossible.

A 31% chance that nukes never give >50% of energy could result from them being fundamentally less efficient than collecting sunlight. And a 6% chance that AI never beats humans, a 12% chance that we never get ems, and a 19% chance that AI never beats ems could all make sense if you think AI or ems are just impossible. (Though I’m not sure these numbers are consistent with each other.) Most of these impossibility chances seem too high to me, but not crazy.

But high estimates of “never” make a lot less sense for things we know to be possible. If there is a small chance of an event happening each time period (or each growth doubling period), then unless that chance is falling exponentially toward zero, the event will almost surely happen eventually, at least if the underlying system persists indefinitely.
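A quick numeric check of this point (the per-period chances here are arbitrary illustrations, not estimates):

```python
# A small constant per-period chance makes the event nearly certain eventually,
# while only chances that shrink fast enough (summably, e.g. geometrically)
# leave a substantial lasting chance of "never".
import math

def p_never(per_period_chances):
    """Chance the event never happens, with independent periods."""
    return math.prod(1.0 - p for p in per_period_chances)

constant = [0.01] * 1000                            # 1% chance every period
shrinking = [0.01 * 0.5 ** n for n in range(1000)]  # chance halves each period

print(p_never(constant))    # ≈ 4.3e-5: "never" is vanishingly unlikely
print(p_never(shrinking))   # ≈ 0.98: "never" stays quite likely
```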

So I can’t believe a 50% chance that the human population never falls to <50% of its prior peak. Some predict that will result from the current fertility decline, and it could also happen when ems become possible and many humans then choose to convert to becoming ems. Both of these scenarios could fit with the estimated median growth factor 152, date 2132. But a great many other events could also cause such a population decline later, and forever is a long time.

The situation is even worse for an event where we have theoretical arguments that it must happen eventually. For example, continued exponential economic growth seems incompatible with our physical universe, where there’s a speed of light limit and finite entropy and atoms per unit volume. So it seems crazy to have a 22% chance that growth never slows down. Oddly, the median estimate is that if that does happen it will happen within a century.

The 13% chance that the off-Earth economy never gets larger than the on-Earth economy seems similarly problematic, as we can be quite sure that the universe outside of Earth has more resources to support a larger economy.

For many of these other estimates, we don’t have as strong a theoretical reason to think they must happen eventually, but they still seem like things that each generation or era can choose for itself. So it just takes one era to choose it for it to happen. This casts doubt on the 39% chance that the biosphere never falls to <10% of current level, the 28% chance that ten nukes are never used in war, the 24% chance that authorities never monitor >90% of spoken & written words, and the 22% chance we never have whole-Earth government.

The 28% chance that we never see >1/2 of world economy destroyed in less than a doubling time is more believable given that we’ve never seen that happen in our history. But in light of that, the median of 70 years till it happens seems too short.

Perhaps these high estimates of “never” would be suppressed if respondents had to directly pick “never”, or if polls explicitly offered more and larger growth factor options, such as 1M-1B, 1B-1T, 1T-1Q, etc. It might also help if respondents could express their chances that such high levels might ever be reached separately from their expectations for when events would happen given that such levels are reached. These would require more than Twitter polls can support, but seem reasonably cheap should anyone want to support such efforts.


We Colonize The Sun First

Space is romantic; most people are overly obsessed with space in their view of the future. Even so, these remain valid questions:

  1. When will the off-Earth economy be larger than the on-Earth economy?
  2. Where in the solar system will that off-Earth economy be then?

Here is a poll I just did on this last question:

(“Closer” here really means ease of transport, not spatial distance.)

On (1), for many centuries the economic gains from clumping have been very important, and we’ve only spent a few percent of income on energy (and cooling) and raw materials. Also, human bodies are fragile and designed for Earth, making space quite expensive for humans. As long as all these conditions remain, economic activity beyond Earth will remain a small fraction of our total economy.

However, eventually ems or other kinds of human level robots will appear and quickly come to dominate the economy. Space is much easier for them. And eventually, continued (exponential) growth will cause Earth to run out of stuff. At recent rates of growth probably not for at least several centuries, but it will happen.

On (2), human level robots probably appear before Earth runs out of stuff. So even though most science fiction looks at where humans would want to be off Earth, to think about this point in time you should be thinking instead about robots; where will robots want to be? Robots can do fine in a much wider range of physical environments. So ask less which locations are comfortable and safe for robots, and ask more where is there useful stuff to attract them.

Clumping will probably remain important; the big question is how important. The more important is clumping, the longer that the off-Earth economy will be concentrated near Earth, even when other locations are much more attractive in other ways.

Since the main reason to leave Earth at this point in time is that it is running out of energy (and cooling) and raw materials, the key attractions of other locations in the Solar System, aside from nearness to Earth, is their abundance of energy (and cooling) and raw materials.

Robots running reversible computing hardware should spend about as much on making their hardware as they do on the energy (and cooling) to run it. And the sum of these expenses should be a big fraction of an em or other robot economy. So from this point of view, both energy and raw materials are important, and about equally important.
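That equal-split claim drops out of a simple cost model. In this sketch the constants are arbitrary illustrations: assume energy cost per operation grows linearly with speed s (the standard reversible-computing tradeoff), while amortized hardware cost per operation falls as 1/s.

```python
# Cost per operation when amortized hardware costs h/s and energy costs k*s,
# where s is operations per unit time.  Minimizing gives s* = sqrt(h/k), at
# which point hardware and energy spending are exactly equal.  h and k are
# arbitrary illustrative constants, not real hardware figures.
import math

h, k = 4.0, 0.25

def cost_per_op(s):
    return h / s + k * s

s_star = math.sqrt(h / k)            # from c'(s) = -h/s**2 + k = 0

hardware_spend = h / s_star
energy_spend = k * s_star
print(s_star)                        # → 4.0
print(hardware_spend, energy_spend)  # → 1.0 1.0 (equal spending streams)
```

Whatever the constants, the two terms are equal at the optimum, which is the “spend about as much on making hardware as on energy” claim above.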

However, it seems to me that planet Earth has a lot more raw materials than it does energy. Our planet is huge; its energy is more limited. And raw materials can be recycled, while energy cannot. So my guess is that Earth will run out of energy long before it runs out of raw materials. Thus the main attraction of non-Earth locations, besides nearness to Earth, will be energy (and cooling). And for energy, the overwhelmingly obvious location is the Sun. Which has the vast majority of mass as well, and is also on average located “closer” to most things.

Yes, the sun is very hot, and while at some cost of refrigeration robots could live in or on the Sun itself, it is probably cheaper to live a bit further away, where materials are stable without refrigeration. But that would still be a lot closer to the Sun than to anything else. Dense robot cities on Earth would have already pushed to find computer hardware that can function efficiently at high temperatures. Being near the Sun makes it a lot easier to collect the Sun’s energy without paying extra energy transport costs. And once others are there, they all gain economies of clumping by being together.

Hydrogen and helium are plentiful in the Sun, and for other elements it is probably cheaper to transport mass to the Sun than to transport energy away from it. Probably mostly from Mercury for a long while. Some say computers are more efficient when run at low temperatures, but I don’t see that. So it seems to me that once our descendants go beyond merely clumping around Earth to be near activity there, the main place they will want to go is near the Sun.

Oddly, though space colonization is a hugely popular topic in science fiction, I can’t find examples of stories set in this scenario, of most activity cramming close to the Sun. Some stories mention energy collection happening there, but rarely much other activity, and the story never happens among dense Sun-near activity. As in the poll results above, most stories focus on activity moving in the other direction, away from the Sun. Oh there are a few stories about colonies on Mercury, and of scientific or military visits to the Sun. But not the Sun as the main place that our descendants hang out near after Earth.

In fact, “colonizing the sun” is a well known example of a crazy impossible idea, considered worthy of ridicule. (“Oh, we’ll do it at night, when it’s cooler.”) So the actual most likely scenario, according to my analysis, is also the one thought the most crazy, and never the setting of stories. Weird.

Added 9July: Some tell me that atoms for fusion can be gained more easily from large gas giant planets than from the Sun, at least until those run out, and that they expect a long period when that is the cheapest way to make energy. For the period when those atoms, or that energy, is transported to near Earth, that is consistent with what I’ve said above.

But if the economy is pushed to move first en masse closer to those gas giants to avoid transport costs of energy or atoms, that would contradict my claim above that the Sun is the first place our descendants move after Earth. Note that we are now entering an era of mass solar energy, which will advance that tech more than fusion tech.


Three Futures

Recently, 1539 people responded to this poll:

This pattern looks intriguingly bimodal. Are there in some sense two essentially different stories about the future? So I did a more detailed poll. Though only ~95 responded (thank you!), that is enough to reveal an apparently trimodal pattern:

Respondents were asked to estimate the number of future creatures that most people today would call “human”, and also the number who would likely call themselves “human”, even if we today might disagree. The y-axis here is in log10 units. In those units, world population today is 9.89, and the number of humans who have ever lived is ~11.03. So the highest number here, 20, is larger than the square of the number alive today.

As you can see, respondents expect a lot more future creatures who call themselves “human”, relative to creatures we would call “human”. And substantial fractions seem to insist that these numbers are higher than any specific number you might mention (20 here). Among the rest, the most popular answer is the 11-12 range (i.e., 0.1-1 trillion “humans”). Note that this can’t be due to a belief that we face huge risks over the next few centuries; that belief suggests the answer <11.

When I set aside the highest (>20) response, and fit a mixture of two lognormals to the rest of each response distribution, I find that regarding creatures we would call “human”, 50.1% of weight goes to a median estimate of 11.9, with (in log10 units) a sigma variation of only 0.22 around that median, 39% of weight to an estimate 13.4, with a much larger sigma of 2.5, and 11% weight to >20, i.e., very high. Regarding creatures who call themselves “human”, a 45% weight is on estimate 12.0 with sigma 1.2, a 30% weight is on estimate 16.6 with sigma 2.0, and 25% weight on >20. (Such a lognormal mix fit to the first, 4-option poll gives roughly consistent results: medians of 11.6, 15.4 with 61% weight on the low estimate, when both are forced to have the same sigma of 0.75.)

Thus, responses seem to reflect either three discrete categories of future scenarios, or three styles of analysis:

  1. ~1/2 say there will only ever be ~10x as many humans as there have been (~100x as many as living now), most all creatures who we’d call “human”. Then it all ends.
  2. ~1/4 say our descendants go on to much larger but still limited populations. There are ~300x as many humans as have ever lived, and ~1000x that many weirder creatures, though estimates here range quite widely, over ~4 factors of 10 (i.e., “orders of magnitude”).
  3. ~1/4 say our descendants grow much more, beyond squaring the number who have ever lived. Probably far far beyond. But ~1/2 of these expect that few of these creatures will be ones most of us would call “human”.

The big question: does this trimodal distribution result from a real discreteness in our actual futures and the risks we will face there, or does it mostly reflect different psychological stances toward the future?


Unending Winter Is Coming

Toward the end of the TV series Game of Thrones, a big long (multi-year) winter was coming, and while everyone should have been saving up for it, they were instead spending lots to fight wars. Because when others spend on war, that forces you to spend on war, and then suffer a terrible winter. The long term future of the universe may be much like this, except that future winter will never end! Let me explain.

The key universal resource is negentropy (and time), from which all others can be gained. For a very long time almost all life has run on the negentropy in sunshine landing on Earth, but almost all of that has been spent in the fierce competition to live. The things that do accumulate, such as innovations embodied in genomes, can’t really be spent to survive. However, as sunlight varies by day and season, life does sometimes save up resources during one part of a cycle, to spend in the other part of a cycle.

Humans have been growing much more rapidly than nature, but we also have had strong competition, and have also mostly only accumulated the resources that can’t directly be spent to win our competitions. We do tend to accumulate capital in peacetime, but every so often we have a big war that burns most of that up. It is mainly our remaining people and innovations that let us rebuild.

Over the long future, our descendants will gradually get better at gaining faster and cheaper access to more resources. Instead of drawing on just the sunlight coming to Earth, we’ll take all light from the Sun, and then we’ll take apart the Sun to make engines that we better control. And so on. Some of us may even gain long-term views that prioritize the very long run.

However, it seems likely that our descendants will be unable to coordinate on universal scales to prevent war and theft. If so, then every so often we will have a huge war, at which point we may burn up most of the resources that can be easily accessed on the timescale of that war. Between such wars, we’d work to increase the rate at which we could access resources during a war. And our need to watch out for possible war will force us to continually spend a non-trivial fraction of our accessible resources watching and staying prepared for war.

The big problem is: the accessible universe is finite, and so we will only ever be able to access a finite amount of negentropy. No matter how much we innovate. While so far we’ve mainly been drawing on a small steady flow of negentropy, eventually we will get better and faster access to the entire stock. The period when we use most of that stock is our universe’s one and only “summer”, after which we face an unending winter. This implies that when a total war shows up, we are at risk of burning up large fractions of all the resources that we can quickly access. So the larger a fraction of the universe’s negentropy that we can quickly access, the larger a fraction of all resources that we will ever have that we will burn up in each total war.

And even between the wars, we will need to watch out and stay prepared for war. If one uses negentropy to do stuff slowly and carefully, then the work that one can do with a given amount of negentropy is typically proportional to the inverse of the rate at which one does that work. This is true for computers, factories, pipes, drag, and much else. So ideally, the way to do the most with a fixed pot of negentropy is to do it all very slowly. And if the universe will last forever, that seems to put no bound on how much we can eventually do.

Alas, given random errors due to cosmic rays and other fluctuations, there is probably a minimum speed for doing the most with some negentropy. So the amount we can eventually do may be big, but it remains finite. However, that optimal pace is probably many orders of magnitude slower than our current speeds, letting our descendants do a lot.
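As a toy illustration of this tradeoff (all constants here are hypothetical, chosen only to make the shape visible): suppose each operation dissipates negentropy in proportion to the rate r at which you run, while guarding against cosmic-ray-style errors costs negentropy in inverse proportion to r, since slower operations sit exposed to noise for longer. Then the total operations you can squeeze from a fixed negentropy budget peak at an intermediate rate, not at the slowest possible pace.

```python
import math

def total_ops(E, r, c=1e-9, k=1e-3):
    """Operations achievable from a negentropy budget E at rate r.

    Cost per operation = c*r (dissipation, grows with speed)
                       + k/r (error-correction overhead, grows as you slow down).
    The constants c and k are hypothetical, for illustration only.
    """
    return E / (c * r + k / r)

# Cost per operation is minimized at r* = sqrt(k/c): much slower than
# "fast" rates, but not arbitrarily slow.
r_star = math.sqrt(1e-3 / 1e-9)  # 1000.0 in these made-up units

E = 1.0
# Running too fast wastes the budget on dissipation...
assert total_ops(E, r_star) > total_ops(E, 10 * r_star)
# ...but running too slow wastes it on error correction.
assert total_ops(E, r_star) > total_ops(E, r_star / 10)
```

The optimum exists because both speeding up and slowing down carry costs; where the floor actually sits depends on real error rates, which this sketch only gestures at.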

The problem is, descendants who go maximally slow will make themselves very vulnerable to invasion and theft. For an analogy, imagine how severe our site security problems would be today if any one person could temporarily “grow” and become as powerful as a thousand people, but only after a one hour delay. Any one intruder who grew while onsite could wreak havoc and then be gone within an hour, before local security forces could grow to respond. Similarly, when most future descendants run very slow, one who suddenly chose to run very fast might have an outsized influence before the others could effectively respond.

So the bottom line is that if war and theft remain possible for our descendants, the rate at which they do things will be much faster than the most efficient speed. In order to adequately watch out for and respond to attacks, they will have to run fast, and thus more quickly use up their available stocks of resources, such as stars. And when their stocks run out, the future will have run out for them. Like in a Game of Thrones scenario after a long winter war, they would then starve.

Now it is possible that there will be future resources that simply cannot be exploited quickly. Such as perhaps big black holes. In this case some of our descendants could last for a very long time slowly sipping on such supplies. But their activity levels at that point would be much lower than their rates before they used up all the other faster-access resources.

Okay, let’s put this all together into a picture of the long term future. Today we are growing fast, and getting better at accessing more kinds of resources faster. Eventually our growth in resource use will reach a peak. At that point we will use resources much faster than today, and also much faster than what would be the most efficient rate if we could all coordinate to prevent war and theft. Maybe a billion times faster or more. Fearing war, we will keep spending to watch and prepare for war, and then every once in a while we will burn up most accessible resources in a big war. After using up the faster-access resources, we will then switch to lower activity levels, using resources that we just can’t extract as fast, no matter how clever we are. Then we will use up each one of those much faster than optimal, with activity levels falling after each source is used up.

That is, unless we can prevent war and theft, our long term future is an unending winter, wherein we use up most of our resources in early winter wars, and then slowly die and shrink and slow and war as the winter continues, on to infinity. And as a result we will do much less than we could have otherwise; perhaps a billion times less or more. (Though still vastly more than we have done so far.) And this is all if we are lucky enough to avoid existential risk, which might destroy it all prematurely, leading instead to a fully-dead empty eternity.

Happy holidays.


How To Prep For War

In my last two posts I’ve noted that while war deaths have fallen greatly since the world wars, the magnitude and duration of this fall isn’t that far out of line with previous falls over the last four centuries, falls that have always been followed by rises, as part of a regular cycle of war. I also noted that the theoretical arguments that have been offered to explain why this trend will long continue, in a deviation from the historical pattern, seem weak. Thus there seems to be a substantial and neglected chance of a lot more war in the next century. I’m not the only one who says this; so do many war experts.

If a lot more war is coming, what should you do personally, to help yourself, your family, and your friends? (Assuming your goal is mainly to personally survive and prosper.) While we can’t say that much specifically about future war’s style, timing, or participants, we know enough to suggest some general advice.

1. Over the last century most war deaths have not been battle deaths, and the battle death share has fallen. Thus you should worry less about dying in battle, and more about other ways to die.

2. War tends to cause the most harm near where its battles happen, and near concentrations of supporting industrial and human production. This means you are more at risk if you live near the nations that participate in the war, and in those nations near dense concentrations and travel routes, that is, near major cities and roads.

3. If there are big pandemics or economic collapse, you may be better off in more isolated and economically self-sufficient places. (That doesn’t include outer space, which is quite unlikely to be economically self-sufficient anytime soon.) Of course there is a big tradeoff here, as these are the places we expect to do less well in the absence of war.

4. Most of your expected deaths may happen in scenarios where nukes are used. There’s a big literature on how to prepare for and avoid harms from nukes, so I’ll just refer you to that. Ironically, you may be more at risk from being hurt by nukes in places that have nukes to retaliate with. But you might be more at risk from being enslaved or otherwise dominated if your place doesn’t have nukes.

5. Most of our computer systems have poor security, and so are poorly protected against cyberwar. This is mainly because software firms are usually more eager to be first to market than to add security, which most customers don’t notice at first. If this situation doesn’t change much, then you should be wary of depending too much on standard connected computer systems. For essential services, rely on disconnected, non-standard, or high-security-investment systems.

6. Big wars tend to induce a lot more taxation of the rich, to pay for wars. So have your dynasty invest in having more children, rather than fewer richer ones, or invest in assets that are hidden from tax authorities. Or bother less to invest for the long run.

7. The biggest wars so far, the world wars and the Thirty Years War, have been driven by strong ideologies, such as communism and Catholicism. So help your descendants avoid succumbing to strong ideologies, while also avoiding the appearance of publicly opposing locally popular versions. And try to stay away from places that seem more likely to succumb.

8. While old ideologies still have plenty of fire, the big new ideology on the block seems related to woke identity. While this seems to inspire sufficiently confident passions for war, it seems far from clear who would fight who and how in a woke war. This scenario seems worth more thought.

Added 27 July:

9. If big governance changes and social destruction are coming, that may create opportunities for the adoption of more radical social reforms. And that can encourage us to work more on developing such reforms today.
