Tag Archives: Future

What Future Areas Matter Most?

I made a list of 44 possibly important future areas, and just did 22 Twitter polls (with N from 379 to 1178), each time asking this question re 4 areas:

Over next 30 years, changes in which are likely to matter most?

I fit the answers to a simple model wherein respondents either pick randomly (~26% of time) or pick in proportion to each area’s (non-negative) “strength”. Here are the estimated area strengths, relative to the strongest set to 100:
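For concreteness, here is a minimal sketch (in Python, with made-up poll shares rather than my actual data) of that kind of fit: each poll’s predicted shares mix a uniform random-pick term with a strength-proportional term, and we search for the strengths and random-pick rate that best match the observed shares.

```python
# Sketch only: hypothetical poll data, not the actual responses.
import numpy as np
from scipy.optimize import minimize

# Each poll: (indices of the 4 areas shown, observed answer shares).
polls = [
    ([0, 1, 2, 3], [0.40, 0.30, 0.20, 0.10]),
    ([0, 4, 5, 6], [0.35, 0.25, 0.25, 0.15]),
]
n_areas = 7

def predicted_shares(strengths, q, areas):
    # With prob q pick uniformly among the 4 options; otherwise pick in
    # proportion to the shown areas' strengths.
    s = strengths[np.asarray(areas)]
    return q / 4 + (1 - q) * s / s.sum()

def loss(params):
    q = 1 / (1 + np.exp(-params[0]))   # random-pick rate, kept in (0, 1)
    strengths = np.exp(params[1:])     # strengths, kept non-negative
    return sum(
        np.sum((predicted_shares(strengths, q, areas) - np.array(shares)) ** 2)
        for areas, shares in polls
    )

fit = minimize(loss, x0=np.zeros(1 + n_areas), method="Nelder-Mead")
q_hat = 1 / (1 + np.exp(-fit.x[0]))
strengths_hat = np.exp(fit.x[1:])
print(q_hat)                                      # estimated random-pick rate
print(100 * strengths_hat / strengths_hat.max())  # strengths, strongest scaled to 100
```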

Some comments:

  1. The area with the largest modeling error is migration, so politics may be messing that up.
  2. Governance mechanisms looks surprisingly strong, especially relative to its media attention.
  3. The top 7 areas hold half the total strength, and there’s a big drop to #8. ~20% is in automation, AGI, and self-driving cars.
  4. 19 areas have strengths lying within about the same factor of two. So many things seem important.
  5. Relative to these strength ratings, it seems to me that media focus is only roughly correlated. Media seems disproportionately focused on areas involving more direct social conflict.
  6. Areas add roughly linearly. For example, biotech arguably includes life extension, meat, materials, and pandemics, and its strength is near the sum of their strengths.

Future Timeline, in Econ Growth Units

Polls on the future often ask by what date one expects to see some event X. That approach, however, is sensitive to expectations on overall rates of progress. If you expect progress to speed up a lot, but aren’t quite sure when that will start, your answers for quite different post-speed-up events should all cluster around the date at which you expect the speed-up to start.

To avoid this problem, I just did 20 related Twitter polls on the distant future, all using econ growth factors as the timeline unit: “By how much more will world economy grow between now and the 1st time when X”.

POLLS ON FUTURE (please retweet)

World economy (& tech ability) increased by ~10x between each: 3700BC, 800BC, 1700, 1895, 1966, 2018. In each poll, assume more growth, & give best (median) guess of how much more grow by then.

Note that I’ve required a key assumption: growth continues indefinitely.

The four possible growth factor answers for each poll were <100, 100-10K, 10K-1M, and “>1M or never”. If the average growth rate from 1966 to 2018 continues into the future, then the factor milestones of 100, 10K, and 1M will be reached in the years 2122, 2226, and 2330. That is, the world economy has lately been growing by roughly a factor of 100 every 104 years.
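As a quick check of that arithmetic, here is a small snippet converting growth factors into dates under the constant-growth assumption (roughly 100x per 104 years); the results are only as exact as that rough rate.

```python
import math

def factor_to_year(factor, base_year=2018, rate_factor=100, rate_years=104):
    # Year by which the world economy has grown by `factor` past the base year,
    # assuming constant exponential growth at roughly 100x per 104 years.
    return base_year + rate_years * math.log(factor, rate_factor)

for f in (100, 1e4, 1e6):
    print(f, round(factor_to_year(f)))   # -> 2122, 2226, 2330

# The same conversion gives the dates quoted below; growth factors
# 152, 670, and 1350 land at roughly 2131-2132, 2165, and 2181.
print(round(factor_to_year(152)), round(factor_to_year(670)), round(factor_to_year(1350)))
```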

I’ve found that lognormals often fit well to poll response distributions over positive numbers that vary by many orders of magnitude. So I’ve fit these poll responses to a lognormal distribution, plus a chance that the event never happens. Here are the poll % answers, % chance it never happens, and median dates (if it happens) assuming constant growth. (Polls had 95 to 175 responses each.)
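Here is a minimal sketch (hypothetical shares, not the real poll numbers) of the kind of fit just described: a lognormal over growth factors, plus a separate probability mass on “never”, matched to the four answer bins of one poll.

```python
# Sketch only: made-up answer shares for a single poll.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

shares = np.array([0.20, 0.35, 0.25, 0.20])   # <100, 100-10K, 10K-1M, >1M-or-never
edges = np.log10([100, 1e4, 1e6])             # bin boundaries, in log10 growth units

def predicted(params):
    mu, log_sigma, logit_never = params
    sigma = np.exp(log_sigma)
    p_never = 1 / (1 + np.exp(-logit_never))
    cdf = norm.cdf(edges, loc=mu, scale=sigma)            # lognormal = normal in log10 units
    bins = np.diff(np.concatenate(([0.0], cdf, [1.0])))   # 4 bin probabilities, given "happens"
    bins = (1 - p_never) * bins
    bins[-1] += p_never                                   # "never" answers fall in the last option
    return bins

def loss(params):
    return np.sum((predicted(params) - shares) ** 2)

fit = minimize(loss, x0=[3.0, 0.0, 0.0], method="Nelder-Mead")
mu, sigma, p_never = fit.x[0], np.exp(fit.x[1]), 1 / (1 + np.exp(-fit.x[2]))
print(10 ** mu, sigma, p_never)  # median growth factor (if it happens), spread, chance of never
```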

Many of these estimates seem reasonable, or at least not crazy. On the whole I’d put this up against any other future timeline I know of. But I do have some complaints. For example, 21 years seems way too short for when <10% of human protein comes from animals. And 35 years until <20% of energy comes from fossil fuels seems more possible, but still rather ambitious.

I also find it implausible that median estimates for these four events cluster so closely: ems appearing, frozen humans being revived, AI earning 9x as much as humans, and AI earning 9x as much as humans plus ems. They are all in the same ~2x growth factor range (factors 670-1350), and thus all appear in the same constant-growth 16 year period, 2165-2181. It is as if respondents see these as very similar problems, or even the same problem, and reject what seems obvious to me: it is much harder for AI to compete cost-effectively with ems than with humans. (Note also that these are far later dates than often touted in AI forecasts.)

My main complaint, however, concerns overly high chances that things never happen. Such high chances make sense if you think something might actually be completely impossible. For example, a 46% chance of never finding aliens makes sense if aliens just aren’t there to be found. A 25% chance that human lifespan never goes over 1000 might result if that is biologically impossible, and an 11% chance of no colony to another star could fit with such travel being physically impossible.

A 31% chance that nukes never give >50% of energy could result from them being fundamentally less efficient than collecting sunlight. And a 6% chance that AI never beats humans, a 12% chance that we never get ems, and a 19% chance that AI never beats ems could all make sense if you think AI or ems are just impossible. (Though I’m not sure these numbers are consistent with each other.) Most of these impossibility chances seem too high to me, but not crazy.

But high estimates of “never” make a lot less sense for things we know to be possible. If there is a small chance of an event happening each time period (or each growth doubling period), then unless that chance is falling exponentially toward zero, the event will almost surely happen eventually, at least if the underlying system persists indefinitely.
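A tiny numerical illustration of this point, with a made-up 0.1% per-period chance:

```python
import numpy as np

T = np.arange(10_000)
constant = np.full(T.shape, 0.001)   # 0.1% chance each period, forever
decaying = 0.001 * 0.999 ** T        # per-period chance falling geometrically toward zero

print(np.prod(1 - constant))  # ~4.5e-5 : the event almost surely happens eventually
print(np.prod(1 - decaying))  # ~0.37   : "never" stays plausible only if the chance decays
```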

So I can’t believe a 50% chance that the human population never falls to <50% of its prior peak. Some predict that will result from the current fertility decline, and it could also happen when ems become possible and many humans then choose to convert to becoming ems. Both of these scenarios could fit with the estimated median growth factor of 152, date 2132. But a great many other events could also cause such a population decline later, and forever is a long time.

The situation is even worse for an event where we have theoretical arguments that it must happen eventually. For example, continued exponential economic growth seems incompatible with our physical universe, where there’s a speed of light limit and finite entropy and atoms per unit volume. So it seems crazy to have a 22% chance that growth never slows down. Oddly, the median estimate is that if that does happen it will happen within a century.

The 13% chance that the off-Earth economy never gets larger than the on-Earth economy seems similarly problematic, as we can be quite sure that the universe outside of Earth has more resources to support a larger economy.

For many of these other estimates, we don’t have as strong a theoretical reason to think they must happen eventually, but they still seem like things that each generation or era can choose for itself. So it just takes one era to choose it for it to happen. This casts doubt on the 39% chance that the biosphere never falls to <10% of current level, the 28% chance that ten nukes are never used in war, the 24% chance that authorities never monitor >90% of spoken & written words, and the 22% chance we never have whole-Earth government.

The 28% chance that we never see >1/2 of world economy destroyed in less than a doubling time is more believable given that we’ve never seen that happen in our history. But in light of that, the median of 70 years till it happens seems too short.

Perhaps these high estimates of “never” would be suppressed if respondents had to directly pick “never”, or if polls explicitly offered more, larger growth factor options, such as 1M-1B, 1B-1T, 1T-1Q, etc. It might also help if respondents could express their chances that such high levels are ever reached separately from their expectations for when events would happen given that such levels are reached. These would require more than Twitter polls can support, but seem reasonably cheap should anyone want to support such efforts.


We Colonize The Sun First

Space is romantic; most people are overly obsessed with space in their view of the future. Even so, these remain valid questions:

  1. When will the off-Earth economy be larger than the on-Earth economy?
  2. Where in the solar system will that off-Earth economy be then?

Here is a poll I just did on this last question:

(“Closer” here really means ease of transport, not spatial distance.)

On (1), for many centuries the economic gains from clumping have been very important, and we’ve only spent a few percent of income on energy (and cooling) and raw materials. Also, human bodies are fragile and designed for Earth, making space quite expensive for humans. As long as all these conditions remain, economic activity beyond Earth will remain a small fraction of our total economy.

However, eventually ems or other kinds of human level robots will appear and quickly come to dominate the economy. Space is much easier for them. And eventually, continued (exponential) growth will cause Earth to run out of stuff. At recent rates of growth, that probably won’t happen for at least several centuries, but it will happen.

On (2), human level robots probably appear before Earth runs out of stuff. So even though most science fiction looks at where humans would want to be off Earth, to think about this point in time you should be thinking instead about robots; where will robots want to be? Robots can do fine in a much wider range of physical environments. So ask less which locations are comfortable and safe for robots, and ask more where is there useful stuff to attract them.

Clumping will probably remain important; the big question is how important. The more important clumping is, the longer the off-Earth economy will stay concentrated near Earth, even when other locations are much more attractive in other ways.

Since the main reason to leave Earth at this point in time is that it is running out of energy (and cooling) and raw materials, the key attractions of other locations in the Solar System, aside from nearness to Earth, are their abundance of energy (and cooling) and raw materials.

Robots running reversible computing hardware should spend about as much on making their hardware as they do on the energy (and cooling) to run it. And the sum of these expenses should be a big fraction of an em or other robot economy. So from this point of view, both energy and raw materials are important, and about equally important.

However, it seems to me that planet Earth has a lot more raw materials than it does energy. Our planet is huge; its energy is more limited. And raw materials can be recycled, while energy cannot. So my guess is that Earth will run out of energy long before it runs out of raw materials. Thus the main attraction of non-Earth locations, besides nearness to Earth, will be energy (and cooling). And for energy, the overwhelmingly obvious location is the Sun, which also holds the vast majority of the solar system’s mass, and is on average located “closer” to most things.

Yes, the Sun is very hot, and while at some cost of refrigeration robots could live in or on the Sun itself, it is probably cheaper to live a bit further away, where materials are stable without refrigeration. But that would still be a lot closer to the Sun than to anything else. Dense robot cities on Earth would have already pushed to find computer hardware that can function efficiently at high temperatures. Being near the Sun makes it a lot easier to collect the Sun’s energy without paying extra energy transport costs. And once others are there, they all gain economies of clumping by being together.

Hydrogen and helium are plentiful in the Sun, and for other elements it is probably cheaper to transport mass to the Sun than to transport energy away from it. Probably mostly from Mercury for a long while. Some say computers are more efficient when run at low temperatures, but I don’t see that. So it seems to me that once our descendants go beyond merely clumping around Earth to be near activity there, the main place they will want to go is near the Sun.

Oddly, though space colonization is a hugely popular topic in science fiction, I can’t find examples of stories set in this scenario, of most activity cramming close to the Sun. Some stories mention energy collection happening there, but rarely much other activity, and the story never happens among dense Sun-near activity. As in the poll results above, most stories focus on activity moving in the other direction, away from the Sun. Oh there are a few stories about colonies on Mercury, and of scientific or military visits to the Sun. But not the Sun as the main place that our descendants hang out near after Earth.

In fact, “colonizing the sun” is a well known example of a crazy, impossible idea, considered worthy of ridicule. (“Oh, we’ll do it at night, when it’s cooler.”) So the actual most likely scenario, according to my analysis, is also the one thought the most crazy, and never the setting of stories. Weird.

Added 9July: Some tell me that atoms for fusion can be gained more easily from large gas giant planets than from the Sun, at least until those run out, and that they expect a long period when that is the cheapest way to make energy. For the period when those atoms, or that energy, is transported to near Earth, that is consistent with what I’ve said above.

But if the economy is pushed to move first en masse closer to those gas giants to avoid transport costs of energy or atoms, that would contradict my claim above that the Sun is the first place our descendants move after Earth. Note that we are now entering an era of mass solar energy, which will advance that tech more than fusion tech.


Three Futures

Recently, 1539 people responded to this poll:

This pattern looks intriguingly bimodal. Are there in some sense two essentially different stories about the future? So I did a more detailed poll. Though only ~95 responded (thank you!), that is enough to reveal an apparently trimodal pattern:

Respondents were asked to estimate the number of future creatures that most people today would call “human”, and also the number who would likely call themselves “human”, even if we today might disagree. The y-axis here is in log10 units. In those units, world population today is 9.89, and the number of humans who have ever lived is ~11.03. So the highest number here, 20, is larger than the square of the number alive today.
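Those log10 figures are easy to check (assuming a current population near 7.8 billion and the common ~108 billion estimate of humans ever born):

```python
import math

print(math.log10(7.8e9))      # ~9.89  : world population today, in log10 units
print(math.log10(108e9))      # ~11.03 : rough count of humans who have ever lived
print(2 * math.log10(7.8e9))  # ~19.8  : so 20 exceeds the square of today's population
```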

As you can see, respondents expect a lot more future creatures who call themselves “human”, relative to creatures we would call “human”. And substantial fractions seem to insist that these numbers are higher than any specific number you might mention (20 here). Among the rest, the most popular answer is the 11-12 range (i.e., 0.1-1 trillion “humans”). Note that this can’t be due to a belief that we face huge risks over the next few centuries; that belief suggests the answer <11.

When I set aside the highest (>20) response, and fit a mixture of two lognormals to the rest of each response distribution, I find that regarding creatures we would call “human”, 50.1% of weight goes to a median estimate of 11.9, with (in log10 units) a sigma variation of only 0.22 around that median, 39% of weight to an estimate of 13.4, with a much larger sigma of 2.5, and 11% weight to >20, i.e., very high. Regarding creatures who call themselves “human”, a 45% weight is on estimate 12.0 with sigma 1.2, a 30% weight is on estimate 16.6 with sigma 2.0, and 25% weight on >20. (Such a lognormal mix fit to the first, 4-option poll gives roughly consistent results: medians of 11.6 and 15.4, with 61% weight on the low estimate, when both are forced to have the same sigma of 0.75.)
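For concreteness, here is a minimal sketch of such a fit (made-up log10 responses, not the actual poll data), treating >20 answers as their own category and fitting a two-component Gaussian mixture to the rest in log10 units (i.e., a mix of two lognormals in head-count units):

```python
# Sketch only: hypothetical log10 responses.
import numpy as np
from sklearn.mixture import GaussianMixture

responses = np.array([11.5, 11.8, 12.0, 12.1, 12.3, 13.0, 14.5, 16.0, 18.5, 21.0, 21.0, 21.0])
over_20 = responses > 20                  # ">20" answers get their own probability mass
p_over_20 = over_20.mean()

gm = GaussianMixture(n_components=2, random_state=0).fit(responses[~over_20].reshape(-1, 1))
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {(1 - p_over_20) * w:.2f}, median {mu:.1f}, sigma {np.sqrt(var):.2f}")
print(f"weight {p_over_20:.2f} on >20")
```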

Thus, responses seem to reflect either three discrete categories of future scenarios, or three styles of analysis:

  1. ~1/2 say there will only ever be ~10x as many humans as there have been (~100x as many as are living now), nearly all of them creatures we’d call “human”. Then it all ends.
  2. ~1/4 say our descendants go on to much larger but still limited populations. There are ~300x as many humans as have ever lived, and ~1000x that many weirder creatures, though estimates here range quite widely, over ~4 factors of 10 (i.e., “orders of magnitude”).
  3. ~1/4 say our descendants grow much more, beyond squaring the number who have ever lived. Probably far far beyond. But ~1/2 of these expect that few of these creatures will be ones most of us would call “human”.

The big question: does this trimodal distribution result from a real discreteness in our actual futures and the risks we will face there, or does it mostly reflect different psychological stances toward the future?


Unending Winter Is Coming

Toward the end of the TV series Game of Thrones, a big long (multi-year) winter was coming, and while everyone should have been saving up for it, they were instead spending lots to fight wars. Because when others spend on war, that forces you to spend on war, and then suffer a terrible winter. The long term future of the universe may be much like this, except that future winter will never end! Let me explain.

The key universal resource is negentropy (and time), from which all others can be gained. For a very long time almost all life has run on the negentropy in sunshine landing on Earth, but almost all of that has been spent in the fierce competition to live. The things that do accumulate, such as innovations embodied in genomes, can’t really be spent to survive. However, as sunlight varies by day and season, life does sometimes save up resources during one part of a cycle, to spend in the other part of a cycle.

Humans have been growing much more rapidly than nature, but we also have had strong competition, and have also mostly only accumulated the resources that can’t directly be spent to win our competitions. We do tend to accumulate capital in peacetime, but every so often we have a big war that burns most of that up. It is mainly our remaining people and innovations that let us rebuild.

Over the long future, our descendants will gradually get better at gaining faster and cheaper access to more resources. Instead of drawing on just the sunlight coming to Earth, we’ll take all light from the Sun, and then we’ll take apart the Sun to make engines that we better control. And so on. Some of us may even gain long term views, that prioritize the very long run.

However, it seems likely that our descendants will be unable to coordinate on universal scales to prevent war and theft. If so, then every so often we will have a huge war, at which point we may burn up most of the resources that can be easily accessed on the timescale of that war. Between such wars, we’d work to increase the rate at which we could access resources during a war. And our need to watch out for possible war will force us to continually spend a non-trivial fraction of our accessible resources watching and staying prepared for war.

The big problem is: the accessible universe is finite, and so we will only ever be able to access a finite amount of negentropy. No matter how much we innovate. While so far we’ve mainly been drawing on a small steady flow of negentropy, eventually we will get better and faster access to the entire stock. The period when we use most of that stock is our universe’s one and only “summer”, after which we face an unending winter. This implies that when a total war shows up, we are at risk of burning up large fractions of all the resources that we can quickly access. So the larger a fraction of the universe’s negentropy that we can quickly access, the larger a fraction of all resources that we will ever have that we will burn up in each total war.

And even between the wars, we will need to watch out and stay prepared for war. If one uses negentropy to do stuff slowly and carefully, then the work that one can do with a given amount of negentropy is typically proportional to the inverse of the rate at which one does that work. This is true for computers, factories, pipes, drag, and much else. So ideally, the way to do the most with a fixed pot of negentropy is to do it all very slowly. And if the universe will last forever, that seems to put no bound on how much we can eventually do.

Alas, given random errors due to cosmic rays and other fluctuations, there is probably a minimum speed for doing the most with some negentropy. So the amount we can eventually do may be big, but it remains finite. However, that optimal pace is probably many orders of magnitude slower than our current speeds, letting our descendants do a lot.
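Here’s a toy version of that tradeoff (all numbers hypothetical): the adiabatic cost per operation falls as you slow down, but a fixed error-correction cost per unit time puts a floor under it, so there is an optimal finite speed and a finite total of work.

```python
# Toy model, arbitrary units and made-up coefficients.
import numpy as np

negentropy_budget = 1.0   # total negentropy available
adiabatic_coeff = 1e-6    # per-op cost that falls as 1/t when you slow down
error_rate = 1e-9         # fixed error-correction cost per op per unit time (cosmic rays etc.)

t = np.logspace(-3, 9, 200)                        # time allowed per operation
cost_per_op = adiabatic_coeff / t + error_rate * t
total_ops = negentropy_budget / cost_per_op

print(t[np.argmax(total_ops)], total_ops.max())
# Optimum is at a finite t ~ sqrt(adiabatic_coeff / error_rate), not at t -> infinity,
# so the total achievable work is large but bounded.
```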

The problem is, descendants who go maximally slow will make themselves very vulnerable to invasion and theft. For an analogy, imagine how severe our site security problems would be today if any one person could temporarily “grow” and become as powerful as a thousand people, but only after a one hour delay. Any one intruder who grew while onsite could wreak havoc and then be gone within an hour, before local security forces could grow to respond. Similarly, when most future descendants run very slowly, one who suddenly chose to run very fast might have an outsized influence before the others could effectively respond.

So the bottom line is that if war and theft remain possible for our descendants, the rate at which they do things will be much faster than the most efficient speed. In order to adequately watch out for and respond to attacks, they will have to run fast, and thus more quickly use up their available stocks of resources, such as stars. And when those stocks run out, the future will have run out for them. Like in a Game of Thrones scenario after a long winter war, they would then starve.

Now it is possible that there will be future resources that simply cannot be exploited quickly. Such as perhaps big black holes. In this case some of our descendants could last for a very long time slowly sipping on such supplies. But their activity levels at that point would be much lower than their rates before they used up all the other faster-access resources.

Okay, let’s put this all together into a picture of the long term future. Today we are growing fast, and getting better at accessing more kinds of resources faster. Eventually our growth in resource use will reach a peak. At that point we will use resources much faster than today, and also much faster than what would be the most efficient rate if we could all coordinate to prevent war and theft. Maybe a billion times faster or more. Fearing war, we will keep spending to watch and prepare for war, and then every once in a while we would burn up most accessible resources in a big war. After using up faster access resources, we then switch to lower activity levels using resources that we just can’t extract as fast, no matter how clever we are. Then we use up each one of those much faster than optimal, with activity levels falling after each source is used up.

That is, unless we can prevent war and theft, our long term future is an unending winter, wherein we use up most of our resources in early winter wars, and then slowly die and shrink and slow and war as the winter continues, on to infinity. And as a result we do much less than we could have otherwise; perhaps a billion times less or more. (Though still vastly more than we have done so far.) And this is all if we are lucky enough to avoid existential risk, which might destroy it all prematurely, leading instead to a fully-dead empty eternity.

Happy holidays.


How To Prep For War

In my last two posts I’ve noted that while war deaths have fallen greatly since the world wars, the magnitude and duration of this fall isn’t that far out of line with previous falls over the last four centuries, falls that have always been followed by rises, as part of a regular cycle of war. I also noted that the theory arguments that have been offered to explain why this trend will long continue, in a deviation from the historical pattern, seem weak. Thus there seems to be a substantial and neglected chance of a lot more war in the next century. I’m not the only one who says this; so do many war experts.

If a lot more war is coming, what should you do personally, to help yourself, your family, and your friends? (Assuming your goal is mainly to personally survive and prosper.) While we can’t say that much specifically about future war’s style, timing, or participants, we know enough to suggest some general advice.

1. Over the last century most war deaths have not been battle deaths, and the battle death share has fallen. Thus you should worry less about dying in battle, and more about other ways to die.

2. War tends to cause the most harm near where its battles happen, and near concentrations of supporting industrial and human production. This means you are more at risk if you live near the nations that participate in the war, and in those nations near dense concentrations and travel routes, that is, near major cities and roads.

3. If there are big pandemics or economic collapse, you may be better off in more isolated and economically self-sufficient places. (That doesn’t include outer space, which is quite unlikely to be economically self-sufficient anytime soon.) Of course there is a big tradeoff here, as these are the places we expect to do less well in the absence of war.

4. Most of your expected deaths may happen in scenarios where nukes are used. There’s a big literature on how to prepare for and avoid harms from nukes, so I’ll just refer you to that. Ironically, you may be more at risk from being hurt by nukes in places that have nukes to retaliate with. But you might be more at risk from being enslaved or otherwise dominated if your place doesn’t have nukes.

5. Most of our computer systems have poor security, and so are poorly protected against cyberwar. This is mainly because software firms are usually more eager to be first to market than to add security, which most customers don’t notice at first. If this situation doesn’t change much, then you should be wary of depending too much on standard connected computer systems. For essential services, rely on disconnected, non-standard, or high-security-investment systems.

6. Big wars tend to induce a lot more taxation of the rich, to pay for those wars. So have your dynasty invest more in having more children, relative to fewer richer kids, or invest in assets that are hidden from tax authorities. Or bother less to invest for the long run.

7. The biggest wars so far, the world wars and the Thirty Years War, have been driven by strong ideologies, such as communism and Catholicism. So help your descendants avoid succumbing to strong ideologies, while also avoiding the appearance of publicly opposing locally popular versions. And try to stay away from places that seem more likely to succumb.

8. While old ideologies still have plenty of fire, the big new ideology on the block seems related to woke identity. While this seems to inspire sufficiently confident passions for war, it seems far from clear who would fight who and how in a woke war. This scenario seems worth more thought.

Added 27July: 

9. If big governance changes and social destruction are coming, that may create opportunities for the adoption of more radical social reforms. And that can encourage us to work more on developing such reforms today.


Big War Remains Possible

The following poll suggests that a majority of my Twitter followers think war will decline; in the next 80 years we won’t see a 15 year period with a war death rate above the median level we’ve seen over the last four centuries:

To predict a big deviation from the simple historical trend, one needs some sort of basis in theory. Alas, the theory arguments that I’ve heard re war optimism seem quite inadequate. I thus suspect much wishful thinking here.

For example, some say the world economy today is too interdependent for war. But interdependent economies have long gone to war. Consider the world wars in Europe, or the American civil war. Some say that we don’t risk war because it is very destructive of complex fragile physical capital and infrastructure. But while such capital was indeed destroyed during the world wars, the places most hurt rebounded quickly, as they had good institutional and human capital.

Some note that international alliances make war less likely between alliance partners. But they make war more likely between alliances. Some suggest that better info tells us more about rivals today, and so we are less likely to misjudge rival abilities and motives. But there still seems plenty of room for errors here as “brinkmanship” is a key dynamic. Also, this doesn’t prevent powers from investing in war abilities to gain advantages via credible threats of war.

Some point to a reduced willingness by winners to gain concrete advantages via the ancient strategies of raping and enslaving losers, and demanding great tribute. But we still manage to find many other motives for war, and there are no fundamental obstacles to reviving ancient strategies; tribute is still quite feasible, as is slavery. Also, the peak war periods so far have been associated with ideology battles, and we still have plenty of those.

Some say nuclear weapons have made small wars harder. But that applies only between pairs of nations both of which have nukes, which isn’t most nation pairs. Pairs of nations with nukes can still fight big wars, there are more such pairs today than before, over 80 years there’s plenty of time for some pair to pick a fight, and nuclear war casualties may be enormous.

I suspect that many are relying on modern propaganda about our moral superiority over our ancestors. But while we mostly count humans of the mid twentieth century as morally superior to humans from prior centuries, that was the period of peak war mortality.

I also suspect that many are drawing conclusions about war from long term trends regarding other forms of violence, as in slavery, crime, and personal relations, as well as from apparently lower public tolerance for war deaths and overall apparent disapproval and reluctance regarding war. But just before World War I we had also seen such trends:

Then, as now, Europe had lived through a long period of relative peace, … rapid progress … had given humanity a sense of shared interests that precluded war, … world leaders scarcely believed a global conflagration was possible. (more)

The world is vast, eighty years is a long time, and the number of possible global social & diplomatic scenarios over such a period is vast. So it seems crazy to base predictions of future war rates on inside-view calculations from particular current stances, deals, or inclinations. The raw historical record, and its large long-term fluctuations, should weigh heavily on our minds.


Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.
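As a toy version of that claim (made-up numbers, and pretending the new tech’s advantage is roughly normal across environments): the old tech’s surviving share is just the chance that the advantage is negative in a given environment, which shrinks as the mean advantage grows relative to its variation.

```python
# Toy model, hypothetical numbers.
from scipy.stats import norm

def old_tech_share(mean_advantage, advantage_sd):
    # Fraction of environments where the new tech's advantage is negative,
    # if that advantage is roughly normal across environments.
    return norm.cdf(0, loc=mean_advantage, scale=advantage_sd)

print(old_tech_share(2.0, 0.5))  # ~0.00003: new tech dominates nearly everywhere (asphalt-like)
print(old_tech_share(0.5, 2.0))  # ~0.40   : old tech stays in wide use (wood-like)
```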

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is all applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.


Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of that resource if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.

Their paper was published in JBIS a few months later, their theory now has its own Wikipedia page, and they have attracted at least 15 news articles. Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and myself, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (~10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at any time to correct computing-generated bit errors, at the constant ideal rate of one bit of negentropy per bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use at any time, without any penalty for early withdrawal.

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity to watch for and be ready to respond to attacks.
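A toy accounting of that last point (hypothetical numbers): with a fixed negentropy stock, slower operations cost less each, so more total operations fit in the stock; and nothing here depends on waiting for a colder cosmic background.

```python
# Toy accounting, arbitrary units.
negentropy_stock = 1e30   # hypothetical stock of negentropy, in bits
cost_coefficient = 1.0    # hypothetical: negentropy cost per op = coefficient / time_per_op

for time_per_op in (1.0, 1e3, 1e6):
    ops = negentropy_stock / (cost_coefficient / time_per_op)
    print(time_per_op, ops)  # running 10x slower buys 10x more total operations
```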


How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task.
