Tag Archives: Future

Non-Grabby Legacies

Our descendants will have far more effects on the universe if they become grabby, and most of their expected effects come in that scenario. Even so, as I discussed in my last post, most see only a small chance for that scenario. So what if we remain a non-grabby civilization? What will be our long-term legacies then?

In roughly a billion years, grabby aliens should pass by here, and then soon change this whole area more to their liking. At that point, those grabby aliens will probably have never met any other grabby aliens, and will be very interested in estimating what they might be like, and especially what they might do when the two meet. And one of their main sources of concrete data will be the limited number of non-grabby alien civilizations that they have come across.

Which is all to say that these grabby aliens will be very interested in learning about us, and should be willing to pay substantial costs to do so. So in the unlikely event that our civilization could last the roughly billion years until they get here, those aliens would probably pay substantial costs to protect and preserve us, if that were the cost of learning about us. Of course if they had more advanced tech, they might have other less-fun-for-us ways to achieve that goal.

In the more likely case where we do not last that long, the grabby aliens who arrive here will be looking for any fossils or remnants that they could study. Stuff left here on the surface of the Earth probably won’t survive that long, but stuff left on the surface of geologically dead places like the moon or Mars might well. As could stuff left orbiting between the planets or stars.

Anticipating this outcome, some of us might try to leave data stores about us for them to find. Like we did on the Voyager spacecraft. As our long term legacy. And some of those folks might try to tie their personal revival to such stores. I’m not sure how it could be done, but if you could mix up the info they want with the info that specifies you as an em, maybe you could make it so that the easiest way for them to get the info they want is to revive you.

Of course if a great many people tried this trick, they might bid the “price” down very low. “They want you to revive them for a week to get your info; I only ask one day.” So elites might regulate who is allowed to leave legacy data stores, to keep this privilege to themselves.

Long before grabby aliens got here, they would pass through spacetime events where we’d be active on their past light cone. In fact, sending out a signal from here in most any direction should eventually hit some grabby aliens expanding in our direction. So if we could coordinate with them to send signals out just when they’d be looking at us (such as by sending signals following those from a cosmic explosion), we could tell them about us, and influence them, via such signals.

Some of us might want to try the trick of mixing up their em code with the info aliens want, to force their revival at the receiver end, but the bandwidth to send signals to be received in ~100Myr is rather small. However, as I’ve discussed before, one key function for such signals is that they can prove that they were sent on the date claimed. Later data stores found here are less trustworthy, as they could have been modified in the interim. So perhaps we could send out hash codes to verify datastores saved here now.
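The hash-code idea is essentially a cryptographic commitment: broadcast a small fingerprint now, in a signal whose send date is provable, so that a datastore found here much later can be checked against it. A minimal sketch in Python (the archive contents and function name are illustrative, not from the post):

```python
import hashlib

def commit(datastore: bytes) -> str:
    """Small fingerprint of a large datastore; infeasible to forge a
    different datastore that matches the same fingerprint."""
    return hashlib.sha256(datastore).hexdigest()

# Now: broadcast only this short hash in a date-provable signal.
archive = b"legacy archive: who we were and what we did"
sent_hash = commit(archive)

# Eons later: whoever digs up the datastore recomputes its hash;
# a match shows it is unmodified since the send date.
assert commit(archive) == sent_hash
assert commit(archive + b" [tampered]") != sent_hash
```

The signal need only carry the 64-character hash, not the (much larger) datastore itself.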

We could of course also tell them about any other non-grabby aliens we have discovered. But they’d probably already know about them, assuming they have vastly greater capabilities and tech at least as good as ours.

So is this an exciting legacy to you? A few stories about us that might help some other ambitious civilization calibrate how yet other ambitious civilizations will react upon meeting? No? Well then maybe we should work on figuring out how to become grabby ourselves.

Hail S. Jay Olson

Over the years I’ve noticed that grad students tend to want to declare their literature search over way too early. If they don’t find something in the first few places they look, they figure it isn’t there. Alas, they implicitly assume that the world of research is better organized than it is; usually a lot more search is needed.

Seems I’ve just made this mistake myself. Having developed a grabby aliens concept and searched around a bit, I figured it must be original. But it turns out that in the last five years physicist S. Jay Olson has written a whole sequence of seven related papers, most of which are published, and some of which got substantial media attention at the time. (We’ll change our paper to cite these soon.)

Olson saw that empirical study of aliens gets easier if you focus on the loud (not quiet) aliens, who expand fast and make visible changes, and also if you focus on simple models with only a few free parameters, to fit to the few key datums that we have. Olson variously called these aliens “aggressively expanding civilizations”, “expanding cosmological civilizations”, “extragalactic civilizations”, and “visible galaxy-spanning civilizations”. In this post, I’ll call them “expansionist”, intended to include both his and my versions.

Olson showed that if we assume that humanity’s current date is a plausible expansionist alien origin date, and if we assume a uniform distribution over our percentile rank among such origin dates, then we can estimate two things from data:

  1. from our current date, an overall appearance rate constant, regarding how frequently expansionist aliens appear, and
  2. from the fact that we do not see grabby controlled volumes in our sky, their expansion speed.

Olson only required one more input to estimate the full distribution of such aliens over space and time: an “appearance rate” function f(t), to multiply by the appearance rate constant, giving the rate at which expansionist aliens appear at each time t. Olson tried several different approaches to this function, based on different assumptions about the star formation rate and the rate of local extinction events like supernovae. Different assumptions made only modest differences to his conclusions.

Our recent analysis of “grabby aliens”, done unaware of Olson’s work, is similar in many ways. We also assume visible long-expanding civilizations, we focus on a very simple model, in our case with three free parameters, and we fit two of them (expansion speed and appearance rate constant) to data in nearly the same way that Olson did.

The key points on which we differ are:

  1. My group uses a simple hard-steps-power-law for the expansionist alien appearance rate function, and estimates the power in that power law from the history of major evolutionary events on Earth.
  2. Using that same power law, we estimate humanity’s current date to be very early, at least if expansionist aliens do not arrive to set an early deadline. Others have estimated modest degrees of earliness, but they have ignored the hard-steps power law. With that included, we are crazy early unless both the power is implausibly low, and the minimum habitable star mass is implausibly large.
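To see why the hard-steps power law makes us look so early: if the chance of life completing n hard steps by time t scales as t^n, then humanity’s percentile rank among possible origin dates shrinks rapidly as n grows. A toy sketch (the habitable-window size and powers here are made-up illustrative numbers, not estimates from the paper):

```python
def earliness_percentile(t_now: float, t_max: float, n: int) -> float:
    """Under a t^n hard-steps law, the fraction of origin dates in a
    habitable window ending at t_max that occur by t_now."""
    return (t_now / t_max) ** n

# Toy numbers: suppose habitable dates run to 5x the current cosmic age.
p_easy = earliness_percentile(1.0, 5.0, 1)  # one hard step: 20th percentile
p_hard = earliness_percentile(1.0, 5.0, 6)  # six hard steps: ~0.006th percentile
```

With more hard steps the appearance rate is weighted ever more heavily toward late dates, so an early arrival like ours becomes increasingly surprising without something, like a grabby-alien deadline, to cut off the late dates.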

So we seem to have something to add to Olson’s thoughtful foundations.

Looking over the coverage by others of Olson’s work, I notice that it all seems to completely ignore his empirical efforts! What they mainly care about seems to be that his having published on the idea of expansionist aliens licensed them to speculate on the theoretical plausibility of such aliens: How physically feasible is it to rapidly expand in space over millions of years? If physically feasible, is it socially feasible, and if so, would any civilization actually choose it?

That is, those who commented on Olson’s work all acted as if the only interesting topic was the theoretical plausibility of his postulates. They showed little interest in the idea that we could confront a simple aliens model with data, to estimate the actual aliens situation out there. They seem stuck assuming that this is a topic on which we essentially have no data, and thus can only speculate using our general priors and theories.

So I guess that should become our central focus now: to get people to see that we may actually have enough data now to get decent estimates on the basic aliens situation out there. And with a bit more work we might make much better estimates. This is not just a topic for theoretical speculation, where everyone gets to say “but have you considered this other scenario that I just made up, isn’t it sorta interesting?”

Here are some comments via email from S. Jay Olson:

It’s been about a week since I learned that Robin Hanson had, in a flash, seen all the basic postulates, crowd-sourced a research team, and smashed through his personal COVID infection to present a paper and multiple public talks on this cosmology. For me, operating from the outskirts of academia, it was a roller coaster ride just to figure out what was happening.

But, what I found most remarkable in the experience was this. Starting from two basic thoughts — 1) some fraction of aliens should be high-speed expansionistic, and 2) their home galaxy is probably not a fundamental barrier to expansion — so many conclusions appear inevitable: “They” are likely a cosmological distance from us. A major fraction of the universe is probably saturated by them already. Sufficiently high tech assumptions (high expansion speed) means they are likely invisible from our vantage point. If we can see an alien domain, it will likely cover a shockingly large angle in the sky. And the key datum for prediction is our cosmic time of arrival. It’s all there (and more), in both lines of research.

Beyond that, Robin has a knack for forcing the issue. If their “hard steps model” for the appearance rate of life is valid (giving f(t) ~ t^n), there aren’t too many ways to solve humanity’s earliness problem. Something would need to make the universe a very different place in the near cosmic future, as far as life is concerned. A phase transition resulting in the “end of the universe” would do it — bad news indeed. But the alternative is that we are, literally, the phase transition.

What Is At Stake?

In the traditional Christian worldview, God sets the overall path of human history, a history confined to one planet for a few thousand years. Individuals can choose to be on the side of good or evil, and maybe make a modest difference to local human experience, but they can’t change the largest story. That is firmly in God’s hands. Yet an ability to personally choose good or evil, or to make a difference to mere thousands of associates, seemed to be plenty enough to motivate most Christians to action.

In a standard narrative of elites today, the entire future of value in the universe sits in our current collective hands. If we make poor choices today, such as about global warming or AI, we may soon kill ourselves and prevent all future civilization, forever destroying all sources of value. Or we might set our descendants down a permanently perverse path, so that even if they never go extinct they also never realize most of the universe’s great potential. And elites today tend to lament that these far grander stakes don’t seem to motivate many to action.

Humans seem to have arrived very early in the history of the universe, a fact that seems best explained by a looming deadline: grabby/aggressive aliens will control all the universe volume within a billion years, and so we had to show up before that deadline if we were to show up at all.

So now we have strong evidence that all future value in the universe does not sit in our hands. What does sit in our collective hands is:
A) the experiences of our descendants for roughly (within a factor of ten around) the next billion years, before they meet aliens, and
B) our influence on the larger mix of alien cultures in the eras after many alien civilizations meet and influence each other.

Now a billion years is in fact a very long time, a duration during which we could have an enormous number of descendants. So even that first part is a big deal. Just not as big a deal as many have been saying lately.

On the longer timescale, the question is not “will there be creatures who find their lives worth living?” We can be pretty assured that the universe will be full of advanced complex creatures who choose to live. The question is instead more “How much will human-style attitudes and approaches influence the hundreds or more alien civilizations with which we may eventually come in contact?”

It is less about whether there will be any civilizations, and more about what sorts of civilizations they will be. Yes, we should try to not go extinct, and yes we should try to find better paths and lifestyles for our descendants. But we should also aspire, and to a similar degree, to become worthy of emulation, when compared to a sea of alien options.

Unless we can offer enough unique and valuable models for emulation, and actually persuade or force such emulation, it won’t really matter so much whether we survive to meet aliens. From that point on, what matters is what difference we make to the mix: whether we influence the mix, and whether that mix is better off as a result of our influence.

Not an easy goal, and not one we are assured to achieve. But we have maybe a billion years to work on it. And at least we can relax a bit; not all future universe value depends on our actions now. Just an astronomical amount of it. The rest is in “God’s” hands.

Join The Universe

If you spend most of your time arguing with your immediate family, then even the family members with whom you most disagree are at the center of your world, and greatly define you. Or if you spend most of your time focusing on the “hot” topics and status conflicts within a particular academic community, then even the people there with whom you most disagree are your close colleagues, and greatly define you. Or if you spend most of your time arguing about US politics, then even those who disagree with you most about that are near the center of your world, and define you.

That is, in general you are defined more by the topics on which you argue, and the communities in which you argue, than by which side you take on such topics. The people you most hate, you hate exactly because they are close to you, and in your way, as they are in your world.

These worlds I listed, even the US politics world, seem to me just too small and provincial to spend all my time there. So I invite you to instead, at least some of the time, come join my favorite world. Join the place where we argue about the biggest issues we can find in the universe. A conversation that may eventually be joined by creatures across many eras and perhaps even civilizations. Even if I completely disagree with you on such things, if you focus on arguing about them, then you are in my world. And you will help define me.

Here are 42 BIG questions:

  1. Did there have to be something, rather than nothing?
  2. Is the universe infinite, in spacetime or entropy?
  3. Why is entropy always lower in past directions?
  4. Are the speed of light, and forward causation, hard limits on info & influence?
  5. What is most of the universe made of, & can the other stuff make complex life & civs?
  6. Where are the universe’s largest reservoirs of extractable negentropy, and how fast can they flow?
  7. How cheaply can these reservoirs be defended & maintained, and thus how long can they last?
  8. In which of the many possible filter steps does most of the great filter usually lie?
  9. How far away is the nearest alien civilization?
  10. What % of alien civs evolve intelligence via routes other than our social conflict route?
  11. How willing are most aliens to cooperate with us, instead of competing?
  12. When will growth in tech abilities slow down due to running out of useful things to learn?
  13. When will growth of solar system economy slow down due to congestion & exhaustion?
  14. When will growth of Earth economy slow down due to congestion & exhaustion?
  15. When will artificial machines replace biology in running & doing things?
  16. Will that be late enough for genetic engineering or global warming to matter much?
  17. When will the dominant creatures around take a long view, or an abstract view?
  18. What types of competition and coordination (e.g., governance) institutions will dominate in which social areas when and where?
  19. What forms of governance will be most common in which different future eras?
  20. When will mental organization of dominant creatures deviate greatly from that of humans now?
  21. After that point, which kinds of minds will win which competitions where?
  22. After that point, what units of mental or social organization will matter most, and when or where?
  23. After that point, what will minds value, and at what levels will they most encode and coordinate values?
  24. What were the key causes and enablers of each past key growth mode (life, brains, foraging, farming, & industry)?
  25. When will the next growth mode start, what will enable it, and how will it differ?
  26. When, if ever, will all that we caused and care about end and die?
  27. What will be our deepest future collapse, short of extinction, how deep will that be, and how long to recover?
  28. When will be the next major civilization collapse, what % of world will that take down, and how different is the next civ?
  29. When will be the next big war, and will many nukes be used?
  30. When, if ever, will external genetic, econ, or military competition again drive large scale policy & governance choices?
  31. Where in space-time are most of the human like creatures who believe they are experiencing our place in space-time?
  32. What are our strongest levers of influence today over the universe?
  33. How long will how much non-human nature remain, and how wild will that be?
  34. What are the actual motivations that drive most human behavior today?
  35. What has been driving the main changes in values and attitudes over the last few centuries, and what further changes will they induce?
  36. What new practices and institutions can enable greatly increased rates of innovation?
  37. When, if ever, will more general & reliable truth-oriented institutions (e.g., prediction markets) offer estimates on a wide range of subjects?
  38. When, if ever, will average human fertility stop falling, and total human population rise?
  39. When, if ever, will human per-capita income stop rising?
  40. Will human per-capita energy usage ever start rising greatly again?
  41. When will humans become effectively immortal, at least regarding internal decay?
  42. From where did Earth life originate, and when did it start there?

(I will add more to this list as I hear good suggestions.)

Hamming’s famous question is “What are the most important problems in your field?”, followed quickly by “Why aren’t you working on them?” If one of your fields is the universe, then why aren’t you working on one of these big questions?

Added 9a: I’ve mostly left out questions where it is unclear if the usual debates are about something real, rather than about how we use words.

Elois Ate Your Flying Car

J Storrs Hall’s book Where Is My Flying Car?: A Memoir of Future Past, told me new things I didn’t know about flying cars. The book is long, and says many things about tech and the future, including some with which I disagree. But his main thesis is a contrarian one that I’ve heard many times from engineers over my lifetime. Which is good, because by putting it all in one place, I can now tell you about it, and tell you that I agree:

We have had a very long-term trend in history going back at least to the Newcomen and Savery engines of 300 years ago, a steady trend of about 7% per year growth in usable energy available to our civilization. …

One invariant in futurism before roughly 1980 was that predictions of social change overestimated, and of technological change underestimated, what actually happened. Now this invariant itself has been broken. With the notable exception of information technology, technological change has slowed and social change has mounted its crazy horse. …

In the 1970s, the centuries-long growth trend in energy (the “Henry Adams curve”) flatlined. Most of the techno-predictions from 50s and 60s SF had assumed, at least implicitly, that it would continue. The failed predictions strongly correlate to dependence on plentiful energy. American investment and innovation in transportation languished; no new developments of comparable impact have succeeded highways and airliners. …

The war on cars was handed off from beatniks to bureaucrats in the 70s. Supersonic flight was banned. Bridge building had peaked in the 1960s. … The nuclear industry found its costs jacked up by an order of magnitude and was essentially frozen in place. Interest and research in nuclear physics languished. … Green fundamentalism has become the unofficial state church of the US (and to an even greater extent Western Europe). …

In technological terms, the bottom line is simple: we could very easily have flying cars today. Indeed we could have had them in 1950, but for the Depression and WWII. The proximate reason we don’t have them now is the Henry Adams curve flatline; the reasons for the flatline have taken a whole book to explore. We have let complacent nay-sayers metamorphose from pundits uttering “It can’t be done” predictions a century ago, into bureaucrats uttering “It won’t be done” prescriptions today. …

Nanotech would enable cheap home isotopic separation. Short of that, it would enable the productivity of the entire US military-industrial complex in an area the size of, say, Singapore. It’s available to anyone who has the sense to follow Feynman’s pathway and work in productive machinery instead of ivory-tower tiddley-winks. The amount of capital needed for a decent start is probably similar to a well-equipped dentist’s office.

If our pre-1970 energy use trend had continued, we’d now use ~30 times as much energy per person, mostly via nuclear power. Which is enough energy for cheap small flying cars. The raw fuel cost of nuclear power is crazy cheap; almost all the cost today is for reactors to convert power, a cost that has been made and kept high via crazy regulation and liability. Like the crazy restrictive regulations that now limit innovation in cars and planes, destroyed the small plane market, and prevented the arrival of flying cars.

Anything that goes into a certificated airplane costs ten times what the thing would otherwise cost. (As a pilot and airplane owner, I have personal experience of this.) It’s a lot like the high cost of human medical drugs compared with the very same drugs for veterinary use.… Building of airports remains so regulated (not just by the FAA) that only one major new one (KDEN) has been built [since 1990]. …

It seems virtually certain that if we had had [recent] cultural and regulatory environment … from, say, 1910, the development of universal private automobiles would have been suppressed. … By the end of the 70s there was virtually nothing about a car that was not dictated by regulation.

With nuclear power, we’d have had far more space activity by now. Without it, most innovation in energy intensive things has gone into energy efficiency, and into smaller ecological footprints. Which has cut growth and prevented many things. The crazy regulation that killed nuclear energy is quite unjustified, not only because according to standard estimates nuclear causes far fewer deaths, but also because standard estimates are greatly inflated via wide use of a “linear no threshold model”, regarding which there are great doubts:

Several places are known in Iran, India and Europe [with high] natural background radiation … However, there is no evidence of increased cancers or other health problems arising from these high natural levels. The millions of nuclear workers that have been monitored closely for 50 years have no higher cancer mortality than the general population but have had up to ten times the average dose. People living in Colorado and Wyoming have twice the annual dose as those in Los Angeles, but have lower cancer rates. Misasa hot springs in western Honshu, a Japan Heritage site, attracts people due to having high levels of radium, with health effects long claimed, and in a 1992 study the local residents’ cancer death rate was half the Japan average.

To explain this dramatic change of regulation and litigation, Hall says culture changed:

Western culture had essentially succeeded in supplying the needs of the physical layers of [Maslow’s] hierarchy, including the security of a well-run society; and that the shift to the Eloi [of Wells’s Time Machine story] could be thought of as people beginning to take those things—the Leave It To Beaver suburban life—for granted, and beginning to spend the bulk of their energy, efforts, and concerns on the love, esteem, and self-actualization levels. … “Make Love, Not War” slogan of the 60s … neatly sums up the Eloi shift from bravery to sensuality. …

The nuclear umbrella meant that economic, political, and moral strength of the society was no longer at a premium.

I’ll say more about explaining this cultural change in another post.

Russell’s Human Compatible

My school turned back on its mail system as we start a new semester, and a few days ago out popped Stuart Russell’s book Human Compatible (published last Oct.), with a note inside dated March 31. Here’s my review, a bit late as a result.

Let me focus first on what I see as its core thesis, and then discuss less central claims.

Russell seems to say that we still have a lot of time, and that he’s only asking for a few people to look into the problem:

The arrival of superintelligent AI is inherently unpredictable. … My timeline of, say, eighty years is considerably more conservative than that of the typical AI researcher. … If just one conceptual breakthrough were needed, … superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I’m, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one. (pp.77-78)

Scott Alexander … summed it up brilliantly: … The skeptic’s position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers,” meanwhile [take exactly the same position.] (pp.169-170)

Yet his ask is actually much larger: unless we all want to die, AI and related disciplines must soon adopt a huge and expensive change to their standard approach: we must stop optimizing using simple fixed objectives, like the way a GPS tries to minimize travel time, or a trading program tries to maximize profits. Instead we must make systems that attempt to look at all the data on what all humans have ever done to infer a complex continually-updated integrated representation of all human preferences (and meta-preferences) over everything, and use that complex representation to make all automated decisions. Modularity be damned.

Sim Argument Confidence

Nick Bostrom once argued that you must choose between three options re the possibility that you are now actually living in and experiencing a simulation created by future folks to explore their past: (A) it’s true, you are most likely a sim person living in a sim, either of this sort or another; (B) future folk will never be able to do this, because it just isn’t possible, they die first, or they never get rich and able enough; or (C) future folk can do this, but they do not choose to do it much, so that most people experiencing a world like yours are real humans now, not future sim people.

This argument seems very solid to me: future folks either do it, can’t do it, or choose not to. If you ask folks to pick from these options you get a simple pattern of responses:

Here we see 40% in denial, hoping for another option, and the others about equally divided among the three options. But if you ask people to estimate the chances of each option, a different picture emerges. Lognormal distributions (which ignore the fact that chances can’t exceed 100%) are decent fits to these distributions, and here are their medians:

So when we look at the people who are most confident that each option is wrong, we see a very different picture. Their strongest confidence, by far, is that they can’t possibly be living in a sim, and their weakest confidence, by a large margin, is that the future will be able to create sims. So if we go by confidence, poll respondents’ favored answer is that the future will either die soon or never grow beyond limited abilities, or that sims are just impossible.
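The lognormal fits behind those medians can be sketched simply: fitting a lognormal to stated chances and reading off its median amounts to taking the geometric mean of the responses. A minimal sketch with made-up poll answers (and, as noted above, a lognormal ignores that chances can’t exceed 100%):

```python
import math

def lognormal_median(chances):
    """Median of a max-likelihood lognormal fit to the responses:
    exp of the mean of their logs, i.e. their geometric mean."""
    logs = [math.log(c) for c in chances]
    return math.exp(sum(logs) / len(logs))

# Hypothetical chance estimates for one option, spanning orders of magnitude:
responses = [0.001, 0.01, 0.1, 0.5, 0.9]
median = lognormal_median(responses)
```

Because the geometric mean is dragged far down by a few very small answers, this summary highlights the respondents who are most confident an option is wrong, which is why it paints such a different picture than the pick-one poll.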

My answer is that the future mostly won’t choose to sim us:

I doubt I’m living in a simulation, because I doubt the future is that interested in simulating us; we spend very little time today doing any sort of simulation of typical farming or forager-era folks, for example. (More)

If our descendants become better adapted to their new environment, they are likely to evolve to become rather different from us, so that they spend much less of their income on sim-like stories and games, and what sims they do like should be overwhelmingly of creatures much like them, which we just aren’t. Furthermore, if such creatures have near subsistence income, and if a fully conscious sim creature costs nearly as much to support as future creatures cost, entertainment sims containing fully conscious folks should be rather rare. (More)

If we look at all the ways that we today try to simulate our past, such as in stories and games, our interest in sims of particular historical places and times fades quickly with our cultural distance from them, and especially with declining influence over our culture. We are especially interested in Ancient Greece, Rome, China, and Egypt, because those places were most like us and most influenced us. But even so, we consume very few stories and games about those eras. And regarding all the other ancient cultures even less connected to us, we show far less interest.

As we look back further in time, we can track declines both in world population and in our interest in stories and games about those eras. During the farming era population declined by about a factor of two every millennium, but it seems to me that our interest in stories and games of those eras declines much faster. There’s far less than half as much interest in 500AD as in 1500AD, and that pattern continues for each 1000-year step backward.

So even if future folk make many sims of their ancestors, people like us probably aren’t often included. Unless perhaps we happen to be especially interesting.

Remote Work Specializes

We seem on track to spend far more preventing pandemic health harm than we will suffer from it, which seems too much spending given the apparent low elasticity of harm w.r.t. prevention. But an upside is that some of this prevention effort is being invested in remote work, which is helping to develop and improve such capacities. Which matters because remote work (a.k.a. telecommuting) is my guess for the most important neglected trend over the next 30 years. (At least of trends we can foresee now.)

My recent polls put remote work at #24 out of 44 future trends, which IMHO greatly underrates it. AGI, biotech, crypto, space, and quantum computing are far overrated (due to drama & status). Automation matters, but will continue steadily as it has for many decades, not causing much trend deviation. Global warming, non-carbon energy, the rise of Asia, falling fertility, and the rise of cybersecurity and privacy are important trends, but their trend deviation implications tend to be correctly anticipated. However, I see remote work as big, as mattering more than trends in migration, augmented/virtual reality, and self-driving cars, and as driving those trends. And remote work implications seem neglected and unappreciated.

Remote work has been a topic of speculation for many decades, so likely somewhere out there is an author who sees it right. But I haven’t yet found that author. I’ve recently read a dozen or so recent discussions of remote work, and all of them seem to miss the main reason that remote work will be such a big deal: specialization due to agglomeration (i.e., more interaction options). The two most formal math analyses I could find actually explicitly assume that remote work, in contrast to traditional work, produces no agglomeration gains! In contrast, these discussions get closer to the truth: Continue reading "Remote Work Specializes" »


What Future Areas Matter Most?

I made a list of 44 possibly important future areas, and just did 22 Twitter polls (with N from 379 to 1178), each time asking this question re 4 areas:

Over next 30 years, changes in which are likely to matter most?

I fit the answers to a simple model wherein respondents either pick randomly (~26% of the time) or pick in proportion to each area’s (non-negative) “strength”. Here are the estimated area strengths, relative to the strongest set to 100:

Some comments:

  1. The area with the largest modeling error is migration, so politics may be messing that up.
  2. Governance mechanisms looks surprisingly strong, especially relative to its media attention.
  3. The top 7 areas hold half the total strength, and there’s a big drop to #8. ~20% is in automation, AGI, and self-driving cars.
  4. 19 areas have strengths lying within about the same factor of two. So many things seem important.
  5. Relative to these strength ratings, it seems to me that media focus is only roughly correlated. Media seems disproportionately focused on areas involving more direct social conflict.
  6. Areas add roughly linearly. For example, biotech arguably includes life extension, meat, materials, and pandemics, and its strength is near their strength sum.
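The mixture model described above can be sketched in a few lines. This is my illustrative reconstruction, not the actual fitting code; the example strengths are made up, and `eps` is set to the ~26% random-pick rate estimated above:

```python
# Sketch of the poll mixture model: a fraction eps of respondents pick one
# of the 4 areas at random; the rest pick in proportion to each area's
# non-negative "strength". (Illustrative only.)

def predicted_shares(strengths, eps=0.26):
    """Expected vote shares for one 4-area poll."""
    n, total = len(strengths), sum(strengths)
    return [eps / n + (1 - eps) * s / total for s in strengths]

def implied_strengths(shares, eps=0.26):
    """Invert the mixture: strip the random-pick floor, then renormalize."""
    n = len(shares)
    raw = [max(p - eps / n, 0.0) for p in shares]
    total = sum(raw)
    return [r / total for r in raw]

shares = predicted_shares([100, 50, 25, 10])   # hypothetical strengths
recovered = implied_strengths(shares)          # proportional to [100, 50, 25, 10]
```

With many overlapping polls, the same forward model would instead be fit jointly across all 22 polls, but the inversion logic per poll is as above.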

Future Timeline, in Econ Growth Units

Polls on the future often ask by what date one expects to see some event X. That approach, however, is sensitive to expectations on overall rates of progress. If you expect progress to speed up a lot, but aren’t quite sure when that will start, your answers for quite different post-speed-up events should all cluster around the date at which you expect the speed-up to start.

To avoid this problem, I just did 20 related Twitter polls on the distant future, all using econ growth factors as the timeline unit: “By how much more will world economy grow between now and the 1st time when X”.

POLLS ON FUTURE (please retweet)

World economy (& tech ability) increased by ~10x between each: 3700BC, 800BC, 1700, 1895, 1966, 2018. In each poll, assume more growth, & give best (median) guess of how much more grow by then.

Note that I’ve required a key assumption: growth continues indefinitely.

The four possible growth factor answers for each poll were <100, 100-10K, 10K-1M, and “>1M or never”. If the average growth rate from 1966 to 2018 continues into the future, then these factor milestones of 100, 10K, and 1M will be reached in the years 2122, 2226, and 2330. That is, the world economy has lately been growing by roughly a factor of 100 every 104 years.
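That arithmetic can be checked with a small sketch, assuming the constant growth rate stated above (100x every 104 years, starting from 2018):

```python
import math

def milestone_year(factor, base_year=2018, factor_per_period=100, period_years=104):
    """Year the economy reaches `factor` times its base_year size,
    under constant growth of factor_per_period every period_years."""
    years_needed = period_years * math.log(factor) / math.log(factor_per_period)
    return base_year + years_needed

[round(milestone_year(f)) for f in (100, 1e4, 1e6)]  # -> [2122, 2226, 2330]
```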

I’ve found that lognormals often fit well to poll response distributions over positive numbers that vary by many orders of magnitude. So I’ve fit these poll responses to a lognormal distribution, plus a chance that the event never happens. Here are the poll % answers, % chance it never happens, and median dates (if it happens) assuming constant growth. (Polls had 95 to 175 responses each.)
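A minimal sketch of that fitting procedure, reconstructed from the description above. This is my own illustration using a crude grid search, not the actual method used; the bin edges assume the four answer options <100, 100-10K, 10K-1M, and the top option lumping together larger factors and “never”:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bin_shares(mu, sigma, p_never):
    """Predicted shares for the 4 poll bins. Growth factors are lognormal,
    i.e. normal(mu, sigma) on a log10 scale, plus a point mass p_never
    on 'never' that lands in the top bin."""
    edges = [2.0, 4.0, 6.0]                      # log10 of 100, 10K, 1M
    c = [norm_cdf((e - mu) / sigma) for e in edges]
    happens = 1.0 - p_never
    return [happens * c[0],
            happens * (c[1] - c[0]),
            happens * (c[2] - c[1]),
            happens * (1.0 - c[2]) + p_never]

def fit(observed):
    """Crude grid search minimizing squared error on the 4 bin shares."""
    best, best_err = None, float("inf")
    for m in range(0, 41):                       # mu in [0, 8]
        for k in range(1, 21):                   # sigma in [0.2, 4.0]
            for j in range(0, 26):               # p_never in [0, 0.5]
                mu, sigma, p_never = m / 5.0, k / 5.0, j / 50.0
                pred = bin_shares(mu, sigma, p_never)
                err = sum((p - o) ** 2 for p, o in zip(pred, observed))
                if err < best_err:
                    best, best_err = (mu, sigma, p_never), err
    return best
```

From the fitted `mu` one can read off the median growth factor (10**mu, if the event happens), which converts to a median date via the constant-growth assumption.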

Many of these estimates seem reasonable, or at least not crazy. On the whole I’d put this up against any other future timeline I know of. But I do have some complaints. For example, 21 years seems way too short for when <10% of human protein comes from animals. And 35 years until <20% of energy comes from fossil fuels seems more possible, but still rather ambitious.

I also find it implausible that median estimates for these four events cluster so closely: ems to appear, frozen humans to be revived, AI to earn 9x humans, and AI to earn 9x humans+ems. They are all in the same ~2x growth factor range (factors 670-1350), and thus all appear in the same constant-growth 16 year period 2165-2181. As if these are very similar problems, or even the same problem, and as if they reject what seems obvious to me: it is much harder for AI to compete cost-effectively with ems than with humans. (Note also that these are far later dates than often touted in AI forecasts.)

My main complaint, however, is of overly high chances that things never happen. Such high chances make sense if you think something might actually be completely impossible. For example, a 46% chance of never finding aliens makes sense if aliens just aren’t there to be found. A 25% chance that human lifespan never goes over 1000 might result if that is biologically impossible, and an 11% chance of no colony to another star could fit with such travel being physically impossible.

A 31% chance nukes never give >50% of energy could result from them being fundamentally less efficient than collecting sunlight. And a 6% chance that AI never beats humans, a 12% chance that we never get ems, and a 19% chance that AI never beats ems could all make sense if you think AI or ems are just impossible. (Though I’m not sure these numbers are consistent with each other.) Most of these impossibility chances seem too high to me, but not crazy.

But high estimates of “never” make a lot less sense for things we know to be possible. If there is a small chance of an event happening each time period (or each growth doubling period), then unless that chance is falling exponentially toward zero, the event will almost surely happen eventually, at least if the underlying system persists indefinitely.
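The arithmetic behind this point is simple. With an assumed, purely illustrative 1% chance per period, the chance of “never” decays exponentially:

```python
def never_prob(chance_per_period, periods):
    # Chance an event never happens, given an independent constant
    # chance each period; this shrinks exponentially toward zero.
    return (1 - chance_per_period) ** periods

never_prob(0.01, 100)    # ~0.37
never_prob(0.01, 1000)   # ~0.00004
```

So sustaining, say, a 50% chance of “never” over an indefinite future requires the per-period chance itself to fall rapidly toward zero.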

So I can’t believe a 50% chance that the human population never falls to <50% of its prior peak. Some predict that will result from the current fertility decline, and it could also happen when ems become possible and many humans then choose to convert to becoming ems. Both of these scenarios could fit with the estimated median growth factor 152, date 2132. But a great many other events could also cause such a population decline later, and forever is a long time.

The situation is even worse for an event where we have theoretical arguments that it must happen eventually. For example, continued exponential economic growth seems incompatible with our physical universe, where there’s a speed of light limit and finite entropy and atoms per unit volume. So it seems crazy to have a 22% chance that growth never slows down. Oddly, the median estimate is that if that does happen it will happen within a century.

The 13% chance that the off-Earth economy never gets larger than the on-Earth economy seems similarly problematic, as we can be quite sure that the universe outside of Earth has more resources to support a larger economy.

For many of these other estimates, we don’t have as strong a theoretical reason to think they must happen eventually, but they still seem like things that each generation or era can choose for itself. So it just takes one era to choose it for it to happen. This casts doubt on the 39% chance that the biosphere never falls to <10% of current level, the 28% chance that ten nukes are never used in war, the 24% chance that authorities never monitor >90% of spoken & written words, and the 22% chance we never have whole-Earth government.

The 28% chance that we never see >1/2 of world economy destroyed in less than a doubling time is more believable given that we’ve never seen that happen in our history. But in light of that, the median of 70 years till it happens seems too short.

Perhaps these high estimates of “never” would be suppressed if respondents had to directly pick “never”, or if polls explicitly offered more, larger, growth factor options, such as 1M-1B, 1B-1T, 1T-1Q, etc. It might also help if respondents could express their chances that such high levels might ever be reached separately from their expectations for when events would happen given that such levels are reached. These would require more than Twitter polls can support, but seem reasonably cheap should anyone want to support such efforts.
