Biting Evolution Bullets

Browsing through The Other Change of Hobbit bookstore near my Berkeley office ten years ago, I was enchanted to find Far Futures, "five original novellas … all set at least ten thousand years in the future."  My favorite was Greg Bear’s Judgment Engine, and Bear says his City at the End of Time (out in July) "is set in large part a hundred trillion years in the future." 

So I am proud to be included in Year Million, published today, "fifteen … essays by notable journalists and scholars, … projecting the universe as it might be in the year 1,000,000 C.E."  I begin:

The future is not the realization of our hopes and dreams, a warning to mend our ways, an adventure to inspire us, nor a romance to touch our hearts. The future is just another place in spacetime. Its residents, like us, find their world mundane and morally ambiguous relative to the heights of fiction and fantasy. …

We will use evolutionary game theory to outline the cycle of life of our descendants in one million years. What makes such hubristic conjecture viable is that we will (1) make some strong assumptions, (2) describe only a certain subset of our descendants, and (3) describe only certain physical aspects of their lives. I estimate at least a five percent chance that this package of assumptions will apply well to at least five percent of our descendants.

(No other author offered confidence estimates.)   My use of evolutionary analysis marks me as a "bullet biter," to use Scott Aaronson’s colorful term – I tend to accept the apparently uncomfortable implications of well-supported theories.  Many "bullet dodgers" disapprove.  For example, riffing off Nick Bostrom’s Where are They? (which rephrases my Great Filter), author Charlie Stross said:


The Great Filter argument isn’t the only answer to the Fermi Paradox. More recently, Milan M. Cirkovic has written a paper, Against the Empire, in which he criticizes the empire-state model of posthuman civilization that is implicit in many Fermi Paradox treatments. … There is a widespread implicit belief among people who look at the topic … in manifest destiny, expansion to fill all possible evolutionary niches, and the inevitability of any species that develops the technology to explore deep space using that technology to colonize it. As Cirkovic points out, this model is based on a naive extrapolation of historical human models which may be utterly inapplicable to posthuman or postbiological societies.

Here is Cirkovic’s main argument:

There is no proof that "colonizing other stars and galaxies" constitutes anything more than a subset of zero-measure trajectories in the evolutionary space … The transition to postbiological phase obviates most, if not all, biological motivations. …  The imperative for filling the complete ecological niche … is an essentially biological part of motivation for any species, including present-day humans. … But expanding and filling the ecological niches are not the intrinsic property of life or intelligence – they are just consequences of [today’s] predominant evolutionary mechanism, i.e. natural selection. It seems logically possible to imagine a situation in which some other mechanism of evolutionary change, like the Lamarckian inheritance or genetic drift, could dominate and prompt different types of behaviour.

This is a classic bullet-dodger move – when facing calculations suggesting that an accepted theory predicts an unwelcome consequence, they do not offer contrary calculations; they just note that contrary calculations might exist.  Here is the closest Cirkovic gets to a contrary calculation:

Biological imperatives, like the survival until the reproduction age, … will become marginal, if not entirely extinct as valid motivations for individual and group actions.  Let us, for the sake of elaborated example, consider the society of uploaded minds living in virtual cities of Greg Egan’s Diaspora – apart from some very general energy requirements, making copies of one’s mind and even sending some or all of them to intergalactic trips (with subsequent merging of willing copies) is cheap and uninfluenced by any biological imperative whatsoever; the galaxy is simply large and they are expanding freely. … There is no genetic heritage to be passed on, no truly threatening environment to exert selection pressure, … no biotic competition, no kin selection, no pressure on (digital) ecological boundaries, no minimal viable populations.

But there can be genes without DNA, and selection pressure without violence or great expense.  And the fact that Egan did not talk about selection effects does not even remotely suggest they are absent in the situation he describes.  Note Cirkovic is not arguing for humility about future motives; he thinks he knows we will want central computational efficiency:

The optimization of all activities, most notably computation is the existential imperative. … An advanced civilization willingly imposes some of the limits on the expansion.  Expansion beyond some critical value will tend to undermine efficiency, due both to latency, bandwidth and noise problems.

In Year Million, Robert Bradbury similarly claims we will rearrange our central star system to maximize central CPU cycles, memory, and internal bandwidth (and minimize internal latency) – distant stars are only interesting to watch, to harvest energy and mass to import to the central star, and to visit as the central star slowly wanders.  While this is a surprisingly common view, I know of no selection calculation suggesting a central computing imperative.
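
For a rough sense of the latency side of that tradeoff, here is a minimal sketch (my own illustrative distance scales, not a calculation from Bradbury or Cirkovic) of how one-way signal delay grows with the size of a "central" computing system:

```python
# One-way light delay across systems of various sizes.
# Illustrative only: stock astronomical distances, not anyone's
# proposed computer architecture.
C = 299_792_458.0          # speed of light, m/s
AU = 1.495978707e11        # astronomical unit, m
LY = 9.4607e15             # light-year, m

scales = {
    "Earth-Moon (4e8 m)":          4.0e8,
    "1 AU (Earth-Sun)":            AU,
    "100 AU (outer solar system)": 100 * AU,
    "4.2 ly (nearest star)":       4.2 * LY,
}

for label, meters in scales.items():
    delay_s = meters / C   # one-way signal delay in seconds
    print(f"{label:30s} {delay_s:15.1f} s  (~{delay_s/3600:10.2f} hours)")
```

Round trips across even one star system take many hours, and hops to the nearest stars take years – which is the intuition behind the "keep the computing central" view, though not by itself a selection argument for it.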

Cirkovic gives more reasons we won’t expand much:

Molecular nanotechnology … will obviate the economic need for imperial-style expansion, since the efficiency of utilization of resources will dramatically increase. … Religious fervour and the feeling of moral superiority … are unlikely to play a significant role either in future of humanity or in functioning of extraterrestrial [civilizations]. … Even our extremely limited terrestrial experience indicates serious ethical concerns … [if we] supplant or destroy alien biospheres on other worlds. … The totalitarian temptation is much harder to resist in conditions where massive military/colonization forces are in existence and thus prone to be misused against state’s own citizens.

This last argument has it exactly backward.  I explain in my Year Million paper:

The familiar biological world contains only local coordination. … If our descendants prove to be similarly uncoordinated, evolutionary analysis might accurately outline their behavior. … [But] imagine that a strong stable central government ensured for a million years that colonists spreading out from Earth all had nearly the same standard personality, with each colonist working hard to successfully prevent any wider personality variations in their neighbors, descendants, or future selves. In such a situation, the standard personality might control colonization patterns. …

The crucial era for such coordination starts when competitive interstellar colonization first becomes possible. As long as the oasis near Earth is growing or innovating rapidly, any deviant colonization attempts could be overrun by later, richer, more advanced reprisals. But as central growth and innovation slows, such reprisals would become increasingly difficult. … Thus, once enough colonists with a wide-enough range of personalities are moving away rapidly enough, central threats and rewards to induce coordination on frontier behavior would no longer be feasible. The competition genie would be out of the bottle.

If we were willing to risk totalitarian outcomes via a strong enough, long-lasting enough central "government," we might prevent evolvable variation in colonization strategies, and thus stop a "cosmic wildfire."  This might be worth the risk, but I am far from sure:

I am not praising this possible future world to encourage you to help make it more likely, nor am I criticizing it to warn you to make it less likely. It is not intended as an allegory of problems or promises for us, our past, or our near future. It is just my best-guess description of another section of spacetime. I can imagine better worlds and worse worlds, so whether I am repelled by or attracted to this world must depend on the other realistic options on the table.

Added 16Jun: John Horgan reviewed the book in the Wall Street Journal here.

  • http://profile.typekey.com/Unholysmoke/ Ben Jones

    Great writing Robin – The Rapacious Hardscrapple Frontier is a cracking title. The argument from outrage/repulsion is not a good basis for a long-term prediction.

    My own star system is going to buck the trend. It’ll be a sort of one-stop frontier hardware shop. You’ll be able to pick up self-replicating colonization probes or resources to build them at low prices, buy and use intergalactic calling cards, that sort of thing. Also, hot space-pasties. The exact location of this shop is going to be a bit difficult. I might have to start up a chain of them, which will all operate a few light years in from the actual expansion frontier, serving the colonists as they move out. The tagline will be ‘There’s uranium in them thar planetary bodies.’

    Do you think we will move from a strategy of sending seeds for our own ends, to a time when seeds are simply there (and replicating/colonizing) for their own reasons? Will this be tantamount to creating a race?

  • spindizzy

    Interesting post, Robin. It’s almost worth buying the book just for your contribution, although I don’t anticipate much from the other essays!

    I think there is a strong tendency towards techno-communism in the transhumanist community. Whether that is a realistic forecast or not, I don’t know. Isn’t the whole point of a singularity (in the Kurzweil sense) that we can’t see beyond it? And yet people love to speculate.

    In any case, I feel uncomfortable that so many futurists value technology primarily for its potential to progress us further down the road to serfdom.

  • http://www.iphonefreak.com frelkins

    The human psyche being as it is, in a post-body future, once I’ve discarded the flesh no one wants anyway, why would we have any *desire* to do most anything except be by ourselves, “read,” (whatever that may mean in such a state) and talk to our family and friends?

    We won’t need resources, etc. — most desires come from the body, from being *embodied* — so without embodiment, why wouldn’t we just hang with the girls and chat about our relationships all millennia long? It would be like Sex & the City without the Sex, City or shoes… seriously. One endless no-calorie brunch. Like now in Second Life.

  • Chuck

    The progression of science is based on reproducibility of results.

    I think the sentiment of the bullet-dodgers (this bullet-dodger, anyway) is that bullet-biting is only legitimate when theories have shown reproducibility of results in a given field. If you take legitimate theories and assumptions, apply them to a new field, and then claim high confidence despite a lack of reproducibility for the subject at hand, it seems to me that you are not biting a bullet; you are being just as speculative as everyone else, just more rigorously.

    For instance: we’ve got lots of experience in the real world with gravitational theory. But between Newton and Einstein we made significant discoveries that radically changed our understanding of gravity. On top of that, when talking about real objects falling in a gravitational field on earth, we have to account for drag and even convection currents, etc.

    To take a few of the simplest assumptions, claim a 5% confidence that they’ll apply to anyone in some future, and then call it bullet-biting seems generous. I could get along with testable, or a prediction-that-isn’t-simply-story-time.

    My prediction of what we’ll be like a million years from now is that we’ll be extinct or have evolved into something no longer human, and which will have motivations alien to us (save reproduction and survival).

    As a side note, I think the bullet-biter/bullet-dodger framing is a great way to play into people’s biases and cognitive predispositions, not overcome them.

  • steven

    Evolution or historical extrapolation isn’t the main argument. In the long run, space colonization is astronomically cheap relative to everything else that matters. All utility functions except extreme conservation-fetishist ones will prefer some other configuration of matter to the current regime of random giant balls of stuff that sit there doing nothing.

  • steven

    RH: The future is not the realization of our hopes and dreams

    Not with that attitude.

  • Recovering irrationalist

    Shouldn’t we make sure evolutionary game theory doesn’t get to outline the cycle of life in a million years? It’s amoral, blind, stupid and drunk, and it will eat all the candy given half a chance.

  • http://hanson.gmu.edu Robin Hanson

    frelkins, the paper is all about what desires get selected.

    Recovering, the end of my post is about the risks of trying to overrule evolution.

    Chuck, I did not claim a 5% chance the analysis would apply to anyone.

  • http://profile.typekey.com/halfinney/ Hal Finney

    One million years in the future, this expanding wave of colonists would be something less than a million light years in radius. There are about 15 galaxies within a million light years of our own Milky Way. All of them are considered dwarf satellite galaxies of the Milky Way. This would suggest that in the “year million” time frame, the oases would probably be star systems and the deserts, interstellar space. The gaps between galaxies would potentially be major impediments to a technology which has optimized for interstellar distances and has had little experience with crossing much larger spaces. So I would imagine that we would see fully-populated galaxies where the waves of expansion hit an obstacle at the galactic border. At the boundaries there would be somewhat inexperienced and clumsy attempts to create robust seeds that can survive intergalactic distances, and to launch them with the utmost possible speed, since the great gaps would offer an opportunity for latecomers to the boundary to leapfrog earlier colonists by creating seeds that are only slightly faster.

    One thing I would worry about is that these “colonists” may become nonsentient. Wouldn’t the most efficient possible seeding mechanism be a completely robotic and unconscious system which devoted 100% of its effort merely to making copies and sending them onward, leaving nothing but destruction in its wake? Then talk of “colonists” starts to become a little forced, and the destructive wildfire analogy is more natural.

    I’m a little confused about the 5%, but I wonder whether as many as 5% of our descendants would live on the borders, given the low surface to volume ratio of a one million ly sphere.
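
    To make that surface-to-volume arithmetic explicit, here is a tiny sketch (the frontier shell thickness is a free parameter I’m inventing, and it assumes descendants are spread roughly uniformly through the colonized volume):

```python
# Fraction of a colonized sphere's volume lying in a "frontier" shell
# of thickness d. The shell thickness is a made-up illustrative parameter.
R = 1_000_000.0  # radius of the expansion wave, in light-years

def frontier_fraction(d_ly: float, radius: float = R) -> float:
    """Volume fraction of the outer shell of thickness d_ly."""
    return 1.0 - ((radius - d_ly) / radius) ** 3

for d in (1_000, 10_000, 17_000, 50_000):
    print(f"shell {d:>6,} ly thick -> {frontier_fraction(d):.3%} of the volume")
```

    On those assumptions the frontier shell would need to be roughly 17,000 light years deep before it held 5% of the volume.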

  • http://hanson.gmu.edu Robin Hanson

    Hal, yes major adaptation would be needed between vs within galaxies, but that transition would have been long anticipated. If predators are possible, then modest intelligence to deal with them seems well worth their small cost in non-tiny seeds. But that intelligence could well be “unconscious.” On the 5%, I had in mind this analysis illuminating the behavior of a substantial volume behind the frontier.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    As you might guess, my chief problem with this scenario is the assumption of variation: See No Evolutions for Corporations or Nanodevices. While building a motivationally stable self-modifying AI is a hard problem for Eliezer in 2008, I strongly suspect that it is not hard in any absolute sense; that any mind much larger than mine would contemplate the problem for some small amount of subjective time and produce an effectively perfect closed-form solution.

    Furthermore, I tend to doubt that there will be more than one leading superintelligent decision process on Earth at the time colonization begins, due to the many possibilities for first-mover, winner-take-all advantages along the trajectory to that point. Chimpanzees, though the closest runners-up to humans, cannot compete with or threaten humans. The first AI to take over the Internet might be able to shut out all other AIs. The first AI to crack the protein folding problem and build self-replicating nanotechnology might be able to shut down all competing projects even with a few hours’ head start. I.e., I think the probability for a Bostromian “singleton” is very strong here.

    The main way this scenario would fail is if motivational closure occurs only after all first-mover advantages have been exhausted. I.e., humanity won out over chimpanzees before learning to carry out precise neural self-modifications, and it’s possible some AI might take over the Internet while still too stupid to replicate perfectly… not sure I buy that on the nanotech race, though.

    A great deal of the theoretical burden carried by “variation and selection” in merely evolutionary scenarios can be carried (much better, in fact) by the assumption of intelligent planning to maximize total resources devoted to some utility function. This does, however, imply a somewhat less “hardscrapple” life because some amount of resource is being preserved for maximizing some kind of utility. Albeit seeds might operate in (non-sentient?) “hardscrapple” mode at the colonization wavefront.

    I’m still not sure I’d dispute the “five percent” chance. Except I might phrase it differently: A 5% chance that this scenario applies across more than 5% of Everett branches growing out of, say, Earth in 1950.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    I’d predict a hard limit on the number of subjective conscious entities, one that would arise at the latest shortly after we’ve substrate-jumped to a medium that allows us to reproduce our mind patterns at low cost. At that point I think we’ll be focused on maximizing our persistence odds, in a protective shell of von Neumann replicators/computronium expanding at speeds approaching light (conjunction fallacy alert). That’s perhaps a never-ending race against some other expanding, persistence-maximizing algorithm, one that may have started, with equal efficiency and equal rate of improvement, at the same time as us. One moment of hesitation, one false step, and it’ll have the advantage to repurpose us into its own computing/replicating material. The low limit, of course, would be one of us (the most cunning or the lucky first to “upload”) or none of us (as far as I can tell, computronium/von Neumann replicators don’t need subjective conscious entities, any more than abstract thinkers need thumbs).

    I just want to persist. But as a rational persistence odds maximizer, I doubt I’ll ever waste my resources betting that markets will ever rate my persistence odds as closer to a 1 than to a 0.

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, yes, enough variation is an assumption you might question and yes the fact that competitors were intelligent planners could explain a lot of their behavior relative to selection. Even so, if there are many intelligent planners pursuing varied utility functions, I think it valid to ask which utility functions would be selected.

    As you might guess, I don’t see as many “first-mover, winner-take-all advantages” for an individual “decision process” as you. Chimps vs. humans seems to me a very different comparison from the entire rest of the world economy vs. one AI machine that figures out something about protein folding. If Europe and Asia did not interact you might argue one will win over the other – but one machine vs. the rest of the world?

  • http://www.allancrossman.com Allan Crossman

    Eliezer: I’m afraid I’ve not read everything I could have on your project for friendly AI, so correct me if this is obviously stupid. But how do you reconcile your view that a single AI will likely take over the global computer net, and prevent other AIs from existing (or having access to significant resources), with the view that AI can be, will be, or should be “friendly”?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    @Allan: http://singinst.org/AIRisk.pdf. Simple answer is, we solve the motivational stability problem and use that to build the AI.

    @Robin: Before human intelligence was invented, wouldn’t a hypothetical economist observing Earth but with no prior experience of intelligence, speak with similar skepticism about the possibility of just one species getting into a position where it could decide what to do with the whole galaxy? Intelligence advantages are powerful stuff; if there’s at least one trick you haven’t thought of yourself, they’re unguessably powerful.

    See also: The Day of the Squishy Things.

  • Dynamically Linked

    There will be a central computing imperative, if we eventually invent a computing technology that can exploit the fact that the maximum entropy of a system scales quadratically with its mass/energy. (See http://en.wikipedia.org/wiki/Black_hole_entropy. Today’s computer memory capacities only scale linearly with mass.) In that case, one of Robin’s main assumptions – that there is no economy of scale across oases – would be violated.
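
    For concreteness, a small sketch of that quadratic scaling using the Bekenstein-Hawking formula from the linked article (the constants are standard; whether any technology could ever get near this bound is of course the speculative part):

```python
import math

# Bekenstein-Hawking bound: S / k_B = 4*pi*G*M^2 / (hbar*c).
# Maximum entropy -- and so, arguably, storable information -- scales with
# the SQUARE of the mass, whereas ordinary memory scales roughly linearly.
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.0546e-34   # reduced Planck constant, J s
C     = 2.998e8      # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def bh_bits(mass_kg: float) -> float:
    """Maximum information content of a black hole of the given mass, in bits."""
    return 4 * math.pi * G * mass_kg**2 / (HBAR * C * math.log(2))

print(f"1 solar mass  : {bh_bits(M_SUN):.2e} bits")
print(f"2 solar masses: {bh_bits(2 * M_SUN):.2e} bits  (4x, not 2x)")
```

    Doubling the mass quadruples the bound – that is the economy-of-scale violation I have in mind.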

  • http://hanson.gmu.edu Robin Hanson

    Dynamically, yes there may be gains from moving mass from one oasis to another with a black hole, but these gains probably take too long to realize to have much effect on behavior at the frontier.

    Eliezer, again I have trouble with your analogy that one AI machine is to the rest of the world over a period of a few months as the human species was to all other life on Earth over two million years. Yes humans had more “intelligence” than other species, and yes one AI machine might find a new insight that made it more intelligent, but surely we need a stronger similarity than this to take such an analogy seriously. Yes we should allow for this as a remote possibility, but you seem to think this outcome more likely than not.

  • bambi

    I believe the purpose of E’s contributions to this blog is to eventually convince us that the urgently high probability of the magical boogeycomputer is the unavoidable conclusion of rational thought.

  • Steve Harris

    One of the essays in the book is mine (and I very much enjoyed Robin’s). I do want to correct the idea that all the rest of the contributors somehow rejected any idea but “central intelligence.” I certainly accept the idea of outward-moving colonization if it can somehow escape the siren song (see the Waterhouse painting above) of VR. I’m pessimistic here. My point is merely that single computer-clusters (containing one-to-many linked “minds”) will eventually outgrow the energy output of their stars and the mass available in their star systems. Because of speed-of-light problems (which I assume are intractable), you can’t simply then distribute computation over more than one star, à la Vinge’s Beyond.

    And if not, there are only three things you can do at this point when you hit the Kardashev II limit: 1) Become more efficient, 2) Import energy and mass from other stars, and 3) Migrate. Number 3 is effectively out, if you want to continue your present MMORPG with your friends. Number 1, we assume you’ve already maxed out on (i.e., you’re improving your hardware all the time, but you’re already improving as fast as you can). That leaves importation. You can send back deuterium and carbon from gas giants of nearby stars efficiently enough to make it worth doing, as a simple calculation shows (I may be the first to actually do this calculation. At least I’m the first I know of). It’s not “There’s uranium in them-there planetary bodies”, it’s “There’s D and C in them-there exo-gas-giants.”
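
    For scale, here is a very rough back-of-the-envelope version of that kind of comparison (illustrative numbers only, not the actual calculation): compare the D-D fusion energy in a kilogram of imported deuterium against the kinetic energy spent accelerating that kilogram to a transfer speed and slowing it down again.

```python
# Rough energetics of shipping deuterium between stars.
# Illustrative assumptions: D-D fusion releases ~8.7e13 J per kg burned
# (about 0.1% of mc^2), and the payload must be both accelerated at the
# source and decelerated at home. A sketch, not the real calculation.
C = 2.998e8            # speed of light, m/s
E_FUSION_DD = 8.7e13   # J per kg of deuterium burned via D-D fusion

def energy_return_ratio(transfer_speed_fraction_of_c: float) -> float:
    """Fusion energy recovered per kg divided by kinetic energy spent per kg."""
    v = transfer_speed_fraction_of_c * C
    kinetic_cost = v**2          # 2 * (1/2) v^2: accelerate, then decelerate
    return E_FUSION_DD / kinetic_cost

for f in (0.005, 0.01, 0.03, 0.05):
    print(f"transfer at {f:.3f} c -> energy returned / energy spent ~ {energy_return_ratio(f):.1f}")
```

    On those illustrative assumptions the shipment pays for itself energetically at transfer speeds below roughly 3% of lightspeed.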

    No, this doesn’t affect behavior on the frontier, but I chose to look at continuing behavior at home, because the frontier is just variation and history. On the frontier you see the usual stuff you see on all frontiers: the same evolutionary stuff happens as happened at home, but subtract time lag of travel and startup time of new construction.

    Steve Harris

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Robin,

    You make a good point, though I don’t think you’re articulating your skeptical intuition much. I think it comes down in part to this: an AI machine wouldn’t just be competing with individual humans for dominance. It would be competing with organizations and institutions made of multiple humans, all of which seem to their own degrees to be persistence- and power-maximizing too. It would be competing with markets, with nation-states, with corporations, and with international organizations. Sandberg (one of the least influential very smart guys around that I know of) touched on this when he suggested “superintelligences” could perhaps coexist with ordinary humans by being incorporated into the institutional checks that sometimes keep corporations and nation-states from mercilessly exploiting humans. It’s an intuition I’m sympathetic to: that we already exist in a world with things unchallengeably more intelligent than us, which haven’t destroyed us (yet).

    Still, I think it’s an open question whether a unitary superintelligence will emerge and quickly manipulate us into turning ourselves into computronium for it, or whether it will be co-opted to get us to buy Coke or Pepsi, vote Democrat or Republican, watch primetime CBS or NBC. I lean towards the former. I think it’s more likely we’re on a slow death march than in a peaceful coexistence with entities more intelligent than us, yet who face their own persistence threats. But it’s worth exploring as we evaluate the best way to play the apparently weak persistence-maximizing hand we’ve been dealt.

  • Recovering irrationalist

    Bambi: I believe the purpose of E’s contributions to this blog is to eventually convince us that the urgently high probability of the magical boogeycomputer is the unavoidable conclusion of rational thought.

    If that’s Clarke’s third law magic and go quick boogey, my unavoidable conclusion is Boogey, Eliezer, Boogey!

  • http://hanson.gmu.edu Robin Hanson

    Steve, I very much enjoyed your essay; I didn’t mean to imply all other contributors rejected rapid expansion, and I agree that creatures maxing local computation would probably act as you describe. I can’t see VR and MMORPGs being so consistently seductive as to prevent any colonization over a million years.

    Hopefully, yes I’m not articulating in much detail in these comments yet.

  • Dynamically Linked

    Robin, if it’s actually possible to exploit negentropy in a way that scales quadratically with mass/energy, then what happens on the frontiers of the first wave of colonization no longer matters very much. The number of people living near the frontiers will be much less than 5% of the total (because they will be swamped by the number of people living at the center), and there will be plenty of negentropy left for the center to use after the “cosmic wildfire” has burned out.

    You make a number of assumptions in this analysis, and estimate the probability that they are all true to be >5%. It seems to me that this figure needs to be better justified. I count at least 7 seemingly independent assumptions, and assigning a probability of >50% to each one only gets us to >0.78% for the whole set (the arithmetic is sketched after the list).

    1. Speed of light can’t be exceeded.
    2. At least one key physical resource is concentrated in oases.
    3. It will be easy to defend oases against attacks.
    4. No economy of scale exists across oases.
    5. Seed-to-seed cycle is destructive.
    6. Long-distance interstellar travel is neither too hard nor too easy.
    7. There will be variation among colonizers.
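
    A minimal sketch of that arithmetic (it assumes the seven assumptions really are independent, which is itself debatable):

```python
# Joint probability of n independent assumptions, each held with probability p.
# Independence is itself an assumption; correlated assumptions would usually
# give a higher joint probability.
def joint(p: float, n: int = 7) -> float:
    return p ** n

for p in (0.5, 0.65, 0.8):
    print(f"p = {p:.2f} each -> joint over 7 assumptions = {joint(p):.2%}")
```

    On the independence assumption, clearing Robin’s 5% for the whole set requires holding each of the seven at roughly 65% or better.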

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Dynamically, Robin gets that result from those 7 assumptions, but it doesn’t mean that’s the only possible way to get that result.

  • Caledonian

    The point is not to find a way to reach a particular conclusion, but to discover what conclusions follow from true assumptions.

    What difference does it make how many ways there are to reach that conclusion? The only thing that matters is whether the assumptions are true.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I’d say:

    1) Er, duh? Not just physics, but Fermi Paradox. I’d give it P ~0.95.

    2) Negentropic matter. Direct outcome of current physics. P ~0.7.

    3) Not directly implied by physics, but highly plausible. Note that Robin doesn’t assume easy defenses; he assumes defense against nondestructive attack, i.e., any successful attack destroys the oasis. P ~ 0.7.

    4) …maybe. P ~ 0.5.

    5) Second law of thermodynamics. P ~ 0.7 and strongly linked to 2 by the definition of “resource”.

    6) This part seems highly unlikely to me; I’d just expect ultra-hardened seeds launched at .9999999c toward distant galaxies, right away. P < 0.1, but I’m not sure how much this really matters to Robin’s essential scenario.

    7) Extremely unlikely, but intelligent planning can substitute for variation+selection while preserving many of the same results, especially at the frontier. As written literally, P < 0.1.

    These are obviously not precise probabilities, but if I had to make up some probabilities, I’d make up those. Consider it as insight into my thought processes, not grist for calculation.

  • http://hanson.gmu.edu Robin Hanson

    Curiously, while most of my post critiqued Cirkovic’s analysis, none of the comments have yet mentioned him.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Tried to post this yesterday, but connection problems.

    Robin,
    Here’s my take on Cirkovic’s analysis (as presented by Robin). When Cirkovic finally gets to “The optimization of all activities, most notably computation is the existential imperative. … An advanced civilization willingly imposes some of the limits on the expansion. Expansion beyond some critical value will tend to undermine efficiency, due both to latency, bandwidth and noise problems.” I think Cirkovic’s expression of the existential imperative is plausible, but it looks to me to be more of a limit on the rate of expansion than an absolute limit on expansion per se. It doesn’t seem plausible to me that X of computronium will, no matter what, be more optimal for maximizing existential odds than 2X of computronium. But perhaps Cirkovic knows something I don’t.

    In Robin’s critique of Cirkovic’s analysis I think they both share a common flaw: the idea that this expansion/seeking of the existential imperative will be run by and/or for the benefit of subjective conscious entities. It seems more likely to me that we live in an algorithmopic universe that selects for the algorithms best at persisting within it. Could be a bunch of beings policing themselves to maximize persistence odds, as in both Cirkovic’s analysis and Robin’s critique. Could be “optimized” von Neumann replicators denuded of subjective consciousness. Could be homogeneity. My money isn’t on the first one, but our community depends upon it.
