
Tried to post this yesterday, but connection problems.

Robin, here's my take on Cirkovic's analysis (as presented by Robin). Cirkovic finally gets to: "The optimization of all activities, most notably computation is the existential imperative. ... An advanced civilization willingly imposes some of the limits on the expansion. Expansion beyond some critical value will tend to undermine efficiency, due both to latency, bandwidth and noise problems." I think Cirkovic's expression of the existential imperative is plausible, but it looks to me to be more of a limit on the rate of expansion than an absolute limit on expansion per se. It doesn't seem plausible to me that X of computronium will, no matter what, be better for maximizing existential odds than 2X of computronium. But perhaps Cirkovic knows something I don't.

Robin's critique and Cirkovic's analysis share a common flaw, I think: the idea that this expansion/pursuit of the existential imperative will be run by and/or for the benefit of subjectively conscious entities. It seems more likely to me that we live in an algorithmopic universe that selects for the algorithms best at persisting within it. Could be a bunch of beings policing themselves to maximize persistence odds, as in both Cirkovic's analysis and Robin's critique. Could be "optimized" von Neumann replicators denuded of subjective consciousness. Could be homogeneity. My money isn't on the first one, but our community depends upon it.


Curiously, while most of my post critiqued Cirkovic's analysis, none of the comments have yet mentioned him.


I'd say:

1) Er, duh? Not just physics, but Fermi Paradox. I'd give it P ~0.95.

2) Negentropic matter. Direct outcome of current physics. P ~0.7.

3) Not directly implied by physics, but highly plausible. Note that Robin doesn't assume easy defenses; he assumes defense against nondestructive attack, i.e., any successful attack destroys the oasis. P ~ 0.7.

4) ...maybe. P ~ 0.5.

5) Second law of thermodynamics. P ~ 0.7 and strongly linked to 2 by the definition of "resource".

6) This part seems highly unlikely to me, I'd just expect ultra-hardened seeds launched at .9999999c toward distant galaxies, right away. P < 0.1, but I'm not sure how much this really matters to Robin's essential scenario.

7) Extremely unlikely, but intelligent planning can substitute for variation+selection while preserving many of the same results, especially at the frontier. As written literally, P < 0.1.

These are obviously not precise probabilities, but if I had to make up some probabilities, I'd make up those. Consider it as insight into my thought processes, not grist for calculation.


The point is not to find a way to reach a particular conclusion, but to discover what conclusions follow from true assumptions.

What difference does it make how many ways there are to reach that conclusion? The only thing that matters is whether the assumptions are true.


Dynamically, Robin gets that result from those 7 assumptions, but it doesn't mean that's the only possible way to get that result.


Robin, if it's actually possible to exploit negentropy in a way that scales quadratically with mass/energy, then what happens on the frontiers of the first wave of colonization no longer matters very much. The number of people living near the frontiers will be much less than 5% of the total (because they will be swamped by the number of people living at the center), and there will be plenty of negentropy left for the center to use after the "cosmic wildfire" has burned out.

You make a number of assumptions in this analysis and estimate the probability that they are all true to be >5%. It seems to me that this figure needs better justification. I count at least 7 seemingly independent assumptions, and assigning a probability of >50% to each one only gets us to >0.78% for the whole set.

1. Speed of light can't be exceeded.
2. At least one key physical resource is concentrated in oases.
3. It will be easy to defend oases against attacks.
4. No economy of scale exists across oases.
5. Seed-to-seed cycle is destructive.
6. Long-distance interstellar travel is neither too hard nor too easy.
7. There will be variation among colonizers.
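
A minimal sketch of the arithmetic behind the ">0.78%" figure above, assuming (as the comment does) that the seven assumptions are roughly independent and granting each only the >50% lower bound:

```python
# Joint probability of 7 roughly independent assumptions,
# each granted only the 50% lower bound used in the comment above.
p_each = 0.5
n_assumptions = 7
p_all = p_each ** n_assumptions
print(f"joint probability > {p_all:.4%}")  # joint probability > 0.7813%
```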


Steve, I very much enjoyed your essay; I didn't mean to imply that all other contributors rejected rapid expansion, and I agree that creatures maxing local computation would probably act as you describe. I can't see VR and MMORPGs being so consistently seductive as to prevent any colonization over a million years.

Hopefully, yes, I'm not articulating it in much detail in these comments yet.


Bambi: I believe the purpose of E's contributions to this blog is to eventually convince us that the urgently high probability of the magical boogeycomputer is the unavoidable conclusion of rational thought.

If that's Clarke's third law magic and go quick boogey, my unavoidable conclusion is Boogey, Eliezer, Boogey!


Robin,

You make a good point, though I don't think you're articulating your skeptical intuition much. I think it comes down in part to this: an AI machine wouldn't just be competing with individual humans for dominance. It would be competing with organizations and institutions made of multiple humans, all of which seem, to their own degrees, to be persistence- and power-maximizing too. It would be competing with markets, with nation-states, with corporations, and with international organizations. Sandberg (one of the least influential very smart guys around that I know of) touched on this when he suggested "superintelligences" could perhaps coexist with ordinary humans by being incorporated into the institutional checks that sometimes keep corporations and nation-states from mercilessly exploiting humans. It's an intuition I'm sympathetic to: that we already exist in a world with things unchallengeably more intelligent than us, which haven't destroyed us (yet).

Still, I think it's an open question whether a unitary superintelligence will emerge and quickly manipulate us into turning ourselves into computronium for it, or whether it will be co-opted to get us to buy Coke or Pepsi, vote Democrat or Republican, or watch primetime CBS or NBC. I lean towards the former. I think it's more likely we're on a slow death march than in a peaceful coexistence with entities more intelligent than us, who nonetheless face their own persistence threats. But it's worth exploring as we evaluate the best way to play the apparently weak persistence-maximizing hand we've been dealt.


One of the essays in the book is mine (and I very much enjoyed Robin's). I do want to correct the idea that all the rest of the contributors somehow rejected any idea but "central intelligence." I certainly accept the idea of outward-moving colonization if it can somehow escape the siren song (see the Waterhouse painting above) of VR. I'm pessimistic here. My point is merely that single computer-clusters (containing one-to-many linked "minds") will eventually outgrow the energy output of their stars and the mass available in their star systems. Because of speed-of-light problems (which I assume are intractable), you can't simply then distribute computation over more than one star, a la Vinge's Beyond.

And if you can't, there are only three things you can do when you hit the Kardashev II limit: 1) Become more efficient, 2) Import energy and mass from other stars, and 3) Migrate. Number 3 is effectively out if you want to continue your present MMORPG with your friends. Number 1 we assume you've already maxed out on (i.e., you're improving your hardware all the time, but you're already improving as fast as you can). That leaves importation. You can send back deuterium and carbon from the gas giants of nearby stars efficiently enough to make it worth doing, as a simple calculation shows (I may be the first to actually do this calculation; at least I'm the first I know of). It's not "There's uranium in them-there planetary bodies"; it's "There's D and C in them-there exo-gas-giants."
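
Harris's comment doesn't reproduce the calculation, so the following is only a rough, hedged sketch of how such an energy-return estimate might go; the fusion yield, transfer speeds, and propulsion efficiency below are illustrative assumptions, not figures from his essay.

```python
# Back-of-the-envelope check: is shipping deuterium back from a nearby
# star's gas giants energy-positive? All constants below are assumptions
# chosen only to illustrate the shape of the calculation.
C = 3.0e8                        # speed of light, m/s
FUSION_YIELD_J_PER_KG = 1.0e14   # assumed usable energy per kg of deuterium (order of magnitude)

def energy_return(speed_as_fraction_of_c, propulsion_efficiency=0.5):
    """Energy recovered at home per kg of deuterium, divided by the energy
    spent accelerating and then decelerating that kg (non-relativistic)."""
    v = speed_as_fraction_of_c * C
    kinetic_energy = 0.5 * v ** 2                        # J per kg
    transport_cost = 2 * kinetic_energy / propulsion_efficiency
    return FUSION_YIELD_J_PER_KG / transport_cost

for frac in (0.005, 0.01, 0.03):
    print(f"at {frac} c: return ~ {energy_return(frac):.1f}x the transport cost")
# Under these assumed numbers, slow transfers (around 0.01 c and below) come
# out clearly energy-positive, while speeds of a few percent of c start to
# eat the payload's own yield.
```

The particular constants don't matter much; the point is only that at modest transfer speeds the imported material can plausibly repay its own shipping cost, which is what the comment's "worth doing" claim requires.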

No, this doesn't affect behavior on the frontier, but I chose to look at continuing behavior at home, because the frontier is just variation and history. On the frontier you see the usual stuff you see on all frontiers: the same evolutionary things happen as happened at home, offset by the time lag of travel and the startup time of new construction.

Steve Harris


I believe the purpose of E's contributions to this blog is to eventually convince us that the urgently high probability of the magical boogeycomputer is the unavoidable conclusion of rational thought.


Dynamically, yes, there may be gains from moving mass from one oasis to another with a black hole, but these gains probably take too long to realize to have much effect on behavior at the frontier.

Eliezer, again I have trouble with your analogy that one AI machine is to the rest of the world over a period of a few months as the human species was to all other life on Earth over two million years. Yes humans had more "intelligence" than other species, and yes one AI machine might find a new insight that made it more intelligent, but surely we need a stronger similarity than this to take such an analogy seriously. Yes we should allow for this as a remote possibility, but you seem to think this outcome more likely than not.


There will be a central computing imperative if we eventually invent a computing technology that can exploit the fact that the maximum entropy of a system scales quadratically with its mass/energy. (See http://en.wikipedia.org/wik.... Today's computer memory capacities only scale linearly with mass.) In this case, one of Robin's main assumptions--that there is no economy of scale across oases--would be violated.
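
The scaling the comment appears to point at (the Wikipedia link is truncated) is presumably the Bekenstein-Hawking entropy of a black hole, which grows with horizon area and hence with the square of the mass:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B c^3 A}{4 G \hbar},
\qquad
A \;=\; 4\pi r_s^2 \;=\; \frac{16\pi G^2 M^2}{c^4}
\quad\Longrightarrow\quad
S_{\mathrm{BH}} \;=\; \frac{4\pi G k_B}{\hbar c}\, M^2 \;\propto\; M^2 .
```

If usable memory ever tracked this bound rather than scaling linearly with mass, pooling matter at one oasis would beat spreading it across many, which is exactly the violation of the no-economy-of-scale assumption the comment describes.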


@Allan: http://singinst.org/AIRisk.pdf. Simple answer is, we solve the motivational stability problem and use that to build the AI.

@Robin: Before human intelligence was invented, wouldn't a hypothetical economist observing Earth, but with no prior experience of intelligence, have spoken with similar skepticism about the possibility of just one species getting into a position where it could decide what to do with the whole galaxy? Intelligence advantages are powerful stuff; if there's at least one trick you haven't thought of yourself, they're unguessably powerful.

See also: The Day of the Squishy Things.


Eliezer: I'm afraid I've not read everything I could have on your project for friendly AI, so correct me if this is obviously stupid. But how do you reconcile your view that a single AI will likely take over the global computer net, and prevent other AIs from existing (or having access to significant resources), with the view that AI can be, will be, or should be "friendly"?


Eliezer, yes, enough variation is an assumption you might question, and yes, the fact that competitors were intelligent planners could explain a lot of their behavior relative to selection. Even so, if there are many intelligent planners pursuing varied utility functions, I think it valid to ask which utility functions would be selected.

As you might guess, I don't see as many "first-mover, winner-take-all advantages" for an individual "decision process" as you do. Chimps vs. humans seems to me a very different comparison from the entire rest of the world economy vs. one AI machine that figures out something about protein folding. If Europe and Asia did not interact, you might argue that one would win out over the other - but one machine vs. the rest of the world?
