> It is a critique of the intelligence explosion roughly as much as it is a critique of any prediction of far-reaching human expansion.

How do you get that? The intelligence explosion would seem to imply a far-reaching expansion more strongly than a far-reaching expansion implies an intelligence explosion. (At least sci-fi authors seem to agree.) But in any event, it's not obvious to me that they're comparable.

Katja: Thanks. I've posted in the other thread describing my view more exactly. I just treat the question "where we are" as "what's around us"; when a theory provides for several observers, I treat it as several theories, each with a specific observer. Those theories may have sensible relations between their priors, or not. (A theory can have a prior by construction - e.g. a theory that randomly guesses n bits is 2^-n improbable or worse.)

edit: and SSA pretty much arbitrarily ignores evidence by lumping together things into a "reference class".

Gwern,

We make some specific observations: we are alive, we are billions of years into the history of the universe, and we don't see any aliens. Either an alien colonization wave reaching the Earth or visible signs of aliens would have produced different observations. An increased tendency for aliens to evolve and signal or spread would lower the frequency of young civilizations with observations like ours in a given region of space. So we get the Fermi paradox, by SIA.

If you want to (a la some forms of SSA) consider the observations of a representative sample of randomly selected young civilizations (drawn from a list of such civilizations throughout the history of the universe), our observations would be more atypical if aliens evolving to spread or signal were common: most civilizations would exist earlier in the history of the universe, before there had been time for colonization waves or visible signals to reach us.

So by the same logic used by Bostrom and Tegmark to rule out frequent astrophysical catastrophes destroying most planets like ours, one can infer that it is very unlikely that colonization waves pre-empt almost all observations like ours: http://arxiv.org/abs/astro-...

The same logic has been applied to a vacuum transition destroying everything with lightspeed expansion. There's no relevant difference between that and an alien colonization wave pre-empting our observations in either of these frameworks. So we still have the Fermi paradox.
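For concreteness, here is a minimal numerical sketch of the update being described; the two hypotheses and their pre-emption fractions are made-up illustrative numbers, not figures from the paper.

```python
# A toy Bayesian update illustrating the argument above.  All numbers are
# invented; the point is only that the likelihood of "observers like us
# exist, un-pre-empted" scales with the fraction of young civilizations NOT
# yet overrun by a colonization wave (or destroyed by a vacuum transition --
# the math is the same).

# Hypotheses: fraction of observations like ours that get pre-empted.
hypotheses = {
    "pre-emption is rare (10%)":   0.10,
    "pre-emption is common (99%)": 0.99,
}

prior = {h: 0.5 for h in hypotheses}  # indifferent prior

# Likelihood of our data ("we are an early civilization that has not been
# pre-empted") under each hypothesis, up to a common constant.
likelihood = {h: 1.0 - f for h, f in hypotheses.items()}

unnormalized = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for h, p in posterior.items():
    print(f"{h}: posterior = {p:.3f}")
# The "common" hypothesis drops from 0.5 to ~0.01: finding ourselves
# un-pre-empted is strong evidence against frequent colonization waves,
# just as Bostrom & Tegmark argue for frequent astrophysical catastrophes.
```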

Because Gwern asked me to chime in:

According to SIA, and to me: not being killed carries the same information as not observing X, if the scenarios are the same except that being killed is replaced by observing X. 

According to SSA: not being killed tells you nothing, if your reference class is people who are alive now. If your reference class is people who were alive at some point, then not being killed is informative.
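A toy example, with invented numbers, of how the three rules treat the same survival datum:

```python
# Two equally likely worlds contain 10 people each.  In world A, 1 person is
# killed (or "observes X"); in world B, 9 people are.  You notice that you
# were not killed (did not observe X).

worlds = {"A": {"total": 10, "killed": 1},
          "B": {"total": 10, "killed": 9}}
prior = {"A": 0.5, "B": 0.5}

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# SIA: weight each world by the number of observers in your situation
# (survivors), then normalize.
sia = normalize({w: prior[w] * (v["total"] - v["killed"])
                 for w, v in worlds.items()})

# SSA, reference class = "people alive now": everyone in the class survived,
# so surviving has likelihood 1 in both worlds and the prior is unchanged.
ssa_alive_now = normalize({w: prior[w] * 1.0 for w in worlds})

# SSA, reference class = "people who were alive at some point": the chance
# that a random class member survived differs between worlds.
ssa_ever_alive = normalize({w: prior[w] * (v["total"] - v["killed"]) / v["total"]
                            for w, v in worlds.items()})

print("SIA:              ", sia)             # {'A': 0.9, 'B': 0.1}
print("SSA (alive now):  ", ssa_alive_now)   # {'A': 0.5, 'B': 0.5}
print("SSA (ever alive): ", ssa_ever_alive)  # {'A': 0.9, 'B': 0.1}
```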

It is a critique of the intelligence explosion roughly as much as it is a critique of any prediction of far-reaching human expansion.

> I don't view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is *still* very puzzling - it's already very mysterious.

Perhaps you're not looking in the right place. The fact that the prior is already low tells you nothing about the likelihood ratio.

AFAIK, all SIAI personnel think, and have always thought, that UFAI cannot possibly explain the Great Filter; the possibility of an intelligence explosion, Friendly or unFriendly or global-economic-based or what-have-you, resembles the prospect of molecular nanotechnology in that it makes the Great Filter more puzzling, not less. I don't view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is *still* very puzzling - it's already very mysterious.

I published that blog post in February. About 2 months later the formation of Planetary Resources was announced. Also in April a pdf was published: http://kiss.caltech.edu/stu... Many of the co-authors of this paper are part of the Planetary Resources team.

I had suggested some asteroids could be parked in high lunar orbit for as little as 0.3 km/s. The pdf points to an asteroid that can be retrieved with as little as 0.17 km/s. It was gratifying to see this paper back up my numbers and show I was even being pessimistic.

On the bottom of page 15 of the KISS pdf they talk about safety. They advocate (as I did) retrieving asteroids small enough to harmlessly burn up in the upper atmosphere should the rock hit the earth. Their first asteroid mined will likely be a water-rich asteroid (as I mentioned, propellant high on the slopes would break the exponent in Tsiolkovsky's equation). The 2nd substance mined might be PGMs, not iron. See http://www.planetaryresourc...
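For readers unfamiliar with why a few tenths of a km/s matters so much, here is a small sketch of the propellant cost implied by Tsiolkovsky's equation; the ~4.4 km/s exhaust velocity (LOX/LH2, Isp ~450 s) is my assumption, not a number from the post or the KISS paper.

```python
from math import exp

# Tsiolkovsky's rocket equation, rearranged: for a desired delta-v and an
# exhaust velocity v_e, the propellant needed per kg of payload is
#   m_prop / m_payload = exp(delta_v / v_e) - 1.

V_E = 4.4  # km/s, assumed exhaust velocity for a LOX/LH2 engine

def propellant_per_kg_payload(delta_v_km_s, v_e=V_E):
    return exp(delta_v_km_s / v_e) - 1

for dv in (0.17, 0.3, 1.0, 6.0):
    print(f"delta-v {dv:>4} km/s -> "
          f"{propellant_per_kg_payload(dv):.2f} kg propellant per kg payload")
# 0.17 km/s needs ~0.04 kg of propellant per kg moved, 0.3 km/s ~0.07 kg,
# while a ~6 km/s trip needs ~2.9 kg -- the exponent is what makes
# low-delta-v asteroid retrieval so much cheaper.
```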

 Thanks for sharing.

I'm not qualified to evaluate your math over Murphy's. He did mention that he made some simplifications, but, as you point out, given the exponential nature of the rocket equation even a small difference in the delta-v budget can make a great difference in propellant mass.

The idea of using the lunar ice caps as a source of fuel seems interesting, but it is certainly highly speculative at this point. Assuming it is technically feasible, it would require the construction of massive solar- or nuclear-powered infrastructure on the Moon in order to extract the ice and split it into oxygen and hydrogen.

Anyway, travel to Mars and back, considering aerobraking, all the orbital mechanics tricks and even possibly lunar refueling, is always going to cost lots of energy, all to get to a barren planet without large ore resources and nothing worth the cost of bringing back.

As for asteroid mining, you mention that capturing pieces of ~20 m diameter extracted from some near-Earth asteroids when they pass close to the Earth-Moon Lagrange points could cost ~1 km/s of delta-v.

Assuming this calculation is correct, and that the object is mostly iron, it has a mass of 6.3*10^7 kg. Assuming an oxygen-hydrogen propellant and ignoring any additional overhead, the rocket equation says you need 1.6*10^7 kg of propellant, which is about 2.7*10^6 kg of hydrogen and has an energy content of 3.3*10^14 J.

At the liquid hydrogen price of 3.6 $/kg reported here (which refers to 1980 and is not adjusted for inflation): http://www.astronautix.com/... the fuel would cost ~10 million dollars. Iron ore price depends on the quality, but is well below 1.0 $/kg, making the asteroid mining business unfeasible just due to fuel costs.

Even if technological developments could cut the costs of fuel production, and all the other huge costs as well, 6.3*10^7 kg of iron ore every now and then would be a minuscule fraction of the world annual production, which amounts to 2.4*10^11 kg. The business would lack economies of scale relative to the conventional iron mining business.
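For what it's worth, a short sketch that re-runs the arithmetic above under the same stated assumptions; the exhaust velocity and the oxidizer-to-fuel ratio are my own guesses, chosen only to match the figures quoted.

```python
from math import exp

# Re-running the comment's arithmetic under its stated assumptions.
# Assumed by me (not stated above): exhaust velocity ~4.4 km/s for LOX/LH2
# and an oxidizer-to-fuel mass ratio of about 5:1.

m_asteroid = 6.3e7   # kg, ~20 m iron-rich object (figure from the comment)
delta_v    = 1.0     # km/s
v_e        = 4.4     # km/s, assumed LOX/LH2 exhaust velocity
o_to_f     = 5.0     # assumed oxidizer-to-fuel mass ratio

m_prop = m_asteroid * (exp(delta_v / v_e) - 1)   # ~1.6e7 kg propellant
m_h2   = m_prop / (1 + o_to_f)                   # ~2.7e6 kg hydrogen
energy = m_h2 * 120e6                            # J, ~120 MJ/kg heating value
fuel_cost = m_h2 * 3.6                           # USD, 1980 LH2 price above
world_iron_fraction = m_asteroid / 2.4e11        # vs. annual ore production

print(f"propellant: {m_prop:.2e} kg, hydrogen: {m_h2:.2e} kg")
print(f"hydrogen energy content: {energy:.2e} J")
print(f"fuel cost: ${fuel_cost / 1e6:.1f} million")
print(f"fraction of world annual iron ore output: {world_iron_fraction:.4%}")
```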

"Yes, it is possible that the extremely difficultly was life’s origin" <--- what?

Repost from g+

Well, b-duhh, isn't it obvious that we live in a simulated universe, where we ourselves are the object of simulation? Or rather, what is being studied is the birth of super-AI? So, of course, the starting point is a universe devoid of super-AI. :-)

Oh, b-duhh, just to save a bit of energy and expense on compute power for such a simulation, the simulators have avoided generating an excess of entropy, and are thus using quantum to explore multiple possibilities for no extra cost. Right? Isn't this where the facts are pointing?

On the other hand, we also don't know what fraction of brown dwarfs are super-AI's. I mean, for all we know, they are just playing WoW all day long, navel-staring, and not pursuing universal domination.

Why would they do that? Perhaps they're psychotic? Evolution has very carefully crafted the human brain to be functional, yet it's clear by looking around that, in a certain sense, everyone is a bit crazy. And some fraction of humanity really is certifiably crazy. Perhaps sanity is not automatic, but something that must be carefully tuned and adjusted: an unstable fixed point, a thing to be almost lost at any time, by the smallest perturbation. So maybe all advanced AI's simply go insane before they get to planetary scale. Who's to argue otherwise?

Consider only the planets in our galaxy with liquid water and we are already down to around 10^10 candidates.
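One illustrative way to land near that order of magnitude (both inputs are rough guesses of mine, not figures from the post):

```python
# Back-of-envelope only; the star count and the liquid-water fraction are
# illustrative assumptions.
stars_in_galaxy        = 2e11   # rough Milky Way star count
frac_with_water_planet = 0.05   # purely illustrative guess
print(f"{stars_in_galaxy * frac_with_water_planet:.0e} candidate planets")  # ~1e10
```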

Edit: Disregard the earlier comment. Actually seems to work in GNU Octave but not Google. No idea how Google gets its 0.05332034881 result.

Edit 2: GNU Octave uses a natural log for log(). Google uses a base-10 log for log(). Using ln() in Google reproduces the intended result.
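For reference, a minimal illustration of the base discrepancy (in Python rather than Octave):

```python
import math

# Octave's log() is the natural log; some calculators read "log" as base-10.
# The two differ by a constant factor of ln(10).
x = 42.0
print(math.log(x))                 # natural log, what GNU Octave's log() returns
print(math.log10(x))               # base-10 log, what Google's log() returns
print(math.log(x) / math.log(10))  # base-10 log recovered from the natural log
```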

gwern: Whatever, at least you finally seem to have got it about the firing squad, even if you still don't get that it makes no difference whether the super-AIs expand at near the speed of light and kill us, or are merely visible.

 > And because we "know of no such thing" (have probabilistic knowledge), the survival is to be processed differently from not seeing some odd light, or what?

Er, yeah. If you don't have certain data, you don't reach certain conclusions. Shocking, I know.

gwern: And because we "know of no such thing" (have probabilistic knowledge), the survival is to be processed differently from not seeing some odd light, or what?
