# Guess Alien Value, Chance Ratios

Continuing the discussion about yelling to aliens at Cato Unbound, I ask:

Regarding a choice to yell on purpose, there are two key parameters: a value ratio and a chance ratio.

The value ratio divides the loss we would suffer if exterminated by aliens by the gain we would achieve if friendly aliens were to send us helpful info. I’d guess this ratio is at least one thousand. The chance ratio divides the chance that yelling induces an alien to send helpful info by the chance that yelling induces an alien to destroy us. I’d guess this ratio is less than one hundred.

If we can neglect our cost or value regarding the yelling process, then we need only compare these ratios. If the value ratio is larger than the chance ratio, yelling is a bad idea. If the value ratio is smaller than the chance ratio, yelling is a good idea. Since I estimate the value ratio to be larger than the chance ratio, I estimate yelling to be a bad idea. If you disagree with me, I want to hear your best estimates for these ratios.
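This decision rule reduces to a one-line comparison: since the expected value of yelling is P(helpful info) × gain − P(destruction) × loss, yelling pays exactly when the chance ratio exceeds the value ratio. A minimal sketch (the function name and example numbers are illustrative; the ratio values are the rough guesses from the post, not measured quantities):

```python
def should_yell(value_ratio, chance_ratio):
    """Expected value of yelling is positive only when
    P(helpful info) / P(destruction) > loss / gain,
    i.e. when the chance ratio exceeds the value ratio."""
    return chance_ratio > value_ratio

# The post's guesses: value ratio >= 1000, chance ratio <= 100.
print(should_yell(value_ratio=1000, chance_ratio=100))  # False: yelling looks bad
print(should_yell(value_ratio=10, chance_ratio=100))    # True: yelling would pay
```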

• rsouthan

A possible utilitarian argument in favor of yelling, no matter what these ratio estimates are: If these aliens are advanced enough to find us before we find them, maybe they’ve also figured out happiness and suffering better than we have. If that’s the case, it wouldn’t be so horribly dire for them to wipe us out, because they might put our resources to better use.

It might still be nicer for them not to destroy us all, but it wouldn’t have to be framed the way that you’re framing it, that one possibility is nice and one is terrible. Both could arguably be advantageous in different ways.

It’s true that most humans don’t want to die, not even to be replaced by creatures who get more out of life than they do, but we all have to die eventually and an alien invasion seems like one of the more interesting ways to go.

Does this improve the argument for intentionally yelling?

• http://overcomingbias.com RobinHanson

You could take that into account in your value ratio. But even so, I want to hear your ratio estimates.

• Vitalik Buterin

My value ratio is close to 1:1. My reasoning:

1. With sufficiently advanced medical technology I will be able to live at least ~10^34 years.
2. The utility increment of living 10^34 years instead of 10^2 years (call it w) is much higher in absolute value than the utility decrement of dying now (call it -1).
3. I expect a ~50% chance that we’ll achieve longevity escape velocity in time for me, or that cryonics will work. Hence, my current expected future utility is ~w/2.
4. If aliens come and help us, technology will improve massively, including technology to help us live longer. Hence my utility will be ~w.
5. If aliens kill us my utility will be ~0.
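Steps 1–5 imply the stated ratio directly: both the loss from extermination and the gain from alien help move expected utility by about w/2. A sketch under those assumptions (the concrete value of w is an arbitrary stand-in for the huge utility of a ~10^34-year life):

```python
w = 1e6              # stand-in for the (huge) utility of living ~10^34 years
baseline = 0.5 * w   # step 3: ~50% chance of reaching longevity escape velocity unaided
helped = 1.0 * w     # step 4: alien help makes long life near-certain
killed = 0.0         # step 5: extermination

gain = helped - baseline   # value of helpful info: w/2
loss = baseline - killed   # cost of extermination: w/2
print(gain / loss)         # 1.0 -> a value ratio close to 1:1
```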

I agree we need a starting point, but harping on best-vs-worst-case analysis is misleading. We need a proper expected-value comparison, so I wish you would at least argue that this approximates what we would really like. Otherwise, everyone who has ever opposed a project or reform on account of the so-called Precautionary Principle will claim you as their champion.

• http://overcomingbias.com RobinHanson

You lost me here. “approximates what we would really like”?

• Robert Koslover

Ahem, I don’t think those numbers of 1/100 or 1/1000 have a whole lot of anything defensible behind them. Nobody knows those numbers. And hey, I think we should just go ahead and yell. If aliens don’t like it, they may even ask us to be quiet, and we can all discuss this again then. I don’t think we humans are even worth the trouble of destroying, at present, since we are so very, very far away from everybody else. And if aliens gave us a heads-up about their existence by kindly replying to our calls, we might have a better chance to prepare. But finally, we should yell simply because a profound and determined hunt for extraterrestrial life (and extraterrestrial intelligent life) is part of our ultimate destiny, to do great and wonderful things. Consider: https://www.youtube.com/watch?v=g25G1M4EXrQ .
Care to assign value numbers to that? It’s a subjective thing.

Consider an ant that crawls over your leg and tickles you. It doesn’t hurt, you’ve no reason to kill it, and if you think about it, it’s almost certainly not worth the bother of killing it.

But are you going to try to talk to the ant and tell it to go away? No, you’re just going to crush it.

• IMASBA

To aliens, humans would be more like an ant that is 1000 km away from you. You won’t crush it without a good reason. There is one big difference, though: aliens could converse with humans (some agreed-upon form of language can always be constructed), and they may wish to do so out of curiosity or to help us improve our own utility; they will not crush us out of indifference. Of course there’s the other possibility: if they are very paranoid and know that there are ways to conduct interstellar warfare or plain genocide, they may wish to eradicate us before we become too advanced.

• http://overcomingbias.com RobinHanson

Decision theory doesn’t excuse you from calculating values and chances if those seem “subjective” or “indefensible”. You still have to decide, so you must have numbers.

• Robert Koslover

OK, I’ll assign a number to a related consideration: the “slippery slope.” Ceding local or individual authority to some global institution, to approve/disapprove astronomical experiments (e.g., radar probing of planets) or airport radar operation, entirely based on highly-speculative, very far-term threats posed by advanced aliens, may actually backfire upon us, contributing toward erosion of human freedoms, retarding human technological progress, and increasing human suffering. I estimate the risk of those consequences occurring to be > 50% should such a “no-yell” policy be *enforced* (not merely recommended) worldwide. Now, how *serious* would the level of negative consequences be? I don’t know. How much worse off would we now be if we had established a worldwide policy, and an enforcement regime to go with it, to prevent any human from ever going to the Moon, from ever sending any probes to Mars, etc? I just don’t know. But suppressing the free pursuit of science and/or technology has non-zero costs. Without a more compelling argument for worldwide suppression of cosmic-level free speech, I say no. Let my people yell.

• IMASBA

I’m wondering how these ratios will change over the next couple of centuries. Will there come a time when we can be reasonably sure that most alien civilizations won’t have an insurmountable technological edge over us if we spend a similar or smaller percentage of global GDP on defense than we do now?

I notice how hard my brain is fighting against answering. Yelling has an obvious appeal, hence my instinct to reject any argument that might oppose it. So I’ll try to overcome this.

Assume aliens are to us roughly as we are to a mouse (anyone lower than us won’t talk back, and since capabilities are distributed over many orders of magnitude this seems sensible). There’s lots that we could in theory trade with mice, but we generally don’t bother. We could easily kill mice, but again, generally don’t bother.

I think I disagree with your ratios. I disagree with the first because yelling is wider than just SETI: the same argument also covers spreading beyond the solar system. Keeping SETI quiet and then launching probes helps nothing. The question seems to me to be: should we stay quiet and local forever, or should we expand? Aliens hearing about us now versus when we colonise Alpha Centauri leaves us just as dead/just as helped as we are now.

Both ratios depend on what kinds of aliens there are. Clearly some kinds of aliens preclude galactic-scale human flourishing, which slashes the cost of extinction. Some other kinds would allow us to join a galactic-scale economy, which has enormous benefits.

The first kind however seems far more likely. Even if there is a multipolar galactic economy it’s plausible that barriers to entry will have been constructed. And given how long it takes civilisations to arise compared to how long it takes to travel galactic distances I’d expect to see a singleton guarding its resources rather than The Culture.

So, numbers:

Cost of extinction (relative to humans staying quiet and local forever) / benefit of nice talk (relative to humans staying quiet and local forever) = 0.1 … I’m reasonably sure it’s less than one, but beyond that I’m unclear.

Chance of hearing nice talk / chance of being killed = 1/1000 … I’m very very sure it’s less than one, and by a long way.

Hmm, damn, that’s not the result I was hoping for. It may be I underestimate how low the first ratio is, but I think I agree with you.

• http://overcomingbias.com RobinHanson

Wow. So you’d accept a 90% chance of extinction, if that could buy you a 10% chance of nice talk from aliens?!

Yes, but only on the assumption that “nice talk” also means “they let us colonise a large wedge of the galaxy, which we would otherwise have to avoid doing”. Large scale projects like this seem enormous wells of potential utility.

If we ignore this factor then I agree with your incredulity.

• http://overcomingbias.com RobinHanson

Realize that the positive scenario you want could also happen if we waited for them to talk first. So the value is only from the scenarios where they insist that we talk first.

Yes, true. I guess I implicitly assume that because of the Fermi paradox (i.e., if they were going to talk first we would have heard them). Assuming otherwise moves the calculus dramatically.

The image I have is a man trapped on a desert island surrounded by pirate ships. It’d be worth a large risk of death to escape the island. But not worth it if you expect pirates are very likely to be hostile. If you can expect some of the rare friendly ships to send messages then this thought process changes.

• Cambias

The risk is zero. Literally zero. Interstellar travel is so difficult that there is nothing on Earth worth making the effort to come here with violent intent. And if you posit a civilization so insanely hostile that they would somehow be willing to burn all their resources in order to attack people across interstellar distances, then they would presumably also be able to make the effort to build detectors capable of spotting the accidental emissions of a civilization.

And if that’s the case, then they’ve already destroyed human civilization, or sterilized the Earth’s surface back during the Permian Era, and we’re not having this discussion.

In short, it’s a more benevolent form of the Fermi Paradox: if these aliens are so hostile, where are they?

• MarkBahner

“The risk is zero. Literally zero. Interstellar travel is so difficult that there is nothing on Earth worth making the effort to come here with violent intent.”
Look how far the zombies travel for human brains. 😉
Seriously, though, a much better use of time would be to follow up on Elon Musk’s (and many others’) concerns about an existential threat from human-created artificial intelligence. That has a far higher probability of happening, and far sooner.

• Vali1005

“Concerns of an existential threat from human-created artificial intelligence”
We’re still here and we have not been wiped out/invaded by an alien super-AI. When people worry about us creating super-AIs, they should also worry about aliens creating super-AIs.
We can infer from this that no one else in the galaxy has achieved the means to create a hostile, Skynet-like super-AI; otherwise such a super-AI could easily afford the time required for the interstellar travel needed to acquire resources from all over the galaxy.
The Milky Way has a diameter of 120,000 light-years; at 10% of the speed of light it would only take, generously, 1.5–2 million years to navigate through the galaxy, yet nothing has happened.

• Cahokia

I keep hearing comparisons between our species and vermin like mice and ants. Yet these are precisely the sorts of animals that, far from going extinct, have flourished under human civilization.

I still haven’t seen many good arguments why aliens would either consciously kill every human or would act so blithely as to cause our extinction. What am I missing here?

Note that humans have caused far fewer extinctions than most environmentalists and conservationists want to admit. They have to resort to highly speculative theoretical models to substantiate claims that we’re currently living during a new mass extinction.

You have a barn full of corn; the corn is incredibly valuable, and it also has mice in it. The instant response is to exterminate them all before they eat your corn.

• Cahokia

The corn is the product of human labor. Not really analogous to the Earth and its resources.

Better analogy – you have a forest full of game, so you exterminate all predators. Yet, as I’ve noted, humans have wiped out remarkably few species, including apex predators. Wolves, tigers, and bears may be rare, but they’re not extinct by and large. Why would aliens wipe out every last human? Isn’t it more likely that a small number of humans would survive or be consciously preserved?

• Lord

If you want to extrapolate this, the Fermi paradox becomes no paradox at all. They are all in hiding.

• Dan Browne

My personal challenge in answering questions like this is that there is no way of estimating priors, because we’re the only data point and we haven’t contacted anybody else yet. So for me it’s more in the realm of speculative philosophy with a dose of hard science fiction thrown in. Some of the posters have stated (quite correctly) that if hostile aliens are out there, then since it would take only(!) 1 million years or so to traverse the galaxy, why haven’t we been wiped out yet? I’ll take a stab at this:
Our data points are us and other hominids: So…
We have been around for, conservatively speaking, about 200,000 years. Prior to that we can (maybe) say that other hominids lasted about 2 million years before going extinct. So let’s assume that’s us too: we should be able to last another 1.8 million years. That’s plenty of time (if we use the 1 million years estimate) to colonize the galaxy.
So… where are they?
Well, there’s a hella big epoch from the beginning of complex life on our planet until we showed up. Hominids have only been around 2 million years out of 500 million years. So 1/250th.
We can also argue that since our sun is a Population I star (it needed earlier generations of stars to forge its heavy elements) and has been “alive” for only five billion years, we’re limited to the last five billion years. Let’s wild-guess this one and say that life on *any* planet needs the same amount of time to develop as ours, i.e. complex life has been around no longer than 500 million years.
So here is our assumption: Alien species X has been around long enough to develop technology (i.e. 200,000 years using our prior). AND it’s been around long enough to be sufficiently ahead of us to be able to colonize the galaxy. So that means 1.2 million years minimum.
But what is our estimate that the 2 million year lifetime of a putative hostile technologically advanced species overlaps with ours in a 500 million year span? Well, we can’t really do that very easily, but we can say (at least) that we’ve seen basically three rounds of “advanced” complex life (pre-dinosaur era, dinosaur era and our era), each separated by mass extinction events. So can we say we get (maybe) three shots over a 500 million year period?
If we can say this (huge ass if) then that’s a six million year window across 500 million years for each planet having the opportunity to develop complex life. Obviously the probability of not overlapping with us (roughly 494 million years out of 500 million) is very large if there is only one.
But again (huge ass if), using my numbers we only need 80 planets in the galaxy that get their 3 allotted chances to develop hostile technological life for a 50/50 chance of overlapping with us. Or… 160 life-bearing planets for a close-to-100% chance of overlapping with us.
So are there more or fewer than ~160 complex-life-bearing planets in the Milky Way? Who the heck knows. If there are, though, and the numbers I ran up are anywhere near sound, then there could be an inbound relativistic missile coming our way right now ;->
But my math could be wrong…. ;->
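The overlap arithmetic can be made explicit. This is a toy sketch under the same huge-ass-if assumptions: each planet gets a 6-million-year window of possible hostile technological life (three 2-million-year shots), uniformly placed within a 500-million-year span, where “overlap” means intersecting our own ~2-million-year window:

```python
import math

SPAN = 500e6      # years in which complex life gets its chances
WINDOW = 3 * 2e6  # three 2-million-year "shots" per planet
OURS = 2e6        # our own civilization's assumed lifetime

# Two short intervals of lengths a and b, placed uniformly at random in a
# long span, overlap with probability roughly (a + b) / span.
p_overlap = (WINDOW + OURS) / SPAN  # 0.016

# Planets needed for a 50/50 chance that at least one overlaps us:
n_half = math.log(0.5) / math.log(1 - p_overlap)
print(round(p_overlap, 3), round(n_half))  # 0.016 43
```

This toy model lands at roughly 43 planets rather than the comment’s 80; within a factor of two, which is about as much agreement as assumptions this rough can support.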

• IMASBA

I kinda see where you’re going, but I think the “three chances” bit could use some revising. The “three ages” of complex life were not equal in length, and pre-dinosaur complex life was further from evolving an intelligent brain, which reduces the odds of your first “chance” happening; on the other hand, more recent lifeforms get increased odds. Finally, I’m not sure there should only be one chance per age: for example, if humans went extinct today but chimps survived, then chimps might be colonizing Mars 5 million years from now. On the homeworld of an intelligent species there are pretty much bound to be other species who are close to being intelligent, particularly the species closely related to the intelligent one. Then again, the intelligent species going extinct may indicate an overall extinction of all life, or all complex life, on that planet. I’d say there would have to be even fewer than 80 homeworlds in our current galaxy for some form of contact to be possible, and 80 is probably already a low-estimate outcome of the Drake equation.

• Dan Browne

Yeah, of course. I obviously pulled the numbers out of my butt, but some numbers are better than none. In any case, I love thinking about this stuff. My best take is there is nobody out there. We’re it.

In any case, if you don’t mind, please use your criticisms to build on the model and tighten it up a little?

• IMASBA

Ok, here’s my back-of-the-envelope attempt. Two things are crucial: the average time any living trace of an advanced civilization survives, and the probability of one or more near-intelligent species surviving the extinction of the intelligent species on their planet (or one they terraformed), so that one of them can become the next intelligent species (probably in as little as 10 million years; after more time, a simpler, non-near-intelligent lifeform can still become intelligent, even on planets that never had an intelligent species before). I’m assuming intelligent life to be particularly likely in the most recent 100 million years. If I take the life expectancy of a civilization as 2 million years (which I think is conservative) and the probability of a near-intelligent species surviving and advancing to be 50%, then any planet that has had an intelligent civilization can be expected to see 4 million years of civilization. That means only 25 other planets need to have developed intelligent life for there to be overlap with us, or 50, depending on whether you consider life on Earth an anthropic given or not.
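The back-of-the-envelope above can be written out. A sketch using only the numbers stated in the comment (all of them rough guesses; the variable names are illustrative):

```python
civ_lifetime = 2e6   # expected lifetime of one civilization, in years
p_successor = 0.5    # chance a near-intelligent species survives and advances
window = 100e6       # years in which intelligent life is assumed likely

# A 50% chance of a successor civilization, applied repeatedly, doubles
# the expected total: 2M + 0.5*2M + 0.25*2M + ... = 4M years.
expected_civ_years = civ_lifetime / (1 - p_successor)

# Fraction of the window covered per planet, and planets needed for
# one expected overlap with us:
coverage = expected_civ_years / window
planets = 1 / coverage
print(int(expected_civ_years), round(planets))  # 4000000 25
```

Doubling this to 50 planets corresponds to not treating life on Earth as an anthropic given, as the comment notes.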

• Dan Browne

I think your numbers are fair. The only one I’ll quibble with (and it’s a theoretical quibble only, because we have evidence!) is the assumption of intelligent life being most likely in the most recent 100 million years. Is the assumption based (quite reasonably) on the prior that we are here, and in the last 100 million years? I’d like to make a wild-ass assertion and say I think we could have had intelligent life at any time since large complex life evolved, so I think it should be wider, i.e. 500 million years at least. But in principle, yeah, I agree with your math. So that leads us to the conclusion that they’re either not there, or it’s really difficult to get here, or something else is preventing it from happening.

• IMASBA

The 100 million years figure shouldn’t matter if the probability of extinction of all life isn’t too high (if so much as a subterranean tardigrade survives you can have a wide variety of complex life again within 100 million years).

I think there are three plausible explanations: they’re not there (or at least not in range); they think getting here is too costly or not interesting enough (note that on a small scale they may send scientific probes without us ever knowing); and finally the third, least plausible but most optimistic one: one of the alien civilizations is “protecting” us (they may do this passively, with us simply being located inside or behind their defensive perimeter).

• arch1

This is pretty blue sky but I’ll speculate 1/100000 on the first number and 1/10 on the second number.

1) The rationale behind the 1/100000 is that our expected civilizational lifetime seems small absent some anomalous event (such as helpful alien info) enabling us to spread before hitting whatever barrier likely looms, whereas the (time x quality x population) value of clearing that barrier seems huge.

2) The rationale behind the 1/10 is ignorance plus a vague feeling that there may be old civilizations and there may be bold civilizations but there are no old bold civilizations. But mostly ignorance.

3) That said I have so little confidence in these estimates that my gut says it’s better to stop shouting, at least until we have better estimates. (*That* said I’m confused about whether/how such meta considerations are relevant to rational decision making.)

• http://overcomingbias.com RobinHanson

Sure you aren’t giving the inverse of the ratios I asked for?

• arch1

Pretty sure. Wrt the first ratio: the expected loss from being destroyed might be 10^15(??) individual-years of quality, not more than a few orders of magnitude beyond the current average. The expected gain from *extremely* helpful alien info (helpful enough to get us dispersed widely) seems many orders of magnitude greater. I reduced the ‘many’ to (say) 5 because it seems very unlikely that any helpful info will be *that* helpful.
If we ignore all but *extremely* helpful info, both of my SWAGged ratios become significantly smaller, by the same factor.