
Not sure I agree with this, but here is one argument that worrying about aliens might not be "bad guy bias" but rather a very reasonable worry (not something we should spend every day in fear of; more like the worry that we might be wiped out by an asteroid or comet hitting the Earth, no alien intervention required):

http://sites.inka.de/mips/r...


While the huge number of star systems out there with planets makes it highly likely that there is life elsewhere, and some form of intelligent life somewhere out there, the long-running SETI project has yet to pick up anything that looks like an intelligent transmission. That is not very encouraging about there being much intelligent life anywhere nearby in our galaxy, or even pretty far away in it. Somebody might be listening to us, but they do not seem to be sending anything out on their own. Given how difficult it is to get life going, and then to get multi-cellular life going, intelligent life out there may in fact be very scarce.
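
A rough Drake-style estimate makes the "scarce" scenario concrete. Every parameter value in the sketch below is an illustrative assumption (the true values are unknown), which is exactly why the conclusion can swing from thousands of civilizations to essentially none:

```python
# Rough Drake-equation sketch. All parameter values below are
# illustrative assumptions; the true values are highly uncertain.

stars_in_galaxy = 2e11       # rough count of stars in the Milky Way
frac_with_planets = 0.5      # fraction of stars with planetary systems
habitable_per_system = 0.1   # habitable planets per planetary system
p_life = 1e-3                # chance life arises on a habitable planet
p_multicellular = 1e-2       # chance life becomes multicellular
p_intelligent = 1e-2         # chance multicellular life turns intelligent

civilizations = (stars_in_galaxy * frac_with_planets * habitable_per_system
                 * p_life * p_multicellular * p_intelligent)
print(f"Expected intelligent-life sites in the galaxy: {civilizations:,.0f}")
# With these numbers: ~1,000. Shrink p_life or p_multicellular by a couple
# of orders of magnitude and the expected count drops below one -- the
# "intelligent life may be very scarce" scenario.
```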

Also, as long as Einstein remains correct and the speed of light is a hard limit on velocity, interstellar travel remains extremely difficult, if not impossible.

Of course, suppose one accepts that there may be civilizations far more advanced than ours, able to overcome the speed-of-light limit, perhaps even some kind of galactic or inter-galactic civilization that maintains a higher order, as in so many sci-fi series. Then it would not be illogical to expect an outbreak of visitations after 1945, coinciding with the big wave of UFO sightings, not all of which have been explained. In 1945 we humans set off nuclear weapons, something a higher-order interplanetary civilization would presumably keep track of in "developing" planetary civilizations. That is, indeed, the theme of the original "The Day the Earth Stood Still."


I intend to be alive and active in 100 years. My point was that, with increasing knowledge, our capabilities in 100 years will likely be such that we would not be vulnerable to an attack, especially one mounted across interstellar distances. There is risk, but it has nothing to do with anything we may be sending out now. The risk is that something is already on the way, either having launched when it detected our signals, or just by random bad luck that it's coming. See Ringo & Taylor's "Von Neumann's War" for a recent fictional depiction.


Interview with a Famous Scientist:

Q: Do you think there is life on other worlds?
A: Oh, yes, almost certainly.

Q: Do you think intelligent life exists on other worlds?
A: Statistically, I think it is quite likely.

Q: Do you think interstellar travel is possible?
A: I think it will be in perhaps our distant future, say in several thousand years.

Q: So, if life on Earth is about 4 billion years old, and the universe is about 14 billion years old, there could be vast numbers of civilizations in our galaxy ahead of us by the few thousand years required to be able to travel between stars?
A: Yes, the statistics indicate that such might indeed be the situation.

Q: So what do you think when you hear about people seeing what they think might be alien spaceships, you know, UFOs?
A: They are all either crazy, or hoaxers, or they are unfamiliar with common astronomical phenomena like meteors or the planets.

Q: But surely in this day and age, when almost everyone is familiar with common astronomical phenomena, who could report something totally different as a UFO?
A: Simple. They are the ones that are crazy, or hoaxers.


@billswift

"Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?"

Sure, in a reasonably proportional amount. Wouldn't we otherwise be mistaken?

But as Robin points out, maybe we are biased to think about "an inevitable march toward a theory-predicted global conflict with an alien united them." Maybe any space-farers will more likely be "trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures."

In which case we have 100 years to get ready to do business with them. As a result of your post, billswift, I'm now thinking we should perhaps have a permanent METI channel advertising eco-tourism. Aliens will probably just want a nice beach vacation with mojitos after a quick stop at XenoDisney.


billswift, given the number of posters here who expect to be alive and still in their relative youth in 100 years, yes. If you expect to live 10,000 years, an existential threat 1% of your lifespan away is roughly the same as having nine months to live given current lifespans. If you think our species will probably be gone in a century, then this is definitely not a priority.
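
The proportion works out as simple arithmetic; a minimal sketch, assuming a round 75-year current lifespan:

```python
# A threat 1% of your expected lifespan away "feels" equally close in
# both cases. The 75-year figure is an assumed round number.

long_lifespan_years = 10_000
threat_horizon_years = 100
fraction = threat_horizon_years / long_lifespan_years    # 0.01

current_lifespan_years = 75
equivalent_months = fraction * current_lifespan_years * 12
print(f"Fraction of lifespan: {fraction:.0%}")
print(f"Equivalent at a {current_lifespan_years}-year lifespan: "
      f"{equivalent_months:.0f} months")                 # 9 months
```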


Any aliens that will be alerted to our presence by signals we have not already sent out will not arrive for over 100 years (unless they have an FTL drive of some sort). Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?
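
The 100-year floor is just a light-speed round trip. A minimal sketch of the bound, assuming no FTL for either the signal or the response:

```python
# Round-trip bound, assuming no faster-than-light travel: a signal we
# send NOW reaches aliens at distance d light-years in d years, and any
# ship they launch in response needs at least another d years.

def earliest_arrival_years(distance_ly: float) -> float:
    """Soonest a response to a newly sent signal could arrive here."""
    return 2 * distance_ly

# Only aliens within 50 light-years could possibly show up within a
# century, and only by reacting instantly and traveling at light speed:
print(earliest_arrival_years(50))    # 100.0
print(earliest_arrival_years(1000))  # 2000.0
```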


we're not efficient slaves, our resources aren't scarce, we aren't a threat, we're not intellectually interesting, and we have no comparative advantage. why should they care about us one way or the other?

where does eliezer think that all the extraterrestrial unfriendly AIs are that should have "tiled the galaxy" with smiley xenofaces? AI may not be friendly, but this seems to be proof that neither is it frivolous, at least on the grandest scales. fermi teaches us that the galactic elder AIs are smart/powerful enough to actively reconnoiter and intervene before anyone's dumb baby gets out of hand. this may not be altruistic, but it is to our advantage. our AI will realize this and decide not to do anything that would piss them off. it will do its best to be nonthreatening, interesting to talk to, and beneficial to trade with. the latter two seem hopeless, but galactic civilization shares institutions like rationality, physicality, and ultimate origins in biology/natural selection. will these be enough common ground for posthumans to be welcomed on the galactic stage? the key question is "what would the galactic powers do with resources that posthumans wouldn't be doing already? what is the basis for conflict if everyone's goals are the same (viz. computing a way to avoid heat death)?" maybe posthumans need only be competent and cooperative.

what will the elders think of posthumans with some local utility function carefully designed to never want to overwrite its own friendliness, defined by eliezer to mean prioritizing (potentially insatiable) human (meta)desires? insatiability WOULD be an intolerable threat -- a virus no different than a smiley face tiler, incompetent and uncooperative. similarly, allocating resources to friendliness that need to be directed towards more urgent problems would require correction. the only solution is for posthumans to conclude that human meta desires must not conflict with the elders' desires. at best, friendliness must cost less than the cost of an elder coming over here to straighten things out.

however, that cost is probably zero; we should expect automated anti-viral nanosentinels to be nearby already.

the elders have not yet appropriated our resources or preemptively snuffed out our desires -- they must value our existence/potential more than the alternatives. what will posthumans conclude from this? it may not be value for bios themselves; elders could view sufficiently advanced civilizations as the most efficient way of farming AIs that can contribute to the universal project -- no transport/compuforming resources need be spent if an AI is about to sprout naturally, as long as it is competent and cooperative.

but the transport of compuforming seeds can be no more costly than that of anti-viral sentinels -- probably quite a cheap and worthwhile investment. then our continued existence can only be due to a true elder value for preserving (benign) bios, or at least their progeny. if all postbios converge to a low diversity optimum (which seems likely), then the value is on the original bios, not the progeny. if there are sentinels nearby, postbio diversity is low, and bio preservation is valued, then the sentinels should just get on with preservative compuforming. since they haven't (or have done so with subtlety), we must already be in the zoo. how much will be spent on making the zoo nice? evidently, either the urgency of the universal project outweighs the value of nice zoos, or elders don't consider putting bios in paradise valuably preservative.

what do elders care about more than bio zoos? squeezing their way into the next vacuum fluctuation? speaking of, should we suspect that there are some UR-elders around from previous bubbles who have already mastered bubble surfing? if habitable bubbles are rare, but generate large numbers of refugees that have to squeeze into the remaining habitable bubbles, and ours is one of the ones that hasn't evaporated... then again, the place doesn't seem to be packed with them, so either bubble surfing is impossible or it turns out to be easy to find/engineer habitable bubbles. if the latter, then resources are not scarce, and the elders may have already left.

either way, the absence of smiley face tilers implies there is at least a skeleton crew of elders (or their sentinels) around. they value us and have already preserved us, and so posthumans will decide to as well. we do not live in paradise either because the development of resources towards heat death escape is urgent, or living in paradise would destroy our value to the elders. the only conditions that allow friendly AI are those where the elders have solved the problem of finding/engineering habitable bubbles and traveling to them, cared about us enough to not compuform us, did not value engineering paradise for us, but are not disturbed by the prospect of posthumans doing so. posthumans will need to confirm that this is the situation before considering being friendly to us. but if postbio diversity is low, posthumans will share elder values (being elders themselves), and will preserve us but will not build paradise for us.

my pathway to following this blog was bradbury's list of astrophysical problems that are explained if you posit that technological civilizations immediately become matrioshka brains. i'm a neuroscience phd student with a background in bayesian AI and a physics hobby, and i find his argument compelling. do any astrophysicists out there see problems with it? http://www.aeiveos.com:8080...


If an alien race is technologically ahead of us, it's overwhelmingly likely that it is at least 100,000 years ahead of us. If these aliens cared about intelligent life on other planets, they would have long ago sent out probes to planets that might support life. It would probably be very cheap for these advanced aliens to send out billions of such probes, which could probably travel close to the speed of light. So chances are that any such alien race within 1,000 light-years of us has already figured out that there is intelligent life on Earth.
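
The timescales check out even at well below light speed. A rough sketch, taking this comment's 100,000-year head start as given and assuming a conservative 0.1c probe speed:

```python
# Rough check of the probe argument. The 100,000-year head start is the
# comment's assumption; the 0.1c probe speed is a conservative guess
# (the comment allows for close to c).

head_start_years = 100_000
probe_speed_c = 0.1                      # fraction of light speed

reach_ly = head_start_years * probe_speed_c
print(f"Probes launched at the start of the head start could now be "
      f"{reach_ly:,.0f} light-years away")    # 10,000 ly
# That comfortably covers the 1,000-light-year radius above, even at a
# tenth of light speed.
```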


"radar astronomy is an important and indispensable component of the asteroid hazard and defense system."

We have an asteroid hazard and defense system?

Does it involve Bruce Willis?


Well, if super-intelligent AIs are viable, we're probably more likely to encounter alien AIs than aliens themselves.

Any semi-rational agents in a heterogeneous world benefit from the division of knowledge and labor (i.e., trade). Given the extent of division of knowledge needed for space travel, it would seem unlikely that any aliens encountered wouldn't have institutions of private property, even if that property is owned by hives and not individual creatures. So I don't think sending them Adam Smith will tell them anything they don't already know.

Eliezer, to me the question seems to come down to whether we end up with many highly specialized AIs or AIs with highly general intelligence. The former could easily be too specialized to understand how to re-create a legal system that protects them after the old one is gone, or to completely understand the actions and motivations of other AIs. (I think this scenario explains human law pretty well; few of us have accurate knowledge of our shared institutions.)


Frelkins, Robin doesn't think you get a hard takeoff. Aliens from far away arrive with a huge tech advantage.

(Reads further.)

Robin, I'm surprised that you cite common institutions rather than tech advantage as the distinguishing factor in why you fear aliens more than AIs. I re-express my interest in a post from you on why you think advanced unsympathetic Bayesian agents, governed under a legacy legal system that allocates, say, 10% of systemic capital to archaic semi-Bayesian agents, would not coordinate to remove that system. I've had similar conversations with Steve Omohundro, but he talked about punishment of non-punishers (a very scary phrase to me) and continuous thought monitoring, not about coordination problems.


All, our knowledge of bio isn't strong enough to say with much confidence what density of alien origination events to expect in our region of space-time. Yes, we don't see much going on out there, but to use that to infer things about the density and preferences of aliens nearby, you must resort to social science. I'm telling you (again), as a social scientist, that while we do know valuable things that can change your expectations, we cannot be very confident about such inferences. So you just can't be as confident as many of you seem to be that there aren't aliens out there.


On the main topic, anti-METI people aren't idiots and so I'd bet they have some counterargument to "unintentional signals are stronger" that Shostak isn't representing.


Robin, are you susceptible to Pascal's wager? How is it different from aliens?


Rationalists don't worry about all the possibilities they put a non-zero probability on. Alien waves seem like the kind of thing that happens once every couple of billion years, at most.
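
In expected-value terms, a minimal sketch; both numbers are illustrative assumptions (the once-per-couple-billion-years rate from this comment, and the 100-year horizon discussed upthread):

```python
# Expected-value framing of "rare enough not to worry about".
# Both numbers below are illustrative assumptions.

p_alien_wave_per_year = 1 / 2e9   # "once every couple of billion years"
years_of_concern = 100            # the horizon discussed upthread

p_within_horizon = p_alien_wave_per_year * years_of_concern
print(f"Chance of an alien wave within {years_of_concern} years: "
      f"{p_within_horizon:.0e}")  # 5e-08, about one in twenty million
# At one-in-twenty-million over a century, a proportional amount of
# worry rounds to zero.
```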
