Alien Bad Guy Bias

The Bad Guy Bias applies to the signals Earth sends to aliens. From the NYT:

The makers of the new movie “The Day the Earth Stood Still” have arranged for it to be beamed into space on … the same day the movie opens here on planet Earth. … Dr. Shostak, who was a consultant for the new movie … [says] there are some people, he acknowledges, who might worry that broadcasting “The Day the Earth Stood Still” could be inimical to our interests. He added, “I think that if these people are truly worried about such things, they might best begin by shutting down the radar at the local airport.”

Shostak is right; compared to intentional signals, our unintentional signals are about a million times more detectable:

There are three large-dish instruments in the world that are currently employed for doing radar investigations of planets, asteroids and comets: ART (Arecibo Radar Telescope), GSSR (Goldstone Solar System Radar), and EPR (Evpatoria Planetary Radar). Radiating power and directional diagram of these instruments is so outstanding that it also allows us to emit radio messages to outer space, which are practically detectable everywhere in the Milky Way. This dedicated program is called METI (Messaging to Extra-Terrestrial Intelligence) …

Over all the radar astronomy history … The total area of the sky illuminated by [radar] transmissions is about 0.022 steradians (sr). … The total area of sky illuminated by the METI transmissions is … 2000 times less … Total duration of time of radar transmissions exceeds the overall time interval of the METI transmissions by a factor of 500. Therefore, we can conclude that the probability to detect the radar astronomy transmissions by a hostile super-civilization is 2000 x 500 = 1,000,000 times higher than that of the METI transmissions.

So, if someone is concerned about our detection by an aggressive super-civilization (so-called METI-phobia), first of all one has to prohibit not the METI, but the radar astronomy. However, one can not prohibit it because the radar astronomy is an important and indispensable component of the asteroid hazard and defense system.

But most radar astronomy has little to do with asteroid defense.
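
To make the quoted arithmetic explicit, here is a minimal sketch in Python; the solid-angle and duration figures come straight from the quote, and treating detection probability as proportional to illuminated sky area times transmission time is the quote's own implicit model.

    # Detectability of radar astronomy vs. METI, using the quoted figures.
    radar_solid_angle_sr = 0.022                       # total sky lit by radar
    meti_solid_angle_sr = radar_solid_angle_sr / 2000  # METI lit ~2000x less sky
    duration_ratio = 500                               # radar air time / METI air time

    # Detection probability ~ (solid angle) x (duration), so the ratios multiply:
    ratio = (radar_solid_angle_sr / meti_solid_angle_sr) * duration_ratio
    print(f"{ratio:,.0f}")  # -> 1,000,000: radar is a million times more detectable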

Seems to me we should be more explicitly considering these negative costs of radar astronomy.  Some argue we shouldn’t worry about radar astronomy because Earth’s O2 spectral line has emitted a signal a hundred billion times stronger.  But life should be far, far more common in the universe than radar-astronomy-capable life.
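
One way to make that last point concrete (the surprisal framing and both prevalence numbers below are illustrative assumptions of mine, not the post's): a detection is informative in proportion to the rarity of what it reveals, which surprisal measures in bits.

    import math

    # Hypothetical prevalences -- pure illustration, not measured values.
    p_o2_life = 1e-3      # assumed fraction of star systems with O2-producing life
    p_radar_life = 1e-9   # assumed fraction with radar-astronomy-capable life

    # Surprisal in bits: the rarer the source, the more a detection reveals,
    # so a weak radar leak can say more than a strong but mundane O2 line.
    print(math.log2(1 / p_o2_life))     # ~10.0 bits: "life here" is common news
    print(math.log2(1 / p_radar_life))  # ~29.9 bits: "technology here" singles us out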

  • billswift

    Isn’t it a little late to worry about that kind of problem? Anything close enough to be a potential threat is close enough that signals that have already been sent have reached or will reach them. What we’re sending out now is either going to reach something that has already been alerted by previous signals or is too far away to be a real threat.

  • frelkins

    Thinking about this with care, Robin, why would aliens be a threat to us anyway?

    Previously you’ve argued that rational beings would be more interested in trade than war – that’s what you said to counter the fear that the direct hand-coded AI would exterminate us, iirc – why wouldn’t this hold likewise for the little green men when they arrive?

    Also, since presumably any beings intelligent and advanced enough to arrive would most likely have achieved their own Singularity, couldn’t we feel fairly confident they would trade Singularity-related technology with us?

    This makes me think that instead of dampening signals, we should increase them. Thus we could benefit from alien technologies sooner. The work therefore might be to ensure that we are ourselves Friendly when they arrive?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    frelkins, I’d never say war or genocide is impossible – I’ve said it is less likely between groups that share the same legal and social institutions for within-group coordination. Tall people and short people who intermingle well are unlikely to go to war. Aliens from far away share very few institutions with us.

  • frelkins

    Ok Robin, that makes sense to me – it’s true, they could be hostile of course. But is there any decent way to think about how you could create a fair estimation of that probability?

    What then comes to mind: are there any plausible ways to consider what institutions space-faring, BBC-watching, classical-music-broadcast-listening aliens might have?

    Then we could find points of commonality now to reduce the likelihood of conflict and increase the possibility of trade. Obviously it would be to our benefit to have our diplomatic ducks in a row before the grays arrive.

    Assuming of course, that aliens travel.

  • http://macroethics.blogspot.com nazgulnarsil

    the question seems to be whether or not any alien or artificial intelligence places a negative utility on destroying other life. This acts as a “barrier to entry” for destroying us even if it is of positive utility to them.

  • frelkins

    @nazgulnarsil

    even if it is of positive utility to them

    Hmm, nn. I’d rather not have to rely on anyone’s purity of heart – xenomorality. This draws me to consider if we should direct a METI that outlines capitalism and the benefits of xenotrade.

    If aliens are monitoring broadcasts to learn about us either for xenobiology or possible conquest, it might be best to attempt to send them Adam Smith if they don’t already know it. This would probably be the most crucial, useful institution we could share at first contact to ensure peace, no?

  • http://profile.typekey.com/hopefullyanonymous/ Hopefully Anonymous

    I think the overwhelming current evidence is that any aliens that could get to us aren’t going to distinguish us much from the rest of the matter in the region, and that the most likely outcome is that we’ll be von Neumann-ized.

  • Anonymous

    Seems to me we should be more explicitly considering these negative costs of radar astronomy.

    I don’t think we’ll alert that many aliens in a century and by then we’ll probably have started our own stellar expansion, transcended or died out.

  • http://www.mccaughan.org.uk/g/ g

    It’s not entirely beyond the bounds of possibility that an alien civilization might watch our radar signals with equanimity but be moved to xenocide by seeing a content-rich signal, going to the trouble of working out what it is, and finding that their reward for doing so is … Keanu Reeves. 🙂

    I’m mostly joking, of course, but there’s a serious point there too: the probability that a signal will get us into trouble isn’t the same as the probability that it will reach a potential troublemaker with enough strength to be detected. Some signals might be more trouble-provoking than others. And I don’t think I’d choose, as something to send out deliberately to the stars, a depiction of one species threatening another with extermination.

    And the point isn’t just that we might be *first detected* as a result of the TDTESS broadcast, but that someone who’s already detected us might see it and not like what they see.

  • Vladimir Slepnev

    If you worry about aliens, you’re not a rationalist.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Bill and Anonymous, why can’t far-away aliens be a potential threat? Sure, the threat would be realized later, but it could still be realized.

    Hopefully, you have overwhelming evidence on the preferences of aliens?

    Vladimir, rationalists don’t put probability zero on non-excluded possibilities.

  • steven

    Rationalists don’t worry about all the possibilities they put a non-zero probability on. Alien waves seem like the kind of thing that happens once every couple billion years, at most.

  • Vladimir Slepnev

    Robin, are you susceptible to Pascal’s wager? How is it different from aliens?

  • steven

    On the main topic, anti-METI people aren’t idiots and so I’d bet they have some counterargument to “unintentional signals are stronger” that Shostak isn’t representing.

  • http://hanson.gmu.edu Robin Hanson

    All, our knowledge of bio isn’t strong enough to say with much confidence what density of alien origination events to expect in our space-time region. Yes, we don’t see much going on out there, but to use that to infer things about the density and preferences of aliens nearby, you must resort to social science. I’m telling you (again) as a social scientist that while we do know valuable things that can change your expectations, we cannot be very confident about such inferences. So you just can’t be as confident as many of you seem to be that there aren’t aliens around out there.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Frelkins, Robin doesn’t think you get a hard takeoff. Aliens from far away arrive with a huge tech advantage.

    (Reads further.)

    Robin, I’m surprised that you cite common institutions rather than tech advantage as the distinguishing factor in why you fear aliens more than AIs. I re-express my interest in a post from you on why you think advanced unsympathetic Bayesian agents, governed under a legacy legal system that allocates, say, 10% of systemic capital to archaic semi-Bayesian agents, would not coordinate to remove that system. I’ve had similar conversations with Steve Omohundro, but he talked about punishment of nonpunishers (a very scary phrase to me) and continuous thought monitoring, not about coordination problems.

  • Grant

    Well, if super-intelligent AIs are viable, we’re probably more likely to encounter alien AIs than aliens themselves.

    Any semi-rational agents in a heterogeneous world benefit from the division of knowledge and labor (i.e., trade). Given the extent of division of knowledge needed for space travel, it would seem unlikely that any aliens encountered wouldn’t have institutions of private property, even if that property is owned by hives and not individual creatures. So I don’t think sending them Adam Smith will tell them anything they don’t already know.

    Eliezer, to me the question seems to come down to whether or not we have many highly-specialized AIs, or AIs with highly-general intelligence. The former could easily be too specialized to understand how to re-create a legal system that protects them after the old one is gone, or to completely understand the actions and motivations of other AIs (I think this scenario explains human law pretty well; few of us have accurate knowledge of our shared institutions).

  • http://shagbark.livejournal.com Phil Goetz

    “radar astronomy is an important and indispensable component of the asteroid hazard and defense system.”

    We have an asteroid hazard and defense system?

    Does it involve Bruce Willis?

  • http://jamesdmiller.blogspot.com/ James D. Miller

    If an alien race is technologically ahead of us it’s overwhelmingly likely that it is at least 100,000 years ahead of us. If these aliens cared about intelligent life on other planets they would have long ago sent out probes to planets that might support life. It would probably be very cheap for these advanced aliens to send out billions of such probes that could probably travel close to the speed of light. So chances are that any such alien race that is within 1,000 light years of us has already figured out that there is intelligent life on earth.
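
    A back-of-envelope check of Miller's timing claim in Python, using his stated numbers; the 90%-of-light-speed probe figure is an illustrative stand-in for his "close to the speed of light":

        head_start_years = 100_000   # minimum technological lead Miller posits
        distance_ly = 1_000          # range within which aliens would know of us
        probe_speed_c = 0.9          # assumed probe speed, as a fraction of c

        travel_time_years = distance_ly / probe_speed_c
        print(round(travel_time_years))              # ~1111 years, one way
        print(travel_time_years < head_start_years)  # True: probes launched early in
                                                     # that lead arrived long ago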

  • erik

    we’re not efficient slaves, our resources aren’t scarce, we aren’t a threat, we’re not intellectually interesting, and we have no comparative advantage. why should they care about us one way or the other?

    where does eliezer think that all the extraterrestrial unfriendly AIs are that should have “tiled the galaxy” with smiley xenofaces? AI may not be friendly, but this seems to be proof that neither is it frivolous, at least on the grandest scales. fermi teaches us that the galactic elder AIs are smart/powerful enough to actively reconnoiter and intervene before anyone’s dumb baby gets out of hand. this may not be altruistic, but it is to our advantage. our AI will realize this and decide not to do anything that would piss them off. it will do its best to be nonthreatening, interesting to talk to, and beneficial to trade with. the latter two seem hopeless, but galactic civilization shares institutions like rationality, physicality, and ultimate origins in biology/natural selection. will these be enough common ground for posthumans to be welcomed on the galactic stage? the key question is “what would the galactic powers do with resources that posthumans wouldn’t be doing already? what is the basis for conflict if everyone’s goals are the same (viz. computing a way to avoid heat death)?” maybe posthumans need only be competent and cooperative.

    what will the elders think of posthumans with some local utility function carefully designed to never want to overwrite its own friendliness, defined by eliezer to mean prioritizing (potentially insatiable) human (meta)desires? insatiability WOULD be an intolerable threat — a virus no different than a smiley face tiler, incompetent and uncooperative. similarly, allocating resources to friendliness that need to be directed towards more urgent problems would require correction. the only solution is for posthumans to conclude that human meta desires must not conflict with the elders’ desires. at best, friendliness must cost less than the cost of an elder coming over here to straighten things out.

    however, that cost is probably zero; we should expect automated anti-viral nanosentinels to be nearby already.

    the elders have not yet appropriated our resources or preemptively snuffed out our desires — they must value our existence/potential more than the alternatives. what will posthumans conclude from this? it may not be value for bios themselves; elders could view sufficiently advanced civilizations as the most efficient way of farming AIs that can contribute to the universal project — no transport/compuforming resources need be spent if an AI is about to sprout naturally, as long as it is competent and cooperative.

    but the transport of compuforming seeds can be no more costly than that of anti-viral sentinels — probably quite a cheap and worthwhile investment. then our continued existence can only be due to a true elder value for preserving (benign) bios, or at least their progeny. if all postbios converge to a low diversity optimum (which seems likely), then the value is on the original bios, not the progeny. if there are sentinels nearby, postbio diversity is low, and bio preservation is valued, then the sentinels should just get on with preservative compuforming. since they haven’t (or have done so with subtlety), we must already be in the zoo. how much will be spent on making the zoo nice? evidently, either the urgency of the universal project outweighs the value of nice zoos, or elders don’t consider putting bios in paradise valuably preservative.

    what do elders care about more than bio zoos? squeezing their way into the next vacuum fluctuation? speaking of, should we suspect that there are some UR-elders around from previous bubbles who have already mastered bubble surfing? if habitable bubbles are rare, but generate large numbers of refugees that have to squeeze into the remaining habitable bubbles, and ours is one of the ones that hasn’t evaporated… then again, the place doesn’t seem to be packed with them, so either bubble surfing is impossible or it turns out to be easy to find/engineer habitable bubbles. if the latter, then resources are not scarce, and the elders may have already left.

    either way, the absence of smiley face tilers implies there is at least a skeleton crew of elders (or their sentinels) around. they value us and have already preserved us, and so posthumans will decide to as well. we do not live in paradise either because the development of resources towards heat death escape is urgent, or living in paradise would destroy our value to the elders. the only conditions that allow friendly AI are those where the elders have solved the problem of finding/engineering habitable bubbles and traveling to them, cared about us enough to not compuform us, did not value engineering paradise for us, but are not disturbed by the prospect of posthumans doing so. posthumans will need to confirm that this is the situation before considering being friendly to us. but if postbio diversity is low, posthumans will share elder values (being elders themselves), and will preserve us but will not build paradise for us.

    my pathway to following this blog was bradbury’s list of astrophysical problems that are explained if you posit that technological civilizations immediately become matrioshka brains. i’m a neuroscience phd student with a background in bayesian AI and a physics hobby, and i find his argument compelling. do any astrophysicists out there see problems with it?
    http://www.aeiveos.com:8080/~bradbury/MatrioshkaBrains/MatrioshkaBrainsPaper.html#Evidence

  • billswift

    Any aliens that will be alerted to our presence by signals we have not already sent out will not arrive for over 100 years (unless they have an FTL drive of some sort). Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?

  • http://zbooks.blogspot.com Zubon

    billswift, given the number of posters here who expect to be alive and still in their relative youth in 100 years, yes. If you expect to live 10,000 years, an existential threat 1% of your lifespan away is roughly the same as having nine months to live given current lifespans. If you think our species will probably be gone in a century, then this is definitely not a priority.
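
    The proportion Zubon invokes, worked through in Python; the 75-year current lifespan is an illustrative assumption:

        threat_horizon_years = 100
        long_lifespan_years = 10_000
        current_lifespan_years = 75   # assumed baseline for the analogy

        fraction = threat_horizon_years / long_lifespan_years  # 0.01 of a long life
        equivalent_months = fraction * current_lifespan_years * 12
        print(equivalent_months)  # -> 9.0, i.e. roughly nine months to live today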

  • frelkins

    @billswift

    Do you really think that the risk of unfriendly aliens arriving that far in the future is worth worrying about AT ALL?

    Sure, in a reasonably proportional amount. Wouldn’t we otherwise be mistaken?

    But as Robin points out, maybe we are biased to think about “an inevitable march toward a theory-predicted global conflict with an alien united them.” Maybe any space-farers will more likely be “trillions of quirky future creatures not fundamentally that different from us, focused on their simple day-to-day pleasures.”

    In which case we have 100 years to get ready to do business with them. As a result of your post billswift, I’m now thinking we should perhaps have a permanent METI channel advertising eco-tourism. Aliens will probably just want a nice beach vacation with mojitos after a quick stop at XenoDisney.

  • sherlock

    Interview with a Famous Scientist:

    Q: Do you think there is life on other worlds?
    A: Oh, yes, almost certainly.

    Q: Do you think intelligent life exists on other worlds?
    A: Statistically, I think it is quite likely.

    Q: Do you think interstellar travel is possible?
    A: I think it will be in perhaps our distant future, say in several thousand years.

    Q: So, if life on Earth is about 4 billion years old, and the universe is about 14 billion years old, there could be vast numbers of civilizations in our galaxy ahead of us by the few thousand years required to be able to travel between stars?
    A: Yes, the statistics indicate that such might indeed be the situation.

    Q: So what do you think when you hear about people seeing what they think might be alien spaceships, you know, UFOs?
    A: They are all either crazy, or hoaxers, or they are unfamiliar with common astronomical phenomena like meteors, or the planets.

    Q: But surely in this day and age, when almost everyone is familiar with common astronomical phenomena, who could report something totally different as a UFO?
    A: Simple. They are the ones that are crazy, or hoaxers.

  • billswift

    I intend to be alive and active in 100 years. My point was that with increasing knowledge, our capabilities in 100 years will likely be such that we would not be vulnerable to an attack, especially one mounted across interstellar distances. There is risk, but it has nothing to do with anything we may be sending out now. The risk is that something is already on the way, either having launched when it detected our signals or just by random bad luck. See Ringo & Taylor’s “Von Neumann’s War” for a recent fictional depiction.

  • http://cob.jmu.edu/rosserjb Barkley Rosser

    While the huge number of star systems out there with planets makes it highly likely that there is life elsewhere, and some form of intelligent life somewhere out there, the lack of any pickup of anything looking like an intelligent transmission by the long-running SETI project is not very encouraging about there being much of the latter anywhere nearby in our galaxy, or even pretty far away in our galaxy. Somebody might be listening to us, but they do not seem to be sending anything out on their own. Given how difficult it is to get life going, and then to get multi-cellular life going, intelligent life out there may in fact be very scarce.

    Also, as long as Einstein remains correct and the speed of light is an essential limit on velocity, interstellar travel remains very unlikely or difficult.

    Of course, if one wishes to accept that perhaps there are civilizations much higher than ours, able to overcome the speed-of-light limit, and even some kind of galactic or inter-galactic civilization that maintains a higher order as in so many sci-fi series, then it would not be illogical to have had an outbreak of visitations after 1945, coinciding with the big outbreak of UFO sightings, not all of which have been explained. In 1945 we humans set off explosive nuclear weapons, something that a higher-order interplanetary civilization would presumably keep track of in “developing” planetary civilizations; indeed, that is the theme of the original “The Day the Earth Stood Still.”

  • Tim Fowler

    Not sure I agree with this, but here is one argument that worrying about aliens might not be “bad guy bias” but rather a very reasonable worry (not something we should spend every day in fear of; more like the worry that we might be wiped out by an asteroid or comet hitting the earth without alien intervention):

    http://sites.inka.de/mips/reviews/TheKillingStar.html