One In A Billion?

At CATO Unbound this month, David Brin’s lead essay makes two points:

  1. We probably shouldn’t send messages out to aliens on purpose now, and we more surely shouldn’t let each group decide for itself whether to send.
  2. The lack of visible aliens may be explained in part via a strong tendency of all societies to become “feudal”, with elites “suppressing merit competition and mobility, ensuring that status would be inherited” and resulting in “scientific stagnation.”

In my official response at CATO Unbound, I focus on the first issue, agreeing with Brin, and responding to a common counter-argument, namely that we now yell to aliens far more by accident than on purpose. I ask if we should cut back on accidental yelling, which we now do most loudly via the Arecibo planetary radar. Using the amount we spend on Arecibo yelling to estimate the value we get there, I conclude:

We should cut way back on accidental yelling to aliens, such as via Arecibo radar sending, if continuing at current rates would over the long run bring even a one in a billion chance of alerting aliens to come destroy us. And even if this chance is now below one in a billion, it will rise with time and eventually force us to cut back. So let’s start now to estimate such risks, and adapt our behavior accordingly. (more)
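The threshold in that conclusion falls out of simple break-even arithmetic: cut back once the expected loss from yelling exceeds the value we get from it. A minimal sketch, with dollar figures that are my own illustrative assumptions rather than numbers from the essay:

```python
# Back-of-the-envelope break-even risk for "accidental yelling".
# Both dollar figures below are illustrative assumptions, not from the essay.

annual_value_of_radar = 2e6      # assume ~$2M/yr spent on (and so value gained from) planetary radar
value_of_civilization = 2e15     # assume a ~$2 quadrillion valuation of civilization's future

# Cut back if p * value_of_civilization > annual_value_of_radar, i.e. if the
# annual chance of fatally alerting hostile aliens exceeds the ratio:
break_even_p = annual_value_of_radar / value_of_civilization
print(break_even_p)  # prints 1e-09
```

Any similar spend-versus-stakes ratio lands in the same ballpark; the point is that when the downside is everything, even one-in-a-billion probabilities are decision-relevant.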

As an aside, I also note:

I’m disturbed to see that a consensus apparently arose among many in this area that aliens must be overwhelmingly friendly. Most conventional social scientists I know would find this view quite implausible; they see most conflict as deeply intractable. Why is this kind-aliens view then so common?

My guess: non-social-scientists have believed modern cultural propaganda claims that our dominant cultures today have a vast moral superiority over most other cultures through history. Our media have long suggested that conflictual behaviors like greed, theft, aggression, revenge, violence, war, destruction of nature, and population growth pressures all result from “backward” mindsets from “backward” cultures.


    I’m actually curious as to what aliens could throw at us that we won’t be able to defend against in say 100-200 years…

    Really the only thing I can think of in a “hard sci-fi” scenario (no FTL, deflector shields or quantum torpedoes) is a relativistic kill vehicle, and I’m not even 100% sure that would work. About the only reason I can think of for them to attack would be paranoia (which may be a good reason and can still be part of an advanced and otherwise peaceful civilization, but then again, they know that if they stay quiet we won’t find them, so why should they be afraid of us?). Even for colonization purposes (in which case using a relativistic kill vehicle would defeat the purpose) it’s probably easier for them to just terraform planets closer to their homeworld.

    As to why people have started believing in friendly aliens, well, there is some merit to the idea that aggressive cultures are more likely to self-destruct before they can colonize space and advanced aliens may simply not have much to gain from attacking Earth: there’s no resource out here that’s not available out there. Yes, there are people who believe that all violence will cease with technological advancement and they are of course mistaken (paranoia and conflicting interests won’t just disappear) but I don’t think that’s the main reason for people not to be afraid of aliens.

    • Sigivald

      I’m sure it’d work (the RKV, assuming a hard-SF scenario), though again the motive thing is what’s lacking, as you say.

      Because I’m not at all sure what you could do to stop it, in that scenario.

      Going at say 0.95c it’d be hard to detect before it arrived, no? And at a serious planet-cracking mass it’d be nigh-impossible to deflect enough, and breaking it up would be both hard to do and ineffective.

      IF they have the energy and patience to get, oh, a five-mile rock going that fast, there’s sweet-FA you can do about it by the time you notice, even if you have as much energy.

      Because you don’t have the time to push your own rock to 0.95c to smack it…
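Sigivald’s detection worry can be put in numbers. Even in the best case, where we spot the light of the launch the instant it reaches us, a 0.95c projectile trails its own light by only about 5% of the trip time. A rough sketch, with the 100-light-year distance chosen arbitrarily for illustration:

```python
# How much warning before a 0.95c kill vehicle arrives, assuming the best
# case: we detect the light of its launch the moment that light reaches us.
# The 100-light-year distance is an arbitrary illustrative assumption.
distance_ly = 100.0   # assumed distance to the launching civilization
v_fraction_c = 0.95   # RKV speed as a fraction of light speed

light_arrival_years = distance_ly            # light covers 1 ly per year
rkv_arrival_years = distance_ly / v_fraction_c
warning_years = rkv_arrival_years - light_arrival_years
print(round(warning_years, 1))  # prints 5.3
```

So even a launch a full century away leaves only about five years to deflect or disperse a planet-cracking mass, which is the intuition behind there being little one can do in time.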

      • IMASBA

        Yeah, but any advanced civilization would occupy more than one planet, and some space stations too; an RKV doesn’t kill civilizations, it makes them angry. I also don’t think there’d be five-mile RKVs. Something smaller could be dispersed by beam weapons (so most of the debris doesn’t collide with Earth). Or maybe we could harness exotic physics: placing an impassable (so still no FTL) wormhole in the RKV’s path would neutralize it, or we could randomly vary the gravitational fields in our solar system slightly (moving asteroids around and such) so they can’t aim their RKV accurately enough. More simply, we could put up a screen of dust and debris if we knew which direction the RKV was coming from.

      • Dan Browne

        Would it occupy more than one planet? Maybe it makes more sense for an advanced civilization to occupy *only* space. If this were true, then maybe planet-bound creatures aren’t a threat? After all, space is very large. I also suspect that any advanced species are likely to be machines and not organic, further making me tend towards guessing that we wouldn’t have much of an overlap.

        On the other hand, maybe machine intelligences would want to wipe us out. Gregory Benford writes some good scifi on the topic in his Galactic Center series.

      • IMASBA

        Machines would occupy space stations, biological creatures would occupy planets and space stations because there wouldn’t be enough room on space stations alone for all of them. Spreading across multiple planets is insurance against extinction and also makes off-world mining easier.

    • Mariano

      A Nicoll-Dyson laser would be impossible to anticipate.

  • Robert Koslover

    Allow me to hypothesize that the actual problem may be a little different. Distances between advanced civilizations on different worlds appear to be very, very great. Costs of traveling between such worlds appear to be very, very great. But communications are enormously faster and cheaper than conveying physical objects or beings. So communications with aliens will necessarily precede alien visitation by very, very long times. Does this mean we are safe? Interestingly, it does not.

    Consider the threat of dangerous/hostile communications! E.g., imagine an alien civilization communicating some kind of advanced “information virus” to us. After all, human history already shows that there really is nothing more dangerous than certain massively-popular, yet extremely self-destructive, social, religious, or political ideas. Scientific/technical knowledge can also be very dangerous in the wrong hands.

    Following such deadly exposure to new and lethal ideas and/or knowledge, we could very well actively liquidate ourselves, millennia before the physical arrival of any aliens and/or their robotic agents. Consider how much danger we (i.e., humans) are already in from our own nutty ideas and knowledge. What if aliens provided simple instructions, to millions of people, on how to cheaply and easily build technologically-advanced destructive devices (or viruses, toxins, etc.) in our garages, at low cost, any one of which could eliminate most/all life on Earth? Surely, it wouldn’t take too long for some human to do just that, given the know-how. No need to visit us to destroy us!

    • Meh

      The question then is why they would want to destroy us if they don’t plan on using our material resources. If communication is cheaper than visiting or waging war, then scientific cooperation might be more useful to both sides than destruction of the other. For instance, each side could perform detailed analysis of the local astronomical region and send the results to the other, for mutual gain in insight. The impulses to steal, conquer, enslave etc. only make sense if physical proximity to the same resources is possible.

      • Sigivald

        That, and it’s unclear how an “information virus” would be communicable at all between species so (presumably) radically different, unless, barring FTL travel, the long time gaps involved gave them time to figure out how we work by inference from our communications.

        Remember that an alien without enough knowledge of life on Earth to send out “designs for a killer virus” or the like would pretty much have to be here to get that knowledge; you can’t infer it from TV signals.

        And if they’re that close, and hostile, they can toss rocks at us, far more effectively and faster.

        Relative non-threat.

      • IMASBA

        My thoughts exactly. Perhaps sending instructions on how to build an antimatter bomb in your garage could actually be dangerous, but then again governments are listening too and might use the same knowledge to enforce bans on such technology or come up with counters.

      • Robert Koslover

        I think you are right, for the most part. I think the initial communications, at least, would be mutually beneficial. And in fact, I do not agree with Robin that we should limit such communications. But at the same time, I do think those communications are potentially very dangerous, possibly even terminal for humanity. But I’m willing to take that risk, in part because I consider it unavoidable anyway, at least in the sufficiently long term if advanced alien civilizations exist within communication ranges.

      • Lord

        I think the greater risk is inadvertent: say, sending their genetic blueprint for local replication, which is just better adapted than anything here; or instructions for a gateway for FTL travel, at which point we just become a resource colony for expansion; or instructions for a simple but extremely powerful technology, whether it has a mind of its own, is controllable by them, or is just something we may accidentally destroy ourselves with.

    • RobinHanson

      Executing opaque instructions sent by aliens is also dangerous, and should be considered separately as a risk.

  • Tige Gibson

    Wow. Maybe it’s time to start trying to overcome the bias in favor of libertarianism.

    • Robert Koslover

      Off-topic and nonsense. The world, and most of world history, is heavily-biased toward authoritarianism. A little more libertarianism should be welcomed. Besides, libertarians don’t fence people in. You are already free to move/defect to nearly any existing authoritarian/totalitarian utopia (e.g., Iran, Saudi Arabia, North Korea, Cuba, the new ISIS caliphate, etc…) that you wish. Just swear fidelity to the local despot and give them all your worldly goods! But if/when you do go, bear in mind that it’s often a one-way choice. Authoritarian regimes are notorious for (among other things) not letting people leave. So go and have a good time. Or, if you prefer, move to one of the currently dying economies of the more “progressive” European countries. Bribe enough officials to get by, or learn how to take advantage of their generous (but running out of money) welfare systems, etc. Bye.

      • Tige Gibson

        This article, just like the previous few, is linked to libertarianism. Your response has nothing to do with either. I don’t know why you are bringing up oppressive states and authoritarianism at all, unless in your view anyone who isn’t a Nazi sympathizer must concede to libertarianism as the only valid choice. You are clearly suffering from some extreme bias towards an extreme ideology.

        As an extreme ideology, libertarianism should not be the leading subject on a site focused on overcoming bias unless something is severely wrong here.

  • Dave Lindbergh

    If this catches on, I suppose that would explain the Fermi Paradox.

    Realistically, how are you going to prevent people deciding for themselves? You can pass laws (fines for pointing high-powered lasers at the sky), but they’re not very enforceable – the technology is widely disseminated already. At best you can reduce the number of high-power messages sent (maybe by one or two orders of magnitude); it’s not clear that would make any significant difference.

    If hyper-paranoid aliens that snuff out all potential competitors on discovery exist, presumably they’ve been listening with great care for a long time, from many dispersed locations (they’re hyper-paranoid, after all). So the relativistic bullet is already en route and our actions from this point seem unlikely to change much.

    • RobinHanson

      Transmitters of the scale of Arecibo aren’t the sort of thing you can carry in your pocket. It is 305 meters wide and pushes a megawatt.

    • blogospheroid

      “the relativistic bullet is already en route”

      So much horror in such few banal words.

  • mcw0933

    “My guess: non-social-scientists have believed modern cultural propaganda claims that our dominant cultures today have a vast moral superiority over most other cultures through history. Our media have long suggested that conflictual behaviors like greed, theft, aggression, revenge, violence, war, destruction of nature, and population growth pressures all result from “backward” mindsets from “backward” cultures.”

    My take on this is that non-social-scientists operate in a domain that is predominantly meritocratic. They tend to eschew political or sociological influences that detract from science’s enlightened neutrality. As such, they import similar aims and goals onto the concept of advanced alien species, assuming that such beings have by necessity subjugated any political impulses to nobler aims.

    More “Federation”, less “Inhibitors”? But perhaps that’s just a slight rephrasing of your point, Robin.

    • RobinHanson

      Yes they believe their own propaganda claiming that their academic disciplines are also vastly morally superior to most other organizations in most other cultures.

  • Stephen Diamond

    Why does the risk of destruction automatically outweigh the hope of receiving immense help? [Loss aversion?]

    • lump1

      On Earth, when civilizations encounter technologically inferior aboriginals, they tend to do them more harm than good, even if they don’t have explicitly bad intentions.

      But here is a more serious worry: Did you ever consider that the solution to the Fermi paradox is that somewhere in our galaxy there is a sterilizer civilization who scans the stars for signs of intelligence, and then sends some sort of probe to snuff it out?

      They might not even be straightforwardly sinister. On certain assumptions, their actions would be utilitarian. One of these assumptions is that if multiple civilizations become interstellar, their colonization spheres will eventually intersect, and since they will be competing for more or less the same resources, this is likely to generate conflict on a massive scale. If likely massive (galactic-scale) conflict can be prevented by an action with a comparatively tiny cost in utility (the destruction of a single solar system inhabited by intelligent creatures), then the preventative action could be seen as obligatory, even by relatively familiar terrestrial value systems. And who knows how freaky aliens might approach normative questions about things like us!

      So yeah, I don’t think we should be looking around for trouble.

      • Cahokia

        The Fermi Paradox suggests that civilized extraterrestrials are very rare.

        Why would an intelligent species extinguish one of the few other intelligent species in the galaxy?

        However self-interested and unethical, it seems likely they’d want to preserve some humans, for their labs, wildlife preserves, menageries, amusement parks, etc.

        Even if we accept the analogy to European contact with aboriginals, most of these peoples did not actually go extinct. And their descendants enjoy much higher living standards than they would have had without contact. So, the risk would be that humanity might suffer a major demographic crisis, not extinction.

      • Stephen Diamond

        On certain assumptions, [emphasis added] their actions would be utilitarian.

        Then why are you partial to the idea that the utilitarian assumptions of this technologically superior civilization are inferior to our utilitarian assumptions?

        [I see making these proposals as an insidious argument for restricting liberty now for the most speculative of reasons.]

      • Cahokia

        Yes, Robin needs to explicate his ethical basis for prioritizing human species-hood over the interests of sentient beings in general.

      • lump1

        Fair point. Maybe the utilitarian thing to do is to shout now, so that the aliens can find us and wipe us out sooner, while we are still few and defenseless. That certainly would be a lesser catastrophe than if we were wiped out later, when there are more of us and we have advanced tools that allow us to go down fighting (causing collateral damage to the exterminators).

        (This comment was initially meant to be snarky, but upon re-reading, it’s starting to seem weirdly reasonable.)

      • IMASBA

        Now you’re placing their utility above ours, why? It is each civilization’s own responsibility to not let their population grow beyond what their resources can sustain. The universe is a big place, if you absolutely need to take those few inhabited planets from someone else to feed your own people then you’re doing something wrong.

      • IMASBA

        I don’t think it would be for their wildlife reserves, but if there are only a few civilizations out there they might indeed simply enjoy the cultural diversity humans would add (even if it’s just through radio or laser communication), just like we value diversity in ecosystems. I mean, what else but “the simple things” is there to admire for a civilization that has all imaginable comforts and technology? Even if there is some limited hypothetical potential for conflict (why should there automatically be conflict? If each civilization has 100 colony worlds, it’s as safe from extinction as it’s ever going to get in one galaxy), it would be a very rash thing to just eliminate entire species and their ecosystems because of it. We don’t have to worry about displacement or infectious diseases: contact might never become physical, and if it does, they have nothing to gain from displacing us, and their microbes could probably not adapt to our ecosystem. In the off chance that they can, the aliens could just give us the knowledge to deal with their microbes (and that’s assuming we wouldn’t have developed such knowledge on our own even 100-200 years from now).

        I can see how there could be socioeconomic upheaval following a rapid influx of alien knowledge, but then again a united world with a basic income could probably handle it and that’s not an entirely unrealistic thing to have 100-200 years from now either.

  • Sigivald

    What’s the chance of “trying to hide from aliens” meaning we get slammed by an asteroid and all die?

    I bet it’s somewhat higher than one in a billion.

    (Also, per Mr. Lindbergh’s hypothesis, what’s to stop the Ultimate Super-powerful Paranoid FTL-capable aliens from just sending out probes to every star in the galaxy?

    You can’t hide from that, without being as super-powerful as they are.

    I’m much more inclined to a view like Lem’s in Fiasco – it’s far more likely that there’s just a lot less intelligent, technological life out there, meaning we’ll be lucky to ever have a chance at even hearing about it, let alone meeting it.

    More likely, I mean, than “they’re gonna come kill us if we don’t stop using big radars to map the solar system”.)

    • RobinHanson

      The possibility of other ways to die does not mean we shouldn’t try to avoid dying this way.

      • Dave Lindbergh

        Yes, but his point is that there may be tradeoffs – using radars to find dangers vs. risk of detection.

        This boils down to comparing various tiny probabilities, most of which we can only guess at over several orders of magnitude, against each other.

        I’m not against a little effort going into the enterprise – maybe we can reduce our risks a tiny bit this way.

        But given our lack of knowledge, it seems unlikely that (a) we’ll get it right and (b) even if we do, it’ll change the odds by very much.

        So I’d put this very low on my list of things to worry about.

      • RobinHanson

        We use telescopes to find asteroids, not radar.

      • Dave Lindbergh

        OK, but there are surely other costs to be weighed against potential benefits.

      • Stephen Diamond

        Is it a moral or prudential imperative to avoid long-term human extinction, even at the hands of a superior species?

        The moral issues are probably unfathomable, and the consequences are too distant in time (numerous generations in the future) for our personal prudential sense to have serious relevance.

    • Dave Lindbergh

      Agreed – if we seriously contemplate hyper-paranoid aliens, then we have to assume they’d send self-replicating probes to most/all solar systems to listen for potential competitors.

      So Robin’s proposal addresses the sub-sub-sub-scenario of aliens who are so paranoid that they shoot first and talk later, so powerful that they _can_, yet who haven’t invested the (small) resources needed to detect low-power transmissions.

      That seems to me so unlikely that it isn’t worth much attention – we have far bigger risks to worry about.

  • TheBrett

    It’s more than a little premature to even begin trying to assign odds on how frequently we might contact civilizations, never mind how frequently they might be hostile (and what that would mean). Libertarians don’t generally suggest that we curb exploratory efforts when the risks are unknown, do they?

    . . . On a side-note, it’s so fascinating to me that astronomers and physicists keep arguing over convoluted behavioral reasons why aliens haven’t contacted us or visited us, while the biologists tend to just say, “Duh, it’s because intelligent alien life is probably super-rare, considering how incredibly rare it is on Earth”.

    • RobinHanson

      The time to assign odds is when you have a decision to make. Which we do here.

      • TheBrett

        There’s nothing to base the calculation on. We haven’t even confirmed a single rocky planet with potential biosignature gases yet.

      • Stephen Diamond

        There’s always something on which to base the probability. But it should take more than these tiny and dubious probabilities to institute measures which (no one can doubt) restrict liberty.

  • Andrew_M_Garland

    Humans conquer one another for material resources and services (slavery). We destroy the next village to acquire its land, materials, manufactured goods, slaves, and to extract tribute.

    If we knew of a civilization 4+ light years away, what motivation would we or they have to spend the effort to go so far to collect stuff which is everywhere in the universe? It isn’t even worth it to preemptively conquer other civilizations, fearing their eventual advance. The more advanced, the less desire to go great distances to collect stuff. We don’t mount invasions to collect African baskets. Only children take some joy commanding ants.

    The universe is a violent place. Complex life may only recently have a chance. For example:
    === ===
    So violent is the GRB phenomenon that it has been put forward as an explanation of the Fermi Paradox. If advanced extraterrestrial civilizations exist, why have no traces of them been found?

    James Annis, an astrophysicist at Fermilab, near Chicago, has speculated that such events could sterilize entire galaxies, wiping out life-forms before they had the chance to evolve to the stage of interstellar travel. “If one went off in the Galactic center, we here two-thirds of the way out of the Galactic disk would be exposed over a few seconds to a wave of powerful gamma rays.” It would be enough to exterminate every species on Earth. Even the hemisphere shielded from immediate exposure by the planet’s mass would not escape, since there would be lethal indirect effects such as the demolition of the entire protective ozone layer.
    === ===

    • IMASBA

      GRBs (gamma ray bursts) don’t always have to be fatal to all life on a planet. It’s almost certain some life would survive in the oceans and underground; in fact, this could even be multicellular life. GRBs also don’t just wipe out galaxies: they have very narrow beams, and those beams can be stopped by any number of objects before they reach a planet. Still, this could set back evolution for tens or hundreds of millions of years.

      I do agree though that there simply is not much of an incentive to conquer alien civilizations, though I have to point out that if something like a relativistic kill vehicle turns out to be feasible, you might see civilizations wiping out others out of paranoia.

      I also agree that complex life may be a recent phenomenon. The universe is 14 billion years old, and only for about 5 billion years has there been an abundance of planets containing heavier elements, which is about the age of our solar system. So sure, maybe if the dinosaurs hadn’t been wiped out there could have been a saurian civilization 30 million years ago, but that’s not even that long on a cosmic scale, and it’s perfectly possible Earth has actually been comparatively lucky when it comes to the number of mega-extinctions.

    • Robert Koslover

      Re: “Humans conquer one another for material resources and services (slavery). We destroy the next village to acquire its land, materials, manufactured goods, slaves, and to extract tribute.” and “Only children take some joy commanding ants.” Oh, how I wish those reasonable assertions were actually true. You are conveniently ignoring the religious belief system of roughly 1/7th of the world’s current population, which happens to follow a holy book that literally commands them to conquer all non-believers, subjugate them, and assimilate them. Pauses in this process are permitted, but there is no end to this holy mission until ALL non-believers have been conquered, converted, or (in some interpretations) pay a tax and accept their subjugated slave-like status. Combine that kind of belief system with Star Trek-level technology and the result is a Star Trek-style, Borg-like, interstellar threat. Logic has no effect on true believers. Do not comfort yourself by believing that such fanatics could never build, or acquire through their conquests, a highly-technological society.

      • Andrew_M_Garland

        Violent religions were/are promoted by leaders to confiscate goods from others. They want taxes, tribute, and slaves, and there is wanton killing along the way as a display of strength.

        The Borg is fictional.

        Human technology in most places is not productive enough to end the desire to steal from the next village. That is a problem for maybe another 200 years.

        Civilizations as advanced as the US and some others create more wealth by staying home rather than colonizing. Plus, if you don’t want their land, there is no profit in killing the people occupying it.

      • Ilya1981

        The Borg did not strike me as non-proselytizing. In fact, it is the very opposite: they try to assimilate everyone.

        A civilization based on fanatical non-proselytizing religion may indeed present danger to other civilizations, if the former maintains high fertility/expansion that constantly pushes the aforementioned civilization against Malthusian constraints.

        To give a more concrete, real-life based example/thought experiment: if the Jewish Orthodoxy, for example, were high-tech, there is nothing to prevent them from trying to accomplish God’s promise to Abraham (“your descendants will be as numerous as the stars”) by converting it into a decree.

      • IMASBA

        Civilizations such as the one you describe would have a disadvantage in the rate of technological progress. Internal stability would also be a problem: there would be break-away sects and secularist movements popping up constantly. At least in humans, oppressive organized religion seems to go against our ingrained “forager” values and can therefore only be maintained through fear, ignorance and brutal violence; that’s never been a good recipe for a long-lasting, adaptable civilization. An alien species that is natively more geared to oppressive organized religion (a species that evolved with “farmer” values) could also be expected to have a slower rate of development and an increased probability of self-destruction (they would not have Buddhas, Gandhis or any enlightenment philosophers to calm things the F down).

        So to summarize, civilizations like the one you imagined should be comparatively rare, and the ones that do exist have only a short window to terrorize the galaxy before they self-destruct.

      • Ilya1981

        There *could* be civilizations that would self-destruct in the manner that you described. However, I am far from certain that *all* such civilizations would.

        Looking, again, at Orthodox Jewry of the Ashkenazi ethnic variety, it strikes me that *intentionally* breeding people for religiosity (obviously), civility (subject to the constraint of not making them into lotus-eaters) and high IQ, while discouraging fertility of specimens on the less representative side of the spectrum for such traits, actually *may* make such civilizations more stable internally.

        Enlightened philosophers/leaders like Gandhi or Christ would merely be temporary aberrations on the path of the evolution of said societies. For these societies, it’s possible to imagine emergent super-high population densities (possibly also populating the underground and the oceans) before they hit a critical mass of know-how to escape into the cosmos.

      • IMASBA

        I don’t know of any fundamentalist groups that had technology beyond the iron age and weren’t/aren’t in some way parasites on a larger, more secular civilization. Ashkenazi Jews aren’t fundamentalist; Orthodox Jews are, and they are parasites on the Israeli welfare state that’s being upheld by secular Jews. So any way you turn it, there’s always a large part of the population that’s not happy with the fundamentalists, and that part is more adaptable and understands technology better.

      • Ilya1981

        My original comment was regarding the possibility of a highly-cohesive, eugenic, and non-proselytizing religion/culture being the bedrock of a high-tech civilization. The example I gave was more on the illustrative side. Keep in mind that what allowed current secular Jews to develop their smarts was this exact thing: sticking to their economic niches, their non-proselytizing religion and procreating.

        As to Orthodox Jews now in Israel: you may be right (to an extent, for now at least). However, long-term, the experiment that is running right now in Israel may bear fruits that are contrary to what you’re saying. Religions are not always so rigid when it comes to survival. When the time comes and they become the majority, things will *have* to be at least somewhat different, or everyone perishes. There are already reports of some Haredim joining the military. Plus, it’s always possible to have 5% of your people devoted to science and engineering, while the other 95% mainly studies Torah and keeps the social glue active.

        We live in the age of automation, after all, but IMO it’s better to be praying and studying as opposed to robbing convenience stores and burning cars.

      • Cahokia

        A civilization adhering to a fanatical non-proselytizing religion would be a more serious threat.

    • Andrew_M_Garland

      To IMASBA: According to the link, there is a type of burst that is spherical, not limited to a beam.

      • IMASBA

        Actually, the jury’s still out on the angle of short GRBs; they may be wider than narrow beams, but “spherical” is stretching it. And how common are mergers of neutron stars, or of a neutron star and a black hole, anyway?

      • Andrew_M_Garland

        If you don’t like the link I gave, then provide another one.

      • IMASBA

        Your link doesn’t even mention spherical GRBs. I got my information from other sources and what I remember from my astronomy courses at uni.

    • Dan Browne

      This is a really good one. Most of the mass of the galaxy orbits tightly around the center, and most of it is also within range of GRBs. Maybe it’s only out here that the intervals between mass-extinction GRBs are long enough for intelligence to have had a chance to evolve.

  • Doug

    I have an outlandish idea I’ve been toying with; please poke holes in it and tell me why it’s implausible. What are the chances that Earth is currently in some sort of cosmic nature preserve? At first glance it seems unlikely. Any very advanced civilization that colonized the galaxy should be capturing a very large fraction of the energy output of almost all the stars. Yet astronomically it appears almost every star is emitting at full intensity. Ergo it seems very unlikely that our galaxy is heavily colonized.

    Take this one step further. What if not only are most stars inside Dyson spheres, but our very own star is as well? Assume a Dyson sphere encloses our entire solar system. The interior of the sphere could project images of the extrasolar night sky to Earth. Humans would observe an unspoiled galaxy that bears no relation to the actual state of things. For the Dyson sphere’s owners, only a small proportion of the sun’s energy would be wasted on preserving the planets. This may be seen as a trivial cost to preserve/study life on Earth. Okay, now someone explain to me why this is absurd.

    • iontom

      That’s the Galactic Zoo hypothesis – I think the absurdity mainly comes from the sheer scale of the engineering involved and the material required. To build a sphere with a radius of 100+ AU would take more matter than is present in the solar system to begin with.

      A similar but more workable hypothesis is the simulation argument: we don’t see anybody because we’re really in a simulation, and the rest of the universe is fabricated as we explore it.
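
      The material objection above can be made concrete with a back-of-envelope sketch (the areal density and planetary-mass figures are rough assumptions chosen to be generous): even a shell of just 1 kg per square meter at 100 AU would outweigh all the planets in the solar system combined.

      ```python
      import math

      AU = 1.496e11          # meters per astronomical unit
      r = 100 * AU           # sphere radius from the comment above

      area_m2 = 4 * math.pi * r ** 2   # shell surface area, ~2.8e27 m^2
      areal_density = 1.0              # kg/m^2: an implausibly thin shell (assumed)
      shell_mass_kg = area_m2 * areal_density

      planetary_mass_kg = 2.7e27       # rough solar-system mass excluding the Sun

      print(shell_mass_kg > planetary_mass_kg)  # True: the planets alone don't suffice
      ```

      And a 1 kg/m^2 shell could not plausibly capture and re-radiate a star’s output anyway, so the real material shortfall is far worse than this lower bound suggests.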

  • zarzuelazen

    Just a short point about the Fermi paradox:

    Actually, we *do* see two very unusual (currently unexplained) elements on a cosmological scale: dark matter and dark energy. We are fairly sure that the visible universe is only about 5% of total mass-energy, that around 70% is dark energy, and 25% is dark matter. So it’s possible that aliens *have* already utilized most of the mass-energy (95%) – and the remaining 5% is just remnants.

    If aliens *had* colonized most of the observable universe and were utilizing the mass-energy with extremely high efficiency, then most of it *would* appear dark, exactly what is observed. The remaining 5% of visible matter in its ‘natural’ state could arise from the fact that alien colonization isn’t quite uniform – and Earth just happens to be in one of the small patches of the universe that was still in its original natural state.

    • Dan Browne

      It would likely be outputting heat as infra-red since the majority of physical processes emit heat. Dark matter doesn’t emit heat.

    • IMASBA

      Like Dan Browne said, there would have to be residual heat (to answer blogospheroid’s question: the radiation would come from so many sides and be reflected/re-emitted by so many things that it would be visible from very, very far away). Also, the distribution of dark matter doesn’t match your scenario: dark matter is concentrated in spheres around galaxies. Why would a civilization move matter 20,000 light-years into the nothingness above the plane of a galaxy, and also do this uniformly?

  • Cahokia

    There have been many serious objections to your case in favor of prohibiting accidental yelling to aliens – this is definitely worth another post to discuss further.

  • blogospheroid

    Good article, Prof. Hanson. I liked the quantitative bit you introduced.

  • mbri

    I never thought of SETI in quite the same way after I read this book:

  • Anonymous

    The usual argument against human extinction says that

    a) humans are good enough to create a positive future and
    b) if humans go extinct, intelligent life won’t have a future

    But if intelligent aliens exist, they might be just as good as humans and human extinction would no longer have the same relevance. If it’s aliens vs. humans, how do I even know I want the humans to win?

    • IMASBA

      Do you consider preventing new lives being created as ethically equivalent to killing existing lives? If not, then the answer should be pretty clear: humanity should be preserved, simply because it already exists. Aliens do not need our “lebensraum” to survive, and we do not need theirs, so it’s not an us-versus-them scenario. Just keep expanding until you hit the borders of another civilization and leave each other in peace; there’s plenty of space for each civilization out there, as long as they’ve mastered the ancient art of birth control.

      • Stephen Diamond

        Do you consider preventing new lives being created as ethically equivalent to killing existing lives?

        You’re assuming that all (intelligent?) lives are equal. Technologically superior aliens are (one reasonably supposes) apt to be intellectually superior. [So that, if they see value in sheer “destruction,” it’s probably there.]

        I wonder at what point in hominid evolution you think an apeman’s life became the “equal” of ours.

      • IMASBA

        “You’re assuming that all (intelligent?) lives are equal.”

        Yes, why not? Being more intelligent doesn’t mean their lives are richer or more meaningful (and remember, more advanced doesn’t mean more intelligent: many prehistoric hunter-gatherers were just as intelligent as you and I; aliens would have to alter themselves to actually be more intelligent). Holding intelligence in particularly high esteem is just a cultural idea, btw. Also, why do people keep thinking that we have to die for them to survive? If they just use their version of a condom, nobody has to die, and we can develop to become just as “valuable” as them, whatever your metric of “value” is. Again, if you see killing as worse than not producing potential offspring, and their civilization is being responsible with the resources it has outside of Earth, there is no reason for conflict.

      • Cambias

        What a creepy, repulsive idea! Technological superiority = intellectual superiority = greater moral value? Congratulations, you’ve just argued that the SS were justified in murdering Ukrainian peasants. After all, Germany was more technologically advanced, right?

      • Stephen Diamond

        We know what Ukrainians were, whereas we’re trying to predict what an unknown alien species will be like.

        More to the point, who do you favor in humanity’s war against the mosquito?

      • IMASBA

        Stephen, to answer your question about the apeman:

        I view being consciously aware of life and death as a binary thing, so no sliding scale. I don’t think we should be killing chimps or whales to make room for more humans. One thing that could make me uncomfortable is the issue of lifespan: if aliens live far longer than humans, then a world of them can have at least the same utility but with fewer people dying. That doesn’t mean I think such a species should wipe us out; they should give us time (and if they’re really so concerned with utility, they should give us their life-extending technology) to figure out how to live as long as they do.

      • Stephen Diamond

        I agree that a binary criterion could make “terrestrial defensism” ethically viable.

        But what’s so important about being aware of life and death? And why are you so sure that a more intelligent species won’t find a criterion that renders yours trivial?

        [Are you convinced that chimps and whales are aware of life and death?]

    • Robert Koslover

      Perhaps simply because YOU are one of the humans? Being on the human-side of such a conflict is not like choosing which sports team to support. To support enemy aliens against humans is treason against your own biology. By the way, have you read Ender’s Game?

      • Stephen Diamond

        To support enemy aliens against humans is treason against your own biology.

        Here is how this sci fi speculation becomes politically reactionary today. If we’re bound to loyalty to our species, aren’t we bound similarly to our nations?

      • IMASBA

        How? In the scenarios painted here, all the humans are asking for is to be allowed to live, while the aliens are being genocidal. Also, we might just not have much common ground with the aliens in social interactions, and we wouldn’t be able to procreate with them, so life among aliens would simply be less enjoyable for us than life among humans (what are discriminatory cultural stereotypes among humans would be actual facts of life between different species: you’re not being racist when you call a Klingon aggressive; his species really did evolve to have heightened aggression and a thirst for violence). I also think your “bound to loyalty to our species” assumes an unlimited loyalty that was never implied here: if a faction of humans were acting genocidally against a faction of peaceful aliens, I don’t think Robert Koslover has a problem with humans siding with the aliens.

      • Stephen Diamond

        I don’t think Robert Koslover has a problem with humans siding with the aliens.

        For me, the term “treason” settled the matter. Maybe he’ll say which is the correct interpretation.

      • Robert Koslover

        IMASBA understood me correctly. Sorry about the confusion. Notice that I didn’t refer to all aliens. I referred to “enemy aliens.” In such a case, all humans should take the human side.

      • Anonymous

        Treason is a loaded word, associated with betrayal of trust and oath-breaking. But since I never chose to be human, and in fact most humans value different things than I do, I see it as completely neutral. The question is: what are the aliens like?

    • Stephen Diamond

      That the question isn’t as obvious as most commenters seem to assume is shown (on the similar issue of whether to support a super-AI in its war with humanity) by Eliezer Yudkowsky’s calm switch of sides. [A proponent’s sincere belief in a cause is questionable when the proponent executes a 180-degree shift of position!]

      This obsession with digging up dubious “existential” risks (and ignoring that in these speculations, we can’t even rationally choose our own side) is an obvious candidate for a signaling explanation.

      [I think “weird” intellectuals who are inclined to feel at risk for group expulsion for their idiosyncrasy, use existential risk to signal that they are truly committed to the group’s ultimate well-being. Even the libertarians signal how they will sacrifice liberty for the sake of this ultimate common welfare!]

  • Robert H.

    If there is a xenocidal alien species out there, where are they? Humans are probably technologically and industrially capable of creating berserker von Neumann probes right now, albeit at the cost of a significant chunk of global GDP. That being so, it looks like our ability to destroy all life in the galaxy has come close on the heels of our ability to send out radio signals aliens will detect. But you’re telling me aliens are just sitting back, waiting to detect electromagnetic evidence of possible rivals before destroying them? It seems ludicrous.

    If Aliens wanted to destroy us and could destroy us, they would have found and destroyed us by now. Waiting to passively detect a civilization before eliminating it as a threat is like waiting to feel a bullet before shooting at a gunman.

    All of which raises the question of whether we should be spending trillions to destroy all other life in the galaxy, given the possible upside of avoiding our own extinction at the hands(?) of later evolving aliens.

  • Pingback: Alexander Kruel · Pascal’s wager: Better safe than sorry?

  • Stephen Diamond

    Perhaps one basis for the blatant anti-alien bias is “stories.” Little science fiction is written where good aliens deem it desirable to destroy humanity.

    • oldoddjobs

      “anti-alien bias” is too vague, would you please elaborate?

      • Stephen Diamond

        The assumption that in a conflict where aliens try to exterminate humans, victory for the human side is morally preferable.

      • oldoddjobs

        Oh, that idiotic bias

      • Stephen Diamond

        Maybe I’m the idiot, but I don’t share it. Does it express the mindset that “it’s us–end of story” or that the destruction of any (intelligent?) species is apt to be bad overall?

      • oldoddjobs

        Really? You don’t? So a race of aliens arrives tomorrow and starts incinerating all the humans….you remain philosophical about the whole affair and try to examine your biases? 😉

      • Stephen Diamond

        You still haven’t told me why not. The (probably true) claim that I would fight them like anyone else is distinct from whether I should.

      • oldoddjobs

        You’d be thinking “I should fight them to save all the things I care about”.

  • Lalartu

    I heard a claim that some radio telescope is able to detect a typical airport radar from 50 light years away. If that is true, then sending messages does not increase the danger at all.
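
    Even granting the claim, leakage and deliberate beams are not equivalent: for a fixed receiver, detection range scales as the square root of the transmitter’s effective radiated power (inverse-square law). A rough sketch, with both power figures as assumptions rather than measured values:

    ```python
    import math

    # Assumed effective isotropic radiated powers (EIRP), in watts:
    AIRPORT_RADAR_EIRP = 1e9   # a rough guess for an airport surveillance radar
    ARECIBO_EIRP = 2e13        # a commonly cited figure for Arecibo's planetary radar

    base_range_ly = 50  # the claimed detection range for the airport radar

    # Received flux falls off as 1/d^2, so detectable range scales as sqrt(EIRP):
    arecibo_range_ly = base_range_ly * math.sqrt(ARECIBO_EIRP / AIRPORT_RADAR_EIRP)
    print(arecibo_range_ly)  # thousands of light-years under these assumptions
    ```

    So a deliberate planetary-radar beam can plausibly be heard orders of magnitude farther than incidental radar leakage, which is why the two risks are not interchangeable.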

    • RobinHanson

      Wow – why not read two paragraphs into a post before you comment on it?

      • Lalartu

        If you mean “shut down all radars because aliens,” it’s just an obviously poor choice.

  • Mr Benn

    Kind Aliens Theory:
    One plausible explanation is that there are lots of alien worlds that know of our existence, but none will make themselves known to Earth until we have worked out how to live non-aggressively. It would obviously be in an alien world’s interest not to make contact with a planet that has nuclear weapons, space travel, and barely controlled aggression.

  • Robert H.

    Robin, can you think of a way to estimate how dangerous we are to extrasolar aliens? I think the key to estimating the risk of yelling at aliens is the gap between humans developing the ability to yell and humans developing the ability to harm aliens. If that gap is only one or two hundred years, no aliens are going to adopt a “passively listen and hope potential enemies yell at you” strategy. By the time you heard enemies over 200 light years away, it could be too late, especially if you factor in a response that would move at sublight speeds.

    I said in an earlier comment that I think we are already a threat — for a few trillion dollars I think we could get some h-bomb armed von Neumann probes going — but I am not an engineer and am not confident in that prediction.
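
    The listen-then-strike timing argument above can be sketched numerically (the yell-to-harm gap and the response speed are illustrative assumptions, not estimates from the post):

    ```python
    def response_lag_years(d_ly, response_speed_c=0.1):
        """Years between a civilization's first yell and an alien response
        arriving, for a listener d_ly light-years away whose reply travels
        at a fraction response_speed_c of light speed (assumed values)."""
        return d_ly + d_ly / response_speed_c

    gap_years = 200  # assumed gap between yelling ability and harming ability

    # Only listeners inside this radius can strike before the gap closes:
    max_useful_distance_ly = gap_years / (1 + 1 / 0.1)
    print(max_useful_distance_ly)  # ~18 light-years
    ```

    Under these assumptions, passive listening only pays off against civilizations within a few tens of light-years, which supports the point that a genuinely hostile species would search actively rather than wait.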

  • Pingback: Overcoming Bias : Slow Growth Is Cosmo-Fast