Category Archives: Disaster

Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if S became possible only through some bizarre fluke, and a strategy might still improve our chances even if we remained almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
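
A minimal sketch of the underlying arithmetic, with entirely hypothetical numbers (the function and all inputs are illustrative, not from the argument above): if the overall survival rate F is tiny, the success rate of any commonly adopted strategy is pinned near F, while only a fluke-rare strategy can have a success rate near 1.

    # Hypothetical sketch: adoption_rate * w + (1 - adoption_rate) * baseline
    # cannot exceed the observed overall survival rate, which bounds the
    # success rate w of any strategy S.
    def max_success_rate(overall_survival, adoption_rate, baseline_survival=0.0):
        return min(1.0, (overall_survival
                         - (1 - adoption_rate) * baseline_survival) / adoption_rate)

    F = 1e-6  # hypothetical: almost no civilization colonizes the stars
    print(max_success_rate(F, adoption_rate=0.5))   # common S: w <= 2e-06
    print(max_success_rate(F, adoption_rate=1e-7))  # fluke-rare S: w can be 1.0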


Nuclear winter and human extinction: Q&A with Luke Oman

In Reasons and Persons, philosopher Derek Parfit wrote:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

1. Peace

2. A nuclear war that kills 99% of the world’s existing population.

3. A nuclear war that kills 100%

2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater… If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.

The ethical questions raised by the example have been much discussed, but almost nothing has been written on the empirical question: given nuclear war, how likely is scenario 3?

The most obvious path from nuclear war to human extinction is nuclear winter: past posts on Overcoming Bias have bemoaned neglect of nuclear winter, and highlighted recent research. Particularly important is a 2007 paper by Alan Robock, Luke Oman, and Georgiy Stenchikov:  “Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences.” Their model shows severe falls in temperature and insolation that would devastate agriculture and humanity’s food supply, with the potential for billions of deaths from famine in addition to the direct damage.

So I asked Luke Oman for his estimate of the risk that nuclear winter would cause human extinction, in addition to its other terrible effects. He gave the following estimate:

The probability I would estimate for the global human population of zero resulting from the 150 Tg of black carbon scenario in our 2007 paper would be in the range of 1 in 10,000 to 1 in 100,000.

I tried to base this estimate on the closest rapid climate change impact analog that I know of, the Toba supervolcanic eruption approximately 70,000 years ago.  There is some suggestion that around the time of Toba there was a population bottleneck in which the global population was severely reduced.  Climate anomalies could be similar in magnitude and duration.  Biggest population impacts would likely be Northern Hemisphere interior continental regions with relatively smaller impacts possible over Southern Hemisphere island nations like New Zealand.
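
To make the structure of the estimate explicit, here is a back-of-the-envelope sketch in Python; only the 1-in-10,000 to 1-in-100,000 range comes from Oman's quote above, and the other inputs are hypothetical placeholders, not anyone's estimates:

    # Only Oman's range (1e-5 to 1e-4) comes from the text; p_war and
    # p_winter_given_war are hypothetical placeholders.
    p_war = 0.01               # hypothetical: chance of large-scale nuclear war
    p_winter_given_war = 0.5   # hypothetical: chance a war yields the 150 Tg scenario
    low = p_war * p_winter_given_war * 1e-5    # lower end of Oman's range
    high = p_war * p_winter_given_war * 1e-4   # upper end of Oman's range
    print(f"P(extinction via this path) ~ {low:.0e} to {high:.0e}")  # 5e-08 to 5e-07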

Luke also graciously gave a short Q & A to clarify his reasoning:

Continue reading "Nuclear winter and human extinction: Q&A with Luke Oman" »


911 Puzzling

Someone bent my ear again on 911 conspiracy theories, and I've had jigsaw-puzzle-solving fun digging through the details.  Also, I feel we should consider evidence for even pretty crazy-sounding claims when the evidence offered meets high enough standards.  To his credit, physicist Steven Jones has published papers meeting such standards:

I conclude the twin towers probably held big chunks of high-tech pyrotechnic materials quite uncommon in office buildings.  And a few hundred pounds of this stuff spread around the pillars of a single floor might well bring down a tower.

BUT, I am unpersuaded by claims that plane crashes could not have induced the towers falling as they did, the sounds heard, the warnings voiced, etc. (E.g., hear him and him.)  Aside from the above findings, the match between simple theory and observation seems about as close as we should expect, given this complex and unusual situation; it would be crazy not to expect a few anomalies between simple predictions and what we saw.

Continue reading "911 Puzzling" »


The Thing That I Protect

Followup to: Something to Protect, Value is Fragile

"Something to Protect" discursed on the idea of wielding rationality in the service of something other than "rationality".  Not just that rationalists ought to pick out a Noble Cause as a hobby to keep them busy; but rather, that rationality itself is generated by having something that you care about more than your current ritual of cognition.

So what is it, then, that I protect?

I quite deliberately did not discuss that in "Something to Protect", leaving it only as a hanging implication.  In the unlikely event that we ever run into aliens, I don't expect their version of Bayes's Theorem to be mathematically different from ours, even if they generated it in the course of protecting different and incompatible values.  Among humans, the idiom of having "something to protect" is not bound to any one cause, and therefore, to mention my own cause in that post would have harmed its integrity.  Causes are dangerous things, whatever their true importance; I have written somewhat on this, and will write more about it.

But still – what is it, then, the thing that I protect?

Friendly AI?  No – a thousand times no – a thousand times not anymore.  It's not thinking of the AI that gives me strength to carry on even in the face of inconvenience.

Continue reading "The Thing That I Protect" »


Investing for the Long Slump

I have no crystal ball with which to predict the Future, a confession that comes as a surprise to some journalists who interview me.  Still less do I think I have the ability to out-predict markets.  On every occasion when I've considered betting against a prediction market – most recently, betting against Barack Obama as President – I've been glad that I didn't.  I admit that I was concerned in advance about the recent complexity crash, but then I've been concerned about it since 1994, which isn't very good market timing.

I say all this so that no one panics when I ask:

Suppose that the whole global economy goes the way of Japan (which, by the Nikkei 225, has now lost two decades).

Suppose the global economy is still in the Long Slump in 2039.

Most market participants seem to think this scenario is extremely implausible.  Is there a simple way to bet on it at a very low price?

If most traders act as if this scenario has a probability of 1%, is there a simple bet, executable using an ordinary brokerage account, that pays off 100 to 1?
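
As a minimal sketch of the arithmetic behind that question (illustrative numbers only; fees, collateral costs, and counterparty risk over thirty years are ignored, and all would matter in practice):

    # If the market prices the scenario at probability p, a fair bet pays
    # about 1/p per dollar staked; it has positive expected value whenever
    # your own probability q exceeds p.  Illustrative numbers only.
    def expected_gross_return(market_p, my_p):
        payoff = 1.0 / market_p   # fair odds implied by the market price
        return my_p * payoff      # expected gross return per dollar staked

    print(expected_gross_return(market_p=0.01, my_p=0.05))  # 5.0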

Why do I ask?  Well… in general, it seems to me that other people are not pessimistic enough; they prefer not to stare overlong or overhard into the dark; and they attach too little probability to things operating in a mode outside their past experience.

But in this particular case, the question is motivated by my thinking, "Conditioning on the proposition that the Earth as we know it is still here in 2040, what might have happened during the preceding thirty years?"

Continue reading "Investing for the Long Slump" »


Alien Bad Guy Bias

The Bad Guy Bias applies to Earth signals to aliens.  From the NYT:

The makers of the new movie “The Day the Earth Stood Still” have arranged for it to be beamed into space on … the same day the movie opens here on planet Earth. … Dr. Shostak, who was a consultant for the new movie … [says] there are some people, he acknowledges, who might worry that broadcasting “The Day the Earth Stood Still” could be inimical to our interests. He added, “I think that if these people are truly worried about such things, they might best begin by shutting down the radar at the local airport.”

Shostak is right; compared to intentional signals, unintentional signals are a million times larger:

There are three large-dish instruments in the world that are currently employed for doing radar investigations of planets, asteroids and comets: ART (Arecibo Radar Telescope), GSSR (Goldstone Solar System Radar), and EPR (Evpatoria Planetary Radar). Radiating power and directional diagram of these instruments is so outstanding that it also allows us to emit radio messages to outer space, which are practically detectable everywhere in the Milky Way. This dedicated program is called METI (Messaging to Extra-Terrestrial Intelligence) …

Continue reading "Alien Bad Guy Bias" »


You Only Live Twice

“It just so happens that your friend here is only mostly dead.  There’s a big difference between mostly dead and all dead.”
        — The Princess Bride

My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that:  At the point where the current legal and medical system gives up on a patient, they aren’t really dead.

Robin has already said much of what needs saying, but a few more points:

• Ben Best’s Cryonics FAQ, Alcor’s FAQ, Alcor FAQ for scientists, Scientists’ Open Letter on Cryonics

• I know more people who are planning to sign up for cryonics Real Soon Now than people who have actually signed up.  I expect that more people have died while cryocrastinating than have actually been cryopreserved.  If you’ve already decided this is a good idea, but you “haven’t gotten around to it”, sign up for cryonics NOW.  I mean RIGHT NOW.  Go to the website of Alcor or the Cryonics Institute and follow the instructions.

Continue reading "You Only Live Twice" »


The Bad Guy Bias

Shankar Vedantam:

Nations tend to focus far more time, money and attention on tragedies caused by human actions than on the tragedies that cause the greatest amount of human suffering or take the greatest toll in terms of lives. … In recent years, a large number of psychological experiments have found that when confronted by tragedy, people fall back on certain mental rules of thumb, or heuristics, to guide their moral reasoning. When a tragedy occurs, we instantly ask who or what caused it. When we find a human hand behind the tragedy — such as terrorists, in the case of the Mumbai attacks — something clicks in our minds that makes the tragedy seem worse than if it had been caused by an act of nature, disease or even human apathy. …

Tragedies, in other words, cause individuals and nations to behave a little like the detectives who populate television murder mystery shows: We spend nearly all our time on the victims of killers and rapists and very little on the victims of car accidents and smoking-related lung cancer. "We think harms of actions are much worse than harms of omission," said Jonathan Baron, a psychologist at the University of Pennsylvania. "We want to punish those who act and cause harm much more than those who do nothing and cause harm. We have more sympathy for the victims of acts rather than the victims of omission. If you ask how much should victims be compensated, [we feel] victims harmed through actions deserve higher compensation."

This bias should also afflict our future thinking, making us worry more about evil alien intent than unintentional catastrophe. 


Beyond the Reach of God

Followup to: The Magnitude of His Own Folly

Today’s post is a tad gloomier than usual, as I measure such things.  It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me.  Those readers sympathetic to arguments like, "It’s important to keep our biases because they help us stay happy," should consider not reading.  (Unless they have something to protect, including their own life.)

So!  Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future’s vulnerability – a reluctance to accept that things could really turn out wrong.  Not as the result of any explicit propositional verbal belief.  More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo, "everything is definitely all right"), and others would say that it’s a thing necessary for mental health.

But we don’t live in that world.  We live in the world beyond the reach of God.

Continue reading "Beyond the Reach of God" »


The Magnitude of His Own Folly

Followup to: My Naturalistic Awakening, Above-Average AI Scientists

In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.

In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn’t just find the right fitness metric for an evolutionary algorithm.  (Previously he had been very impressed with evolutionary algorithms.)

And the one said:  Oh, woe!  Oh, alas!  What a fool I’ve been!  Through my carelessness, I almost destroyed the world!  What a villain I once was!

Now, there’s a trap I knew better than to fall into –

– at the point where, in late 2002, I looked back to Eliezer1997‘s AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they "really would have done".

When I finally saw the magnitude of my own folly, everything fell into place at once.  The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together.  There wasn’t a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid.  I already knew how.

And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.

It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when – I understood in the same moment of realization – my ego ought to be taking a major punch to the stomach.  I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.

Continue reading "The Magnitude of His Own Folly" »
