Category Archives: Disaster

Fighting a Rearguard Action Against the Truth

Followup to: That Tiny Note of Discord, The Importance of Saying "Oops"

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don’t matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality, rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.

But as our story begins – as the sky lightens to gray and the tip of the sun peeks over the horizon – Eliezer2001 hasn’t yet admitted that Eliezer1997 was mistaken in any important sense.  He’s just making Eliezer1997’s strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"…

…which means that Eliezer2001 now has a line of retreat away from his mistake.

I don’t just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"

I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.

And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn’t have to cough out his whole mistake in one huge lump.

If you think this sounds like Eliezer2001 is too slow, I quite agree.

Continue reading "Fighting a Rearguard Action Against the Truth" »


Horrible LHC Inconsistency

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I’m horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.
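A minimal sketch of that pump, with invented numbers (the per-swap fee and the exact portfolio size are mine, not the post’s): an unidentified draw from the portfolio is just a particular risk with its label hidden, so preferring the button to the former while preferring every one of the latter to the button lets a bookie charge a fee on each relabeling.

```python
import random

# Sketch of the money pump.  The agent prefers the known 1e-6 button to an
# *unidentified* draw from the portfolio, yet prefers any *identified* risk
# from that same portfolio to the button -- and an unidentified draw is just
# an identified risk with the label hidden.
N_RISKS = 1_000_000
FEE = 0.01  # arbitrary charge per swap

def agent_prefers(offered, held):
    """The post's (incoherent) stated preferences."""
    if offered == "button" and held == "unidentified draw":
        return True   # the button over a random portfolio draw
    if offered.startswith("risk #") and held == "button":
        return True   # any particular named risk over the button
    return False

loss = 0.0
holding = "button"
for _ in range(3):  # three turns of the pump
    # The bookie reveals a particular risk; the agent pays to take it.
    offered = f"risk #{random.randrange(N_RISKS)}"
    if agent_prefers(offered, holding):
        holding, loss = offered, loss + FEE
    # The bookie hides the label again ("one random risk from the
    # portfolio"); the agent pays to get the button back.
    holding = "unidentified draw"
    if agent_prefers("button", holding):
        holding, loss = "button", loss + FEE

print(f"after three loops: holding the {holding}, down ${loss:.2f}")
```

The dollars are beside the point; what the loop shows is that no single probability assignment over the portfolio can reproduce both stated preferences at once.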

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such… then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that’s taking into account my uncertainty about whether the anthropic principle really works that way.)
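To put a number on that impulse (a sketch with an illustrative prior, and setting aside whether anthropic updating works this way at all): twenty consecutive failures that each had a 50% mundane explanation multiply the odds of a world-destroying hypothesis by 2^20, about a million.

```python
# Sketch with an illustrative prior (not a real estimate).  H = "something
# world-ending makes every LHC firing fail": surviving observers then see
# failure every time, so P(20 failures | H) = 1, while the mundane
# hypothesis assigns the quantum coin P(20 failures | not-H) = 0.5**20.
prior = 1e-6
failures = 20

likelihood_ratio = 1.0 / 0.5**failures              # 2**20, ~1.05 million
posterior_odds = prior / (1 - prior) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(H | {failures} failures) ~ {posterior:.2f}")   # ~0.51: a coin flip
```

Which is exactly why, at 20 heads, betting on "heads again" starts to feel compelling even from a one-in-a-million prior.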

Even having noticed this triple inconsistency, I’m not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down, compared with using the same capital to worry about superhuman intelligence or nanotechnology.)


How Many LHC Failures Is Too Many?

Recently the Large Hadron Collider was damaged by a mechanical failure.  This requires the collider to be warmed up, repaired, and then cooled down again, so we’re looking at a two-month delay.

Inevitably, many commenters said, "Anthropic principle!  If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn’t be here!"

This remark may be somewhat premature, since I don’t think we’re yet at the point in time when the LHC would have started producing collisions if not for this malfunction.  However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all.  (Does this mean we can foresee executing a future probability update, but can’t go ahead and update now?)

As you know, I don’t spend much time worrying about the Large Hadron Collider when I’ve got much larger existential-risk-fish to fry.  However, there’s an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?"  This tells you how low your prior probability is for the hypothesis.  If a coin comes up heads only twice, that’s definitely not a good reason to believe it’s fixed, unless you already suspected it from the beginning.  But if it comes up heads 100 times, it’s taking you too long to notice.
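Concretely (a sketch, reading "fixed" as a coin that always lands heads): every observed head doubles the odds in favor of the fixed-coin hypothesis, so the count of heads at which you switch your bet reveals roughly the logarithm of your prior.

```python
import math

# Sketch: "fixed" is taken to mean the coin always lands heads, so each
# observed head multiplies the odds for "fixed" by 2.  The tipping point
# (even odds) therefore sits near log2 of your prior odds against.
def heads_to_even_odds(prior):
    prior_odds = prior / (1 - prior)
    return math.ceil(math.log2(1 / prior_odds))

for prior in (0.01, 1e-4, 1e-6):
    print(f"prior {prior:g}: even odds after ~{heads_to_even_odds(prior)} heads")
# prior 0.01 -> ~7 heads;  1e-4 -> ~14;  1e-6 -> ~20.
```

Run in reverse, that is the diagnostic: if it would take you 50 unexplained failures before you took an anthropic explanation seriously, you are implicitly claiming a prior for it somewhere near 2^-50.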

So – taking into account the previous cancellation of the Superconducting Supercollider (SSC) – how many times does the LHC have to fail before you’ll start considering an anthropic explanation?  10?  20?  50?

After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?


Schelling and the Nuclear Taboo

Thomas Schelling’s Nobel Lecture makes a point pretty similar to the one Eliezer made the other day.  Here are the first couple of paragraphs.

Continue reading "Schelling and the Nuclear Taboo" »


Baxter’s Flood

In Oxford a few weeks ago I picked up two science fiction books, Bear’s City at the End of Time, which was mostly disappointing mysticism, and Baxter’s Flood, which I came to greatly respect, at least until I learned of its sequel.

Flood is in the great "one assumption" hard science fiction tradition, making one implausible but hardly impossible assumption, and projecting its implications as faithfully as possible.  The one assumption here is that the vast quantities of water held in Earth’s mantle, far more than in its oceans, start seeping out about 2015.  At first ocean levels rise about a meter in five years, but the rate steadily grows 14% a year – a rate that could cover Everest in three decades or so. 
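As a back-of-the-envelope check (a sketch; I am reading "a meter in five years" as an assumed steady ~0.2 m/yr starting rate): compounding that at 14% a year, these rounded figures put Everest’s 8,848 m under water after roughly seven decades, so the novel’s faster timeline evidently assumes a steeper seep than the headline numbers alone.

```python
# Back-of-the-envelope projection from the figures quoted above, taking
# "a meter in five years" as an assumed 0.2 m/yr starting rate.
rate, level, year = 0.2, 0.0, 0     # m/yr, m, years from onset
EVEREST = 8_848                     # m

while level < EVEREST:
    level += rate
    rate *= 1.14                    # the stated 14%/yr growth
    year += 1
    if year % 10 == 0:
        print(f"year {year}: {level:8,.0f} m")

print(f"Everest submerged around year {year}")   # ~67 on these figures
```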

The book focuses on a few relatively rich and well-connected individuals, who go from denial to crisis management to more desperate measures.  They move out of flooded areas, and hitch their wagons to groups seeking higher ground in ways ranging from uncaring to horrific, justified in terms of saving what they can of civilization.  At each stage the poor and less well connected are seen drowning or floating off on makeshift rafts, presumably to their doom.  To say more I must give spoilers, which are below the fold.

Continue reading "Baxter’s Flood" »


Hiroshima Day

On August 6th, 1945, the world saw the first use of atomic weapons against human targets.  On this day 63 years ago, humanity lost its nuclear virginity.  Until the end of time we will be a species that has used fission bombs in anger.

Time has passed, and we still haven’t blown up our world, despite a close call or two.  Which makes it difficult to criticize the decision – would things still have turned out all right, if anyone had chosen differently, anywhere along the way?

Maybe we needed to see the ruins, of the city and the people.

Maybe we didn’t.

There’s an ongoing debate – and no, it is not a settled issue – over whether the Japanese would have surrendered without the Bomb.  But I would not have dropped the Bomb even to save the lives of American soldiers, because I would have wanted to preserve that world where atomic weapons had never been used – to not cross that line.  I can’t second-guess history up to that point; but the world would be safer now, I think, if no one had ever used atomic weapons in war, and the idea were not considered suitable for polite discussion.

I’m not saying it was wrong.  I don’t know for certain that it was wrong.  I wouldn’t have thought that humanity could make it this far without using atomic weapons again.  All I can say is that if it had been me, I wouldn’t have done it.


A Genius for Destruction

This is a question from a workshop after the Global Catastrophic Risks conference.  The rule of the workshop was that people could be quoted, but not attributed, so I won’t say who observed:

"The problem is that it’s often our smartest people leading us into the disasters.  Look at Long-Term Capital Management."

To which someone else replied:

"Maybe smart people are just able to work themselves up into positions of power, so that if damage gets caused, the responsibility will often lie with someone smart."

Continue reading "A Genius for Destruction" »


OK, Now I’m Worried

Nukes seem our biggest near-term disaster threat, and this worries me most:

The U.S. intelligence community "doesn’t have a story" to explain the recent Iranian tests.  One group of tests that troubled Graham, the former White House science adviser under President Ronald Reagan, were successful efforts to launch a Scud missile from a platform in the Caspian Sea. … Another troubling group of tests involved Shahab-3 launches where the Iranians "detonated the warhead near apogee, not over the target area where the thing would eventually land, but at altitude," Graham said. … Graham chairs the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack, a blue-ribbon panel established by Congress in 2001. … "That’s exactly what you would do if you had a nuclear weapon on a Scud or a Shahab-3 or other missile, and you wanted to explode it over the United States." …

Continue reading "OK, Now I’m Worried" »


Refuge Markets

The topic of global catastrophic risk seems silly to many, and my conference talk last Friday on refuges against human extinction seemed even sillier to some – Ron Bailey had fun comparing me to Dr. Strangelove, and Spiegel saw a colorful character.  Silly or not, however, refuges seem a cheap way to save humanity from worst-case disasters.

My talk went beyond my book chapter to reach a new height of silliness – I suggested refuge ticket markets.  Beyond my obvious need to be sillier-than-thou, I had another motive: to let prediction markets identify scenarios where catastrophe is a serious risk, and then advise us on how to avoid these scenarios.

You see, speculative markets have an obvious problem forecasting the end of the world, as no one is left afterward to collect on bets.  So to let speculators advise us about the world’s end, we need them to trade an asset available now that remains valuable as close as possible to the end.  Refuge tickets fit that bill.
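To see how a ticket price could carry risk information, here is a toy sketch (the seat value and ticket price are invented for illustration): a risk-neutral trader values a ticket at roughly the probability of a catastrophe in which the refuge matters, times the value of a seat, so an observed price backs out an implied probability.

```python
# Toy sketch: invented numbers, risk-neutral traders, and a ticket valued
# only for its use in a catastrophe.  Under those assumptions,
#   price ~ P(relevant catastrophe) * value_of_a_seat,
# so the market price reveals an implied probability.
def implied_risk(ticket_price, seat_value):
    return ticket_price / seat_value

# e.g. tickets trading at $1,000 against a seat worth $10M to its holder:
print(f"implied P(catastrophe) ~ {implied_risk(1_000, 10_000_000):.2%}")  # 0.01%
```

Conditional versions of the same ticket, valid only given some policy choice, would be the natural way to get the "advise us on how to avoid these scenarios" part.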

Continue reading "Refuge Markets" »


When (Not) To Use Probabilities

Followup to: Should We Ban Physics?

It may come as a surprise to some readers of this blog that I do not always advocate using probabilities.

Or rather, I don’t always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute.  If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute – even though the probabilities would be quite well-defined, if we could afford to calculate them.
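As a toy illustration of that cost (mine, not the post’s, with an arbitrary stand-in joint distribution): computing an exact posterior by brute force sums over 2^n states, so each added binary variable doubles the work.

```python
from itertools import product

# Toy illustration: exact evidential updates by brute force.  The joint
# here is an arbitrary positive function (normalization cancels in the
# ratio); the point is the 2**n sum, which doubles with every variable.
def posterior(n, joint, evidence):
    """P(x[0]=1 | evidence), summed over all 2**n assignments."""
    num = den = 0.0
    for x in product((0, 1), repeat=n):            # 2**n terms
        if any(x[i] != v for i, v in evidence.items()):
            continue
        p = joint(x)
        den += p
        num += p * x[0]
    return num / den

# 20 binary variables is already ~1M terms per query:
print(f"{posterior(20, lambda x: 1.0 + sum(x), {3: 1, 7: 0}):.3f}")
```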

So sometimes you don’t apply probability theory.  Especially if you’re human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning that don’t involve verbal probability assignments.

Not sure where a flying ball will land?  I don’t advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

Continue reading "When (Not) To Use Probabilities" »
