The case for dangerous testing

In 1983, NASA was planning to bring Martian soil samples back to Earth. Contaminating Earth with alien organisms was a concern, but engineers at the Jet Propulsion Laboratory had devised a "safe" capsule re-entry system to avoid that risk. However, Carl Sagan was opposed to the idea and

explained to JPL engineers that if they were so certain […] then why not put living Anthrax germs inside it, launch it into space, then [crash the capsule back to earth] exactly like the Mars Sample Return capsule would.

The engineers helpfully responded by labeling Sagan an alarmist and extremist. But why were they so unwilling to do the test, if they were so sure of their system? The answer, probably, is that they feared that if the test failed, their careers would be over and they would have caused a catastrophe. But an out-of-control Martian virus, no matter how unlikely, would have been just as catastrophic. That vague threat, however, didn't concentrate their minds the way the specific example of anthrax did.

Imagine for a moment that those engineers had been forced to do Sagan’s test. Fear of specific disaster would have erased their overconfidence, and they would have moved from ‘being sure that things will go right’ to ‘imagining all the ways things could go wrong’ – and preventing them. The more dangerous the test, the more the engineers would have worked to overcome every contingency.

Similar sorts of dangerous testing should be mandatory for anything we need to be absolutely safe. 'Bomb-proof' nuclear reactors should be hit with high explosives in the open countryside, cyanide should be poured into the river above 'infallible' purification plants, rocks should be fired at the space shuttle before take-off. The more spectacular the consequences of failure would be, the more we can trust the engineers' promises that they've thought of everything.

Addendum: In cases where direct testing isn’t possible (such as anti-terrorist security), we can often imitate the anthrax test by getting rid of all secondary security measures. Scrap immigration controls and no-fly lists and tell the designers of airport security machines that it is certain that terrorists will be boarding US planes regularly, and that they have to deal with that fact. Publish encrypted versions of military strategy to ‘danger test’ the encryption. Stop all security checks on low-level employees at sensitive locations to ‘danger test’ the other security measures.

Basically, if there is a low-grade security measure helping to protect a vital secret, then that measure should be scrapped and the high-grade measures should stand on their own. The lack of extra protection would be the equivalent of the 'dangerous test' for those measures, forcing security designers to expand their imagination.

  • http://profile.typekey.com/jhertzli/ Joseph Hertzlinger

    What would have been the effects of such a policy had it been carried out in the past? I’m sure we would not have had the Titanic. I suspect we also would not have had fire.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    The Titanic is commonly blamed on engineers, but was built in an era of cutthroat competition between ocean liners. Early iron steamships – where the engineers got their full head to take whatever precautions they liked without worrying about shaving one penny more than their competitors – were actually much safer. The part about “Guaranteed not to crash!” was marketroid hype.

    Most “engineering” disasters are management disasters, though there are exceptions. I never heard the Tacoma Narrows bridge blamed on pointy-haired bosses. Still, if you’re going to cite an example of engineering overconfidence, cite Castle Bravo or something, not the Titanic.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

The law and economics literature is the right place to look for answers to this sort of question. How careful an analysis one should do of any risk should depend on how large the harm from a failure is, which varies from case to case. So rather than set some arbitrary rule about how much testing should be done, it is better to make sure they have the right incentives to do the appropriate amount of testing. Appropriate incentives can be set by the legal liability regime. The right liability rule for any one area of behavior depends on who has what ability to prevent harm, and how easily the courts can determine whether the right kinds of precautions were taken.

  • Doug S.

    Well, it would be very expensive to run such a test (launching something into space isn’t cheap), even if you decided that such a test would be an appropriate way to convincingly demonstrate that the capsule was safe.

  • Constant

    Dangerous testing would presumably motivate pointy-haired bosses as well – anyone up the decision and responsibility hierarchy.

  • http://yorkshire-ranter.blogspot.com Alex

The point of testing is that you arrange things so as to try different contingencies under conditions that minimise the bad consequences. Presumably, you'd suggest that (say) Cisco should push out updates to all the Internet routers in the world without doing any tests first – that way, they'd be more careful.

  • Stuart Armstrong

    I’m sure we would not have had the Titanic. I suspect we also would not have had fire.

    I’m only suggesting this sort of testing for things we need to be “absolutely safe” – where one single failure would be a disaster. I don’t suggest we do so for anything where we can accept a level of risk (whether the Titanic qualified for needing to be “absolutely safe” is an ethical and economic question, not an engineering one).

  • Stuart Armstrong

    Well, it would be very expensive to run such a test (launching something into space isn’t cheap), even if you decided that such a test would be an appropriate way to convincingly demonstrate that the capsule was safe.

    If there was a risk (as some scientists thought) of an alien organism starting a pandemic, then the cost is trivial. And the dangerous test is not really a test at all – it’s there to make sure that all proper tests have been done beforehand, and all possibilities of failure considered.

  • Stuart Armstrong

    So rather than set some arbitrary rule about how much testing should be done, it is better to make sure they have the right incentives to do the appropriate amount of testing. Appropriate incentives can be set by the legal liability regime.

    But the fact that there was a risk of contamination from a Martian organism should have been all the incentive JPL engineers needed to be totally sure of themselves (I doubt they would have been more afraid of alien pandemic + legal liability than simply an alien pandemic).

    However their reaction to Sagan’s proposal suggested they still weren’t totally sure of themselves. They were overconfident that things would go right, and it took a specific image (anthrax loose on the world through their fault) to make them doubt this.

Overconfidence is especially a problem when the risk is very low, and when the risk is very low for reasons other than the security of the design (in this case, the fact that there probably aren't dangerous organisms on Mars). Dangerous testing gives them (engineers and pointy-haired types) a specific image of what could go wrong, and puts the entire onus on their system alone. Both help overcome overconfidence.

  • http://zbooks.blogspot.com Zubon

    With even a small risk of grey goo, what would be the proper test for self-replicating nanotechnology? There are occasional scientific experiments that have some infinitesimal chance of destroying all the matter in the universe or at least all life on Earth. What additional tests might we have done before fiddling with strange matter?

The space capsule example seems like an easy case because there is an obvious safe way to test it (with something other than anthrax). We do a great many things that bear some tiny risk of eliminating our species (or more), and I cannot see how you test most potential rents in the universe without the risks inherent in making them.

    There are claims that global warming presents an existential threat for humans. Are there dangerous tests we should be applying there?

  • Stuart Armstrong

    Zubon, your comments are good, and I can’t come up with particularly good ideas for dangerous testing of grey goo and strange matter. Probably the best is to hand the developed technology over to terrorists.

Speaking of terrorists and testing, I've added an addendum to my post for dealing with terrorism and other similar issues where a direct "dangerous test" is not possible.

Global warming is more nebulous and political, so a specific dangerous test is hard to imagine. A bastardised version of dangerous testing might be to require either a massive increase or decrease of greenhouse gases, with no in-between allowed. So those who argue against greenhouse gas reductions would have to be much more certain of their case. But that isn't really the same thing.

  • Stuart Armstrong

    cite Castle Bravo or something, not the Titanic.

    I like your idea. It’s done.