Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene, who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to the more standard ‘biological’ approaches to preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting. The reason is that evolution has left us with a large number of complex and interconnected molecular pathways that cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not by much, because we then ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things the body does. Throwing out the body and moving the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.

My instinct is to oppose restrictions on developing new technologies. Nonetheless, while biotech seems to be taking a long time to generate inventions that materially improve our lives, it does appear to have enormous potential to do damage. Research into the flu might generate some way to protect ourselves. But current research greatly increases the risk that we will accidentally release a deadly pathogen of our own making:

Advancing our understanding of how viruses are transmitted is important work. The more we know, the better we may be able to block transmission. However, it is a fallacy to consider every and any experiment fair game. Creating an agent more deadly than exists in nature falls into this category.

If it becomes “legitimate” to mutate a deadly virus we will see an explosion in this type of research. There are many more avian than human influenza viruses. If this controversial work is allowed to continue and more labs are going to be involved, the risk of an accidental release of a mutated H5N1 virus increases exponentially.

Accidents do happen. We need look no further than the re-emergence of the H1N1 virus in 1977, after a 20-year hiatus. A group of US scientists investigating the 1977 outbreak concluded that it leaked out of a Russian lab that was working on a live-attenuated H1N1 virus vaccine.

Historical data are not encouraging, either. Between 1978 and 1999 there were more than 1,200 incidents in which people were infected from BSL-4 labs. Since 1999, lab workers have been killed by numerous microbes, including Ebola and the Sars respiratory virus.

Scientists have a moral responsibility to speak up and question the fundamental wisdom, the ethics and the social advisability of conducting such research. This includes questioning the scientific rationale for research of “dual-use concern”, even if that means taking on the powers that be or making themselves unpopular.

This is why it is so important to maintain the moratorium on H5N1 research that involves dangerous experiments to see “what it would take” for the virus to become airborne – and therefore as transmissible from one person to another as the seasonal flu.

Is the promise of better vaccines worth the risk of accidental release? Or the risk generated by developing techniques that a dangerous but intelligent lunatic might appropriate in the future? Humans are fragile, and so destructive applications of this science can easily run ahead of the helpful ones. Regulation of biotech research has to recognise that.

Even in the best of times, it takes months to years to develop and scale up production of enough vaccine to protect more than a small number of people. If such a disease were already spreading quickly, the resulting panic would make this a much slower process. Meanwhile, the recent research that produced a new, highly virulent and contagious H5N1 strain wasn’t even performed in the most secure containment facilities.

Unfortunately, preventing disaster requires our ‘defences’ to beat back new threats not just most of the time, but every time. This is difficult to begin with, and becomes more so the faster new innovations appear.
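To see why the number of groups doing risky work matters so much, here is a toy probability model. The per-lab, per-year accident probability below is purely an illustrative assumption, not an estimate of real lab safety:

```python
# Toy model of cumulative release risk: if N labs each have a small,
# independent per-year probability p of an accidental release, the chance
# that at least one release happens somewhere over Y years is
# 1 - (1 - p)^(N * Y).

def p_at_least_one_release(p_per_lab_year: float, n_labs: int, years: int) -> float:
    """Probability that at least one lab has at least one release."""
    p_no_release_anywhere = (1 - p_per_lab_year) ** (n_labs * years)
    return 1 - p_no_release_anywhere

# Illustrative assumption: a 0.1% accident chance per lab per year.
for n_labs in (1, 10, 50):
    risk = p_at_least_one_release(0.001, n_labs, 30)
    print(f"{n_labs:2d} labs over 30 years: {risk:.1%} chance of a release")
```

Under these made-up numbers, even a one-in-a-thousand annual accident chance per lab compounds to roughly a one-in-four chance of at least one release across ten labs over thirty years. The risk grows with every additional lab, even if no single lab is careless.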

  • Yes, there are areas within biotechnology that are inherently more dangerous than most other fields.

    However, I do not see any feasible path to “relinquishment” which is not, at least under my own value system, close enough to collective suicide that I’d rather accept a large existential risk (better a 1-epsilon chance of death than a 100% chance of a state arguably worse than death…).  Irreducibly large projects, like some types of nuclear weapons development, can still be effectively regulated; but the vanishingly small cannot be without, at minimum, total surveillance of everyone by everyone.

    We may simply have to keep winning each race until we can spread out further than our worst creations can replicate.

    • Gulliver

      Perhaps not. Evolution has developed robust strategies against irrecoverable catastrophe despite the fact that every selfish gene on Earth is engaged in an eons-old arms race. Indeed, the only point in history when life on this planet nearly went extinct appears to have been the result of climate shift, not evolution, yet life prevailed because it could adapt.

      I would argue that looking for primarily top-down controls on biotechnology is a futile approach. And eschewing controls in the hope that we can win each race is similarly flawed. Our survival would depend on winning every time since we’d never know which loss would be the one we couldn’t survive, but we’d only have to lose big once. Statistics would get us eventually.

      If, on the other hand, we could develop bottom-up strategies for building up resistance to mistakes, we might stand a chance of surviving our own learning curve. Which is not to say that the cautionary principle should be thrown out, only that we should not rely on it exclusively. Ethics is good. Ethics plus soft failure modes is much, much better. Humans are biased to think in top-down hierarchies, which works for us in many of our social endeavors. But if we’re going to go toe-to-toe with the complexity of nature – and we are, because some of our seven billion will whether or not we do so personally – we’ll have to do it on nature’s terms…our terms simply aren’t sophisticated enough.

      • The whole problem is that it’s very, very difficult to ensure we have “soft failure modes”.  We should do what we can–you could say that that’s one of the most important races to win–but anyone connected to Earth’s biosphere is, in principle, vulnerable to a self-replicator released there.  (Note that if we ever become capable of deploying what Drexler calls an “active shield”, we are also capable of errors and malice at that scale…)

        The strategies developed by evolution against pathogens are robust against typical random variation, but they may not hold up against what humans become capable of designing.  Same reason many more species have gone extinct because of human activities than because of the activities of any closely related species.

      • Gulliver

        The same is true of computer malware, yet software replicators can be and are defended against. Unprecedented impact from the directed activity of a tool-designing species (humans) stems from the fact that nature adapts more slowly than a memetic culture. Similar imbalances should not be assumed within the activities of the memetic culture itself.

        I’m not arguing against directed activity or memetic adaptation…on the contrary, both will be essential aspects of any outbreak-monitoring system or active shield. Yet I stand by my observation that decentralized defenses are more robust than centralized controls, especially when dealing with self-replicating attacks.

        You hit on my own main concern, which is how to protect the biosphere, an arguably far more difficult task than protecting our own bodies and infrastructure. The only even remotely satisfactory answer I’ve been able to come up with is that we’ll have to instill the biosphere with a more agile defense able to keep up with memetic evolution.

        That’s where I think bioethics is essential. While it seems unlikely that we can protect against unregulated biohacking, we may be able to protect against the occasional loose cannon or rogue state, provided there are penalties against recklessness and incentives for caution so that the normal condition isn’t some sort of global melee.
        Consider terrorism. Modern metropolises are incredibly vulnerable to massive damage by relatively modest means. But because there are usually consequences for wrecking cities, actual incidents can be kept within a manageable scope. We’ll need new and less centralized controls for managing grey and green goos, but I have some tempered hope that the problem is not intractable provided outbreaks are not the status quo.

      • We have a lot more flexibility in dealing with computer malware:

        – We can disconnect computers from the Internet.
        – We can turn off computers.

        In contrast, we can’t “turn off” physics when we don’t like what is happening and wait until we have a patch ready.

        It looks like we’re essentially in agreement on decentralization.  Space colonization is the ultimate form of this, but there are smaller steps that can be taken to increase the probability of humanity surviving a biosphere disaster, while we do what we can to shrink the chance of such a disaster happening in the first place.

      • Gulliver

        Christopher Chang,

        Didn’t say the analogy doesn’t have limits, only that we can and do police a form of self-replicating artifact and not only by turning off the substrate.

        My own educated guess is that both space colonization and Drexlerian nanotech will take many decades if not centuries to come to fruition. But even without Drexlerian assemblers, there’s still plenty of room for replication disasters, various green goos being the most obvious end-game. As with the technologies themselves, I think we’ll be looking not at a single monolithic research pathway, but a convergence and cross-pollination  of many disparate approaches.

        There are two things of which I’m certain. Outright relinquishment is about as realistic and effective as sticking our heads in the sand, and paralyzing ourselves into doing nothing to protect ourselves and our planet would give extinction its best shot at us. That said, what we do must be tempered by prudence lest the cure prove deadlier than the disease.

        I regard neither our near-term survival nor our extinction as foregone conclusions.

  • There are two quite different issues here. 

    The first issue is the externality of bad bugs getting out – researchers working on them capture a much larger fraction of the gains if their research works out than of the harm if a bad bug gets out. The second issue is that for very complex systems it can be too easy to fool people into thinking you’ve made an improvement when you haven’t. Better-looking results get published more, changes may help watched variables but hurt unwatched ones, and long-term effects can be worse than short-term effects.

  • Guest

    I don’t have a strong opinion on the virus aspect, but I would strongly encourage playing around with human genetic engineering. Humans historically passed through a genetic bottleneck; there was a time when we nearly went extinct. This explains why we are still so similar, compared to the diversity of other globally distributed species inhabiting so many different environments. Furthermore, there are local maxima of biological design that are not globally optimal – e.g. our ability to suffer from certain types of stress or pain may be necessary in a primate template, but not optimal for all potential intelligent beings into which our descendants could evolve. There is an ethical element to this as well.

    Let’s remember not to give in to status quo bias, and let’s also remember that the more diverse humans become, the less likely it is that any pathogen poses a serious existential risk.

  • Carl Shulman

    People disproportionately put effort into helpful rather than harmful things, and actively repress the latter in most situations (state bioweapons programs being the big exception). So the intrinsic difficulty of breaking vs. making is only a small piece of the puzzle.

    Also, if we take “biotechnology” to include all plant and animal breeding, then it is very easy to get sustained positive effects (for our purposes) without very detailed understanding, just by tracking outcome data and digital DNA or pedigrees. Crop and livestock performance has been growing incredibly for a long time.
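    Carl’s point that selection on outcome data alone yields sustained gains can be sketched in a few lines of Python. The population size, selection fraction and noise level below are illustrative assumptions, not a model of any real breeding program:

```python
import random

# Toy truncation-selection model: a "breeder" who only observes outcomes
# (trait scores) and keeps the best performers each generation, with no
# model of the underlying genetics. All numbers are illustrative.

def breed(population, keep_frac=0.2, noise=1.0):
    """Keep the top fraction by measured outcome; each offspring is a
    randomly chosen parent's value plus recombination/mutation noise."""
    n_keep = max(1, int(len(population) * keep_frac))
    parents = sorted(population, reverse=True)[:n_keep]
    return [random.choice(parents) + random.gauss(0, noise)
            for _ in range(len(population))]

random.seed(0)
pop = [random.gauss(0, 1) for _ in range(200)]
start_mean = sum(pop) / len(pop)
for _ in range(30):
    pop = breed(pop)
end_mean = sum(pop) / len(pop)

print(f"mean trait value: {start_mean:.2f} -> {end_mean:.2f}")
```

    Ranking by outcome and re-breeding from the winners pushes the mean trait value upward generation after generation, with no mechanistic understanding of the trait required.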

  • Robert Koslover

    This kind of topic tends to generate comments equivalent to the classic line, “there are some things that man is not meant to know.”  And this is all part of the flaw in thinking that: (1) “we” (i.e., well-meaning reasonable people with a system of ethics in accordance with modern Western Civilization) can simply *choose* to not perform research in these areas, and that as a result, (2) potential accidents/mishaps/dangers can be avoided.  But history shows that that is simply not how science and technology develop.  Any/all subject areas *will* eventually be pursued by someone with both the means and a strong interest (whether motivated by altruism, power, money, fame, religious fanaticism, etc.).  So if “we” do not pursue this research, others will.  Meanwhile, to the extent that “we” believe ourselves to be in any way more trustworthy/responsible/mature/rational about handling such dangerous work, then “we” ought to make darn-well sure that “we” become the experts in it, so that “we” are fully ready to develop effective countermeasures to protect ourselves!  Sure, gaining a *little* knowledge is a dangerous thing.  But developing a *lot* of knowledge is the only practical option for achieving genuine security.

  • VV

    Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died.

    The reported cases are only those of people who became sick enough to be hospitalized. By that point, any form of influenza infection is quite dangerous.

    It doesn’t mean that 60% of people who come into contact with the H5N1 virus die. I suspect that most people just get a common cold or no symptoms at all.

    Sexual reproduction ensured that our cell surface proteins have a high degree of individual variety, which makes it difficult for any single strain of virus to efficiently infect everybody. (In fact, one of the hypotheses for the evolution of sexual reproduction is that this was the main selective pressure behind it.)

    Maybe it is possible to engineer some super-bug capable of killing anybody it comes in contact with, but it would be something specifically designed to do that, not something you can make by accident.

  • Barnley

    I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow.
    With respect to golden rice, I do not understand what the complaint is. Golden rice was developed many years ago, but progress in its widespread adoption and diffusion has been exceptionally slow because of anti-GM regulation, which is true of GM crops the world over. Golden rice is under a humanitarian use license, so intellectual property rights are not a barrier to its use.

    Perhaps you believe the development of Golden Rice was remarkably slow, but I don’t see how the eight years taken in the nineties could be considered remarkably slow. 

    When it comes to genetically modified plants at least, I cannot see why there is so much public fear and regulatory pressure compared to their traditionally bred counterparts. The purpose of traditional crop breeding is to modify a plant’s genetic makeup, just as molecular techniques do, only in a more precise fashion. I don’t believe the novelty of the process compared to traditional breeding (which can include the use of mutagenic chemicals and radiation) is substantial enough to justify the regulations. Regulations are based on the process, not outcomes, when it should be the reverse. Furthermore, traditionally bred crops face the same intellectual property rights issues.

    • Monsanto uses genetic modification to allow plants to withstand poisons that the plants otherwise wouldn’t survive. It’s not primarily used to increase the amount of vitamins in crops.

      There is a lot of commercial interest behind using GMOs the Monsanto way, but little commercial interest goes into using unlicensed vitamin-rich rice.

      Focusing on unlicensed vitamin-rich rice is a straw man if you want to understand the regulatory reasons for the anti-GM legislation that currently exists.

      • Barnley

        I explicitly made the point about golden rice because Robert stated that progress towards implementing it was “remarkably slow”. It was remarkably slow, as is well known, because of anti-GM regulations – which I believe blunts the point that Robert was trying to make.

        My wider point was that from a biological point of view I cannot understand the regulatory reasons behind anti-GM legislation with regards to GM crops when compared to traditional breeding techniques. 

        Monsanto was involved in the development of Golden Rice and it was one of the first companies that waived its patent rights to freely licence it.

        Many more genetically modified biofortified crops have been developed since golden rice. 

      • The fear and regulatory pressure don’t exist because of Golden Rice.
        If you focus on Golden Rice while ignoring other developments in genetically modified crops, you won’t understand the fear and regulatory pressure. There’s no reason why you should understand them when you focus only on Golden Rice.

      • Barnley

        I only mentioned Golden Rice because Robert argued its implementation was remarkably slow. If one’s point is to show that this was because of biological complexity, that is a poor example, because it was anti-GM regulations that slowed the implementation of Golden Rice.

        My wider point is that I do not understand the fear and regulatory pressure around GM plants from a biological perspective. As I said: “When it comes to genetically modified plants at least, I cannot see why there is so much public fear and regulatory pressure compared to their traditionally bred counterparts. The purpose of traditional crop breeding is to modify a plant’s genetic makeup, just as molecular techniques do, only in a more precise fashion. I don’t believe the novelty of the process compared to traditional breeding (which can include the use of mutagenic chemicals and radiation) is substantial enough to justify the regulations. Regulations are based on the process, not outcomes, when it should be the reverse.”
        Here’s another way to put it. We take a plant and, using a naturally occurring bacterium (nature’s own genetic engineer) that carries out the naturally occurring process of horizontal gene transfer, we artificially insert naturally occurring genes from other plants (or often the same plant) or from bacteria (such as those that confer herbicide resistance). Yet somehow, compared to the haphazard methods of irradiation and chemical mutagenesis, this method is feared and inordinate regulatory pressure is brought to bear upon it. Perhaps you could explain what I do not understand, and why you believe, if you do, that these burdensome regulations are necessary?


  • DonaldWCameron

    It can if you equate “complexity” with “complicated”. It can if you equate “simplicity” with “straightforwardness”.

  • richatd silliker

    “Pulling apart the many kludges evolution has thrown into existing organisms is difficult.”

    Are you suggesting here that evolution is a top down process? 

  • guest

    If history is any guide, then in the long run catastrophic plagues with disastrous consequences will arise naturally on a fairly regular basis. Sure, in the short run genetic engineering might make this somewhat more likely; but once we are able to deal with the generalized problem of germs in better ways than we can now, genetically modified germs won’t be an issue anymore.
