A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.
Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and moving the brain onto a computer, though extremely difficult, might still be the easier option.
This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, is harder still.
Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:
Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.
The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.
Advancing our understanding of how viruses are transmitted is important work. The more we know, the better we may be able to block transmission. However, it is a fallacy to consider every and any experiment fair game. Creating an agent more deadly than exists in nature falls into this category.
If it becomes “legitimate” to mutate a deadly virus we will see an explosion in this type of research. There are many more avian than human influenza viruses. If this controversial work is allowed to continue and more labs are going to be involved, the risk of an accidental release of a mutated H5N1 virus increases exponentially.
Accidents do happen. We need look no further than the re-emergence of the H1N1 virus in 1977, after a 20-year hiatus. A group of US scientists investigating the 1977 outbreak concluded that it leaked out of a Russian lab that was working on a live-attenuated H1N1 virus vaccine.
Historical data are not encouraging, either. Between 1978 and 1999 there were more than 1,200 incidents in which people were infected from BSL-4 labs. Since 1999, lab workers have been killed by numerous microbes, including Ebola and the Sars respiratory virus.
Scientists have a moral responsibility to speak up and question the fundamental wisdom, the ethics and the social advisability of conducting such research. This includes questioning the scientific rationale for research of “dual-use concern”, even if that means taking on the powers that be or making themselves unpopular.
This is why it is so important to maintain the moratorium on H5N1 research that involves dangerous experiments to see “what it would take” for the virus to become airborne – and therefore as transmissible from one person to another as the seasonal flu.
Is the promise of better vaccines worth the risk of accidental release? Or the risk generated by developing techniques that a dangerous but intelligent lunatic might appropriate in the future? Humans are fragile, and so destructive applications of this science can easily run ahead of the helpful ones. Regulation of biotech research has to recognise that.
Even in the best of times, it takes months to years to develop and scale up production of enough vaccine to protect more than a small number of people. If such a disease were already spreading quickly, the resulting panic would make this a much slower process. Meanwhile, the recent research which produced a new, highly virulent and contagious H5N1 strain wasn’t even performed in the most secure containment facilities.
Unfortunately, preventing disaster requires our ‘defences’ to beat back new threats not just most of the time, but every time. This is pretty difficult to begin with, and becomes even more so the more rapidly new innovations appear.