Does complexity bias biotechnology towards doing damage?

A few months ago I attended the Singularity Summit in Australia. One of the presenters was Randal Koene (videos here), who spoke about technological progress towards whole brain emulation, and some of the impacts this advance would have.

Many enthusiasts – including Robin Hanson on this blog – hope to use mind uploading to extend their own lives. Mind uploading is an alternative to more standard ‘biological’ methods for preventing ageing proposed by others such as Aubrey de Grey of the Methuselah Foundation. Randal believes that proponents of using medicine to extend lives underestimate the difficulty of what they are attempting to do. The reason is that evolution has led to a large number of complex and interconnected molecular pathways which cause our bodies to age and decay. Stopping one pathway won’t extend your life by much, because another will simply cause your death soon after. Controlling contagious diseases extended our lives, but not for very long, because we ran up against cancer and heart disease. Unless some ‘master ageing switch’ turns up, suspending ageing will require discovering, unpacking and intervening in dozens of things that the body does. Throwing out the body, and moving the brain onto a computer, though extremely difficult, might still be the easier option.

This got me thinking about whether biotechnology can be expected to help or hurt us overall. My impression is that the practical impact of biotechnology on our lives has been much less than most enthusiasts expected. I was drawn into a genetics major at university out of enthusiasm for ideas like ‘golden rice’ and ‘designer babies’, but progress towards actually implementing these technologies is remarkably slow. Pulling apart the many kludges evolution has thrown into existing organisms is difficult. Manipulating them to reliably get the change you want, without screwing up something else you need, even more so.

Unfortunately, while making organisms work better is enormously challenging, damaging them is pretty easy. For a human to work, a lot needs to go right. For a human to fail, not much needs to go wrong. As a rule, fiddling with a complex system is a lot more likely to ruin it than improve it. As a result, a simple organism like the influenza virus can totally screw us up, even though killing its host offers it no particular evolutionary advantage:

Few pathogens known to man are as dangerous as the H5N1 avian influenza virus. Of the 600 reported cases of people infected, almost 60 per cent have died. The virus is considered so dangerous in the UK and Canada that research can only be performed in the highest biosafety level laboratory, a so-called BSL-4 lab. If the virus were to become readily transmissible from one person to another (it is readily transmissible between birds but not humans) it could cause a catastrophic global pandemic that would substantially reduce the world’s population.

The 1918 Spanish flu pandemic was caused by a virus that killed less than 2 per cent of its victims, yet went on to kill 50m worldwide. A highly pathogenic H5N1 virus that was as easily transmitted between humans could kill hundreds of millions more.

If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either

A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.

While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than by any sound reason to doubt it.

A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.

While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or that the probability of changing the outcome is too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss it out of hand.

What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is it’s only an order of magnitude or two lower than the probability of a campaigner swinging an election outcome. You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.
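
To make the comparison concrete, here is a rough back-of-the-envelope sketch in Python. The specific figures (the election-swinging probability picked from the range above, and the relative sizes of the two payoffs) are illustrative placeholders, not estimates I would defend.

```python
# Rough expected-value comparison: campaigning vs. existential risk reduction.
# All figures are illustrative placeholders consistent with the ranges in the text.

p_swing_election = 1e-6        # chance one campaigner swings a presidential election
                               # (the post's range: somewhere between 1e-5 and 1e-7)
payoff_election = 1.0          # value of the better candidate winning, arbitrary units

p_avert_risk = p_swing_election / 100    # "an order of magnitude or two lower"
payoff_avert = payoff_election * 1e6     # "many, many orders of magnitude larger"

ev_campaign = p_swing_election * payoff_election
ev_risk_work = p_avert_risk * payoff_avert

print(f"Expected value of campaigning:    {ev_campaign:.2e}")
print(f"Expected value of risk reduction: {ev_risk_work:.2e}")
# With these assumptions the risk-reduction project wins on expected value,
# because the larger payoff more than offsets the lower probability,
# though the variance of the outcome is also far higher.
```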

To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example, and the existential risk reduction example, should we stop trusting expected value? I don’t see one.

Building in some arbitrary low probability, high payoff ‘mugging prevention’ threshold would lead to the peculiar possibility that for any given project, an individual with probability x of a giant payout could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff could be advised to go for it. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.
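
To see the inconsistency in miniature, here is a small Python sketch; the cut-off value and the probabilities are hypothetical, chosen only to straddle an arbitrary threshold.

```python
# Sketch of why a fixed low-probability cut-off gives inconsistent advice.
# The threshold and figures below are hypothetical.

MUGGING_THRESHOLD = 1e-7   # projects with success probability below this are dismissed

def advised_to_pursue(p_success: float) -> bool:
    """Naive rule: ignore any project whose success probability falls below the cut-off."""
    return p_success >= MUGGING_THRESHOLD

p_individual = 1e-8              # one person's chance of the giant payout
p_group = 100 * p_individual     # 100 people working on the same project

print(advised_to_pursue(p_individual))  # False: the individual is told to walk away
print(advised_to_pursue(p_group))       # True: the identical project passes for the group
# The same project and the same payoff get opposite advice,
# purely because of how many people happen to be pursuing it.
```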

The geoengineering double catastrophe

I recently attended the World Congress on Risk in Sydney, primarily to see some sessions on ‘global catastrophic risk’. There were some presentations on the ‘tail risk’ of climate change that made me think that I should take it more seriously as a catastrophic risk than I have over the last few years. Despite some promising signs to the contrary, I am pessimistic that we will have enough incentive to limit emissions individually, or be able to coordinate to limit emissions collectively. It looks as though we are on track to burn most of the easily accessed oil and gas, and much of the coal. If we do continue with ‘business as usual’ then I am told to expect temperature increases of at least 4 degrees Celsius over the next 100-200 years.

If temperatures rise that far then it will be very tempting to try to suppress them with geoengineering. One of the cheaper options is to release aerosols into the atmosphere. Another suggestion is to use chemicals or sea spray to seed clouds, or make them more reflective. A single country could afford to do this for the whole planet if it chose, which makes it much more likely that someone eventually will.

While this geoengineering might be better than nothing, a new paper (currently under review for publication) by Seth Baum, Tim Maher, and Jacob Haqq-Misra of the Global Catastrophic Risk Institute points out that it would leave humanity as a whole in a precarious position. Aerosols and sea spray gradually fall out of the atmosphere, so these geoengineering activities would need to be kept up continuously. If a disaster ever occurred that interfered with the project, for example a serious pandemic, then temperatures could start rising very quickly. This would lead to a second disaster – unpredictable and dramatic climate change – that humanity would have to deal with on top of the first.

Given that we should be focussed on the worst risks that threaten humanity as a whole, this kind of double-punch is particularly worrisome. That said, even if significant parts of the planet were made unliveable for mammals, it still seems improbable that climate change would lead to extinction. Some of the planet would still be suitable for humans, even if in the worst case they had to return to subsistence farming. However, having to deal with rising temperatures when our ability to adapt has already been compromised would increase the chance of a cascading collapse of law and order and of sophistication in the economy, which would set us back from achieving the technologies or space colonisation that would safeguard us for the long term.

If rising carbon emissions are nonetheless inevitable, one option would be to find geoengineering projects that continue working for some time without ongoing maintenance – perhaps mirrors in space or reflective white surfaces.
