In his broad-reaching new book, On the Future, aging famous cosmologist Martin Rees says aging famous scientists too often overreach:
Scientists don’t improve with age—that they ‘burn out’. … There seem to be three destinies for us. First, and most common, is a diminishing focus on research. …
A second pathway, followed by some of the greatest scientists, is an unwise and overconfident diversification into other fields. Those who follow this route are still, in their own eyes, ‘doing science’—they want to understand the world and the cosmos, but they no longer get satisfaction from researching in the traditional piecemeal way: they over-reach themselves, sometimes to the embarrassment of their admirers. This syndrome has been aggravated by the tendency for the eminent and elderly to be shielded from criticism. …
But there is a third way—the most admirable. This is to continue to do what one is competent at, accepting that … one can probably at best aspire to be on a plateau rather than scaling new heights.
Rees says this in a book outside his initial areas of expertise, a book that has gained many high-profile, fawning, uncritical reviews, a book wherein he whizzes past dozens of topics just long enough to state his opinion, but not long enough to offer detailed arguments or analysis in support. He seems oblivious to this parallel, though perhaps he’d argue that the future is not “science” and so doesn’t reward specialized study. As the author of a book that tries to show that careful detailed analysis of the future is quite possible and worthwhile, I of course disagree.
As I’m far from prestigious enough to get away with a book like his, let me instead try to get away with a long, probably ignored blog post wherein I take issue with many of Rees’ claims. While I of course also agree with much else, I’ll focus on disagreements. I’ll first discuss his factual claims, then his policy/value claims. Quotes are indented; my responses are not.
FACTS
Social media are now globally pervasive. … Those in deprived parts of the world are aware of what they are missing. This awareness will trigger greater embitterment, motivating mass migration or conflict, if these contrasts are perceived to be excessive and unjust. … Citizens of these privileged nations are becoming far less isolated from the disadvantaged parts of the world. Unless inequality between countries is reduced, embitterment and instability will become more acute because the poor, worldwide, are now, via IT and the media, far more aware of what they’re missing.
There is little evidence that mere awareness of inequality induces violent conflict. And I’m pretty sure that the poor already knew they were poor. This seems mostly wishful thinking, a threat to induce redistribution. (When I merely compared this common sort of income-oriented threat to sex-oriented threats, many accused me of supporting sexual violence. Yet few will complain that Rees is advocating violence here.)
We can’t confidently forecast lifestyles, attitudes, social structures, or population sizes even a few decades hence.
We can actually predict future population pretty well, as death rates are quite predictable, and birth rates have followed pretty predictable trends. Most human social structures, like families, firms, cities, nations, are pretty stable over decades. We can be pretty sure that most structures in these systems won’t be that different twenty years hence.
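To make this concrete, here is a toy projection sketch, my own illustration with made-up but plausible numbers, not anything from Rees or official demographers: when the annual growth rate itself follows a slow, steady trend, even naive extrapolation lands within a few percent of standard demographic projections a couple of decades out.

```python
# Toy population projection: my own illustrative numbers, not an official model.
# Point: when birth and death rates follow slow, predictable trends, even a
# crude extrapolation pins down population a few decades ahead fairly well.
def project_population(pop, growth_rate, rate_drift, years):
    """Compound the population forward while letting the annual growth
    rate drift slowly each year (e.g., steadily falling fertility)."""
    for _ in range(years):
        pop *= 1 + growth_rate
        growth_rate += rate_drift  # slow, predictable change in the rate
    return pop

# Roughly current-ballpark figures: ~8 billion people, ~0.9% annual growth,
# with the growth rate drifting down by ~0.015 percentage points per year.
print(round(project_population(8.0e9, 0.009, -0.00015, 20) / 1e9, 2))
# ~9.3 billion in twenty years, close to standard long-run projections.
```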
Human beings themselves—their mentality and their physique—may become malleable through the deployment of genetic modification and cyborg technologies. This is a game changer. When we admire the literature and artefacts that have survived from antiquity, we feel an affinity, across a time gulf of thousands of years, with those ancient artists and their civilisations. But we can have zero confidence that the dominant intelligences a few centuries hence will have any emotional resonance with us—even though they may have an algorithmic understanding of how we behaved.
It is fine to worry about future changes, but the mere possibility of malleability seems far from sufficient to conclude that our descendants will have no “emotional resonance with us”. Existing mental and social structures have huge inertia, as at each point the incentives will be to adopt changes that match well with existing structures. I foresee a lot of resonance.
This century is special. It is the first when one species, ours, is so empowered and dominant that it has the planet’s future in its hands. … This century is the first in which one species—ours—can determine the biosphere’s fate. I didn’t think we’d wipe ourselves out. But I did think we’d be lucky to avoid devastating breakdowns. … This is the first era in which humanity can affect our planet’s entire habitat: the climate, the biosphere, and the supply of natural resources.
We have always had the planet’s future in our hands. Rates of change have increased during the industrial era, but humans have long been changing the climate, biosphere, and natural resources. This century isn’t unique.
Back in 2003 I was worried about these hazards and rated the chance of bio error or bio terror leading to a million deaths as 50 percent by 2020. I was surprised at how many of my colleagues thought a catastrophe was even more likely than I did. More recently, however, psychologist/author Steven Pinker took me up on that bet, with a two-hundred-dollar stake. … Bio error and bio terror are possible in the near term—within ten or fifteen years. … The public is still in denial about two kinds of threats: harm that we’re causing collectively to the biosphere, and threats that stem from the greater vulnerability of our interconnected world to error or terror induced by individuals or small groups. … The emergent threat from technically empowered mavericks is growing. … If there is indeed a growing risk of conflicts triggered by ideology or perceived unjust inequality, it will be aggravated by the impact of new technology on warfare and terrorism.
His 2003 prediction seems crazy huge to me, and I and many others would have been happy to bet him, if he were willing to bet with folks other than celebrities. We remain ready to bet. As I recently posted, we see little evidence that individuals or small groups actually cause more harm to the world than before.
Demographers predict continuing urbanisation, with 70% of people living in cities by 2050. Even by 2030, Lagos, São Paulo, and Delhi will have populations greater than thirty million. Preventing megacities from becoming turbulent dystopias will be a major challenge to governance.
There is little evidence that big cities are “becoming turbulent dystopias”.
European villages in the mid-fourteenth century continued to function even when the Black Death almost halved their populations; the survivors were fatalistic about a massive death toll. In contrast, the feeling of entitlement is so strong in today’s wealthier countries that there would be a breakdown in the social order as soon as hospitals overflowed, key workers stayed at home, and health services were overwhelmed. This could occur when those infected were still a fraction of 1 percent.
In general, we see little breakdown in social order in big temporary crises. Social order would stay fine with one percent infected.
Earlier [SETI] searches … didn’t find anything artificial. But they were very limited—it’s like claiming that there’s no life in the oceans after analysing one glassful of seawater.
Actually, any one glass of seawater typically holds much life; that would indeed tell you there’s life in the ocean.
Even if intelligence were widespread in the cosmos, we may only ever recognise a small and atypical fraction of it. Some ‘brains’ may package reality in a fashion that we can’t conceive. Others could be living contemplative energy-conserving lives, doing nothing to reveal their presence.
I’m skeptical that there’s much life out there hiding but inactive. The competitive gains to metabolism and structure seem strong, and useful living metabolism and structure should be noticeably different than the dead versions we now see.
Whereas there are many composers whose last works are their greatest, there are few scientists for whom this is so.
Actually, a recent Nature paper found that “from artists to scientists, anyone can have a successful streak at any time.”
We might be able to download our thoughts and memories into a machine. If present technical trends proceed unimpeded, then some people now living could attain immortality—at least in the limited sense that their downloaded thoughts and memories could have a life span unconstrained by their present bodies. Those who seek this kind of eternal life will, in old-style spiritualist parlance, ‘go over to the other side’. We then confront the classic philosophical problem of personal identity. If your brain were downloaded into a machine, in what sense would it still be ‘you’?
This isn’t wrong, but as the author of a book that tries to get past these tired aspects, I’m disappointed he isn’t aware there’s much more to say.
VALUES
There are some who promote a rosy view of the future, enthusing about improvements in our moral sensitivities as well as in our material progress. I don’t share this perspective. … The gulf between the way the world is and the way it could be is wider than it ever was. … The plight of the ‘bottom billion’ in today’s world could be transformed by redistributing the wealth. … The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will require redistribution of that wealth. … Various types of human enhancements are possible, … [but] as with so much technology, priorities are unduly slanted towards the wealthy. … The criterion for a progressive government should be to provide for everyone the kind of support preferred by the best-off—the ones who now have the freest choice.
It seems crazy to say a world or an era isn’t good because you think distribution could make it better. Redistribution has many subtle effects and problems, and so larger versions may just not be feasible. It also seems crazy infeasible to give everyone whatever the rich prefer; that usually violates budget constraints.
The planning horizon for infrastructure and environmental policies needs to stretch fifty or more years into the future. If you care about future generations, it isn’t ethical to discount future benefits at the same rate as you would if you were a property developer planning an office building. … Appliances and vehicles could be designed in a more modular way so that they could be readily upgraded by replacing parts rather than by being thrown away. … Effective action needs a change in mind-set. We need to value long-lasting things—and urge producers and retailers to highlight durability.
If we are going to value the future more, we should do it consistently across all our choices, including when planning office buildings. Office buildings are also there to provide future benefits to people. It seems incoherent to discount the future more for some kinds of choices than for others.
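A toy present-value calculation, with my own illustrative numbers, shows what is at stake: the same benefit fifty years out is worth vastly different amounts under a typical commercial discount rate versus a low “social” rate, so using one rate for office buildings and another for environmental policy prices the very same future dollars differently depending on the label.

```python
# Toy discounting comparison, illustrative numbers only: the same $1M benefit
# arriving 50 years from now, valued under a "commercial" rate vs. a low
# "social" rate of the kind often urged for environmental policy.
def present_value(future_benefit, annual_rate, years):
    """Standard exponential discounting: today's value of a later benefit."""
    return future_benefit / (1 + annual_rate) ** years

benefit, horizon = 1_000_000, 50
for label, rate in [("developer rate, 7%", 0.07), ("social rate, 1.5%", 0.015)]:
    print(f"{label}: ${present_value(benefit, rate, horizon):,.0f} today")
# developer rate, 7%:  about $34,000 today
# social rate, 1.5%:   about $475,000 today
# Applying the low rate to environmental policy but the high rate to buildings
# prices the very same future dollar ~14x differently, which is the incoherence.
```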
African cultural preferences may lead to a persistence of large families as a matter of choice even when child mortality is low. If this happens, the freedom to choose your family size, proclaimed as one of the UN’s fundamental rights, may come into question when the negative externalities of a rising world population are weighed in the balance. …
Africa isn’t deviating much from world trends. And population isn’t directly an externality. It is connected to the externality of innovation, but in that case more population is good. Natural resources like land, minerals, oil, and water are mostly covered by property rights, and so populations don’t cause externalities merely by consuming such things. There can be negative externalities associated with fishing and polluting commonly used biospheres like oceans, but that is all the more reason to create more property rights in such things.
I was once interviewed by a group of ‘cryonics’ enthusiasts. … I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a ‘deathist’—really old fashioned. I was surprised to learn later that three academics in England (though I’m glad to say not from my university) had signed up for ‘cryonics’.… It is hard for most of us mortals to take this aspiration seriously; moreover, if cryonics had a real prospect of success, I don’t think it would be admirable either.… the corpses would be revived into a world where they would be strangers—refugees from the past. … ‘thawed-out corpses’ would be burdening future generations by choice; so, it’s not clear how much consideration they would deserve.
Retirees today “burden” the world around them in the sense of not productively working. And being old, they are relative strangers to their world, which is why they often collect in retirement communities where they can be around similar others. Is it not admirable for people to enjoy retirement? Would they be more admirable if they died on ice floes instead? Cryonics patients today are happy to pay cash for their future revival and living expenses, just as retirees pay for their retirement via savings, but our legal systems don’t make that easy.
AI system… is likely to create public concern if the system’s ‘decisions’ have potentially grave consequences for individuals. If we are sentenced to a term in prison, recommended for surgery, or even given a poor credit rating, we would expect the reasons to be accessible to us—and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy.
Actually you don’t know why you get the credit rating you do; that is an opaque algorithm. You may know some things that might influence it, but that’s a very different thing. Many medical choices made on your behalf are also based on opaque algorithms. Your life is full of inaccessible non-contestable opaque algorithms that influence what happens to you. Wake up and look around!
If the machines are zombies, we would not accord their experiences the same value as ours, and the posthuman future would seem bleak. But if they are conscious, why should we not welcome the prospect of their future hegemony?
I might not be one of them, but many people disagree with this pretty strongly. It would be better to engage them with arguments here.
By attacking mainstream religion, rather than striving for peaceful coexistence with it, [hardline atheists] weaken the alliance against fundamentalism and fanaticism. They also weaken science.
There’s a lot to be said for speaking the truth simply and clearly. If that weakens science, so be it, some may reasonably say.
The space environment is inherently hostile for humans. … Pioneer explorers will have a more compelling incentive … [to] harness the super-powerful genetic and cyborg technologies that will be developed in coming decades. These techniques will be, one hopes, heavily regulated on Earth, on prudential and ethical grounds, but ‘settlers’ on Mars will be far beyond the clutches of the regulators. … Posthumans … won’t need an atmosphere. … So it’s in deep space—not on Earth, or even on Mars—that nonbiological ‘brains’ may develop powers that humans can’t even imagine. … We are perhaps near the end of Darwinian evolution, but a faster process, artificially directed enhancement of intelligence, is only just beginning. It will happen fastest away from the Earth—I wouldn’t expect (and certainly wouldn’t wish for) such rapid changes in humanity here on Earth.
Due to agglomeration externalities, most social and economic activity will stay here on Earth up until the point where Earth growth is forced to slow down due to Earth filling up. A great deal of posthuman change can happen before then, and while those changes may give a few more advantages in space, they will also give great competitive advantages here on Earth. So it is hard to see why regulation of posthuman changes should differ much in space versus here on Earth. This seems an attempt to reassure readers that posthuman changes needn’t bother them if they stay on Earth, when few such reassurances can actually be offered.