On Homo Deus

Historian Yuval Harari’s best-selling book Sapiens mostly talked about history. His new book, Homo Deus, won’t be released in the US until February 21, but I managed to find a copy at the Istanbul airport – it came out in Europe last fall. This post is about the book, and it is long and full of quotes; you are warned.

While Homo Deus also mostly talks about the past and present, people told me I should read it because it is about the future, just like my book. Amazon lists it as a bestseller in “humanist philosophy” which seems a reasonable category for it. This is “philosophy” in the popular not the academic sense of the word. The book doesn’t draw much on academic philosophy, but instead uses various science-fictiony and futuristic scenarios to anchor a discussion of possible broad attitudes toward life and the universe. These quotes give you a flavor:

The modern deal offers us power, on the condition that we renounce our belief in a great cosmic plan that gives meaning to life. .. According to humanism, humans must draw from within their inner experiences not only the meaning of their own lives, but also the meaning of the entire universe. .. Humanism has taught us that something can be bad only if it causes someone to feel bad. .. Humanism split into three branches. .. The orthodox branch [and] .. two very different offshoots: socialist humanism .. and evolutionary humanism.

Liberals .. believe that every human is a uniquely valuable individual, whose free choices are the ultimate source of authority. In the twenty-first century, three practical developments might make this belief obsolete: 1. Humans will lose their economic and military usefulness. .. 2. The system will still find value in humans collectively, but not in unique humans. 3. The system will still find value in some unique humans, but these will be a new elite of upgraded superhumans rather than the mass of the population. ..

The contradiction between free will and contemporary science is the elephant in the laboratory, whom many prefer not to see as they peer into their microscopes and fMRI scanners. .. If organisms indeed lack free will, it implies we could manipulate and even control their desires using drugs, genetic engineering or direct brain stimulation. ..

New techno-religions can be divided into two main types: techno-humanism and data religion. Data religion argues that humans have completed their cosmic task, and they should now pass the torch on to entirely new kinds of entities. .. Techno-humanism .. still sees humans as the apex of creation .. but concludes that we should therefore use technology to create Homo deus – a much superior human model.

I’ve complained that futurist (and political) talk jumps too quickly to value talk, and should instead take more time to first work out details of likely scenarios. By this standard, Homo Deus is not my kind of futurism. But in the process of setting up those value discussions Homo Deus does make some more factual claims about the future. And in this post I’ll give Yuval Harari the respect of criticizing some of his claims. (Absent sufficient respect, I’d just ignore him.)

Harari starts out by claiming:

If incidences of famine, plague, and war are decreasing, something is bound to take their place on the human agenda. .. Having secured unprecedented levels of prosperity, health and harmony, and given our past record and our current values, humanity’s next targets are likely to be immortality, happiness, and divinity. .. In seeking bliss and immortality humans are in fact trying to upgrade themselves into gods .. because .. humans will first have to acquire godlike control of their own biological substratum.

But then Harari makes these qualifications:

In truth they will actually be a-mortal, rather than immortal. .. Future superhumans could still die in some war or accident. .. However, unlike us mortals, their life would have no expiry date. .. Hopes of eternal youth in the twenty-first century are premature. .. It is unwarranted to .. conclude that we can double [average life expectancy] again to 150 in the coming century. ..

The prediction that in the twenty-first century humankind is likely to aim for immortality, bliss, and divinity .. this is not what most individuals will actually do .. it is what humankind as a collective will do. .. My prediction is focused on what humankind will try to achieve in the twenty-first century – not what it will succeed in achieving. .. This prediction is less of a prophecy and more a way of discussing our present choices. If the discussion makes us choose differently, so that the prediction is proven wrong, all the better. ..

Our new-found knowledge leads to faster economic, social, and political changes. .. Consequently we are less and less able to make sense of the present or forecast the future.

So, at some unspecified chance and speed of success, civilization can be seen as merely trying to make progress at living longer and being happier, and by his definition succeeding at this via biotech counts as achieving “divinity”. Oh, but we can’t really predict much, and that isn’t what Harari is trying to do. So far, his claims seem too weak to allow much disagreement. Here are a few more such weak claims:

Forget economic growth, social reforms, and political revolutions: in order to raise global happiness levels, we need to manipulate human biochemistry. .. A growing percentage of the population is taking psychiatric medicines on a regular basis. .. As the biochemical pursuit of happiness accelerates, so it will reshape politics, society, and economics, and it will become ever harder to bring it under control.

Many people will be happy to transfer much of their decision-making processes into the hands of such a system, or at least consult with it whenever they face important choices. Google will advise us which movie to see, where to go on holiday, what to study in college, which job to accept, and even whom to date and marry. .. Once Google, Facebook and other algorithms become all-knowing oracles, they may well evolve into agents and finally into sovereigns. .. Eventually we may reach a point where it will become impossible to disconnect from this all-knowing network even for a moment. Disconnection will mean death. ..

The same technologies that can upgrade humans into gods might also make humans irrelevant. Computers powerful enough to understand and overcome the mechanisms of aging and death will probably also be powerful enough to replace humans in any and all tasks.

Hard to argue with the weak descriptor “reshape”, though it isn’t at all obvious to me that happy meds will become harder to control. And sure if algorithms become all knowing they might rule all, and if it is computers that upgrade us into gods they might replace us. But those are big “ifs”.

Harari does make some much stronger claims, however. Here are some with which many disagree:

Homo sapiens is not going to be exterminated by a robot revolt. Rather, Homo sapiens is likely to upgrade itself step by step, merging with robots and computers in the process. ..

The attempt to upgrade Homo sapiens is likely to change the world beyond recognition in this century. ..

But you can’t disagree with Harari’s arguments for these particular claims, because he doesn’t offer any. Here is another strong claim, with which I disagree:

Once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end, and a completely new kind of process will begin, which people like you and me cannot comprehend. Many scholars try to predict how the world will look in the year 2100 or 2200. This is a waste of time. Any worthwhile prediction must take into account the ability to re-engineer human minds, and this is impossible. .. There are no good answers to ‘What would beings with a different kind of mind do with biotechnology?’ .. Our present-day minds cannot grasp what might happen next.

No further argument or evidence is offered for this claim. But we now understand many things about the possible space of minds and the consequences of mind features for behavior. I don’t see why we can’t apply these insights just as we do other insights, nor why we can’t acquire more such insights.

I disagree with the following claims because they seem to ignore the possibility of the brain emulation scenario I describe in Age of Em:

The upgrading of humans into gods may follow any of three paths: biological engineering, cyborg engineering and the engineering of non-biological beings. ..

People will have much longer careers, and will have to reinvent themselves again and again even at the age of ninety. ..

Computers function very differently from humans, and it seems unlikely that computers will become humanlike anytime soon. In particular, it doesn’t seem that computers are about to gain consciousness, and to start experiencing emotions and sensations.

Emulations offer a fourth path to intelligent machines, machines that could be conscious. Ems could run fast enough that they don’t need to reinvent themselves much during their careers. And a world of ems is a world of Malthusian subsistence, which is no longer secure against past concerns of famine and war.

Finally, let me comment on a few of Harari’s non-future factual claims:

If you speak with the experts, many of them will tell you that we are still very far away from genetically engineered babies or human-level artificial intelligence. But most experts think on a timescale of academic grants and college jobs. Hence, ‘very far away’ may mean twenty years, and ‘never’ may denote no more than fifty.

This just isn’t fair – many experts clearly explain that they think such events are centuries away.

The modern economy needs constant and indefinite growth in order to survive. If growth ever stops, the economy won’t settle down to some cosy equilibrium; it will fall to pieces.

Most economists disagree. Yes, we get some benefits from growth, and so would pay costs for stasis, but much slower growth is quite possible. That was in fact the usual case before our recent industrial era.

Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. In the early twenty-first century, politics is consequently bereft of grand visions. .. Yet power vacuums seldom last long. If in the twenty-first century traditional political structures can no longer process the data fast enough to produce meaningful visions, then new and more efficient structures will evolve to take their place.

A weaker taste for grand visions seems to me a better explanation for less political attention to such visions today. So this isn’t much of a reason to expect new political structures.

Homo Deus concludes with these two paragraphs:

If we take the really grand view of life, all other problems and developments are overshadowed by three interlinked processes: 1. Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing. 2. Intelligence is decoupling from consciousness. 3. Non-conscious but highly intelligent algorithms may know us better than we know ourselves.

These three processes raise three key questions, which I hope will stick in your mind long after you have finished this book: 1. Are organisms really just algorithms, and is life really just data processing? 2. What’s more valuable – intelligence or consciousness? 3. What will happen to society, politics, and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?

While Harari is mainly focused on using history and futurist themes to explore his “philosophical” issues, he does make claims about the future. Harari is smart and articulate, but to my mind his book suffers from too little attention to analysis of possible future facts. (I’ll let others evaluate his value claims.) The possibility of ems shows that our recent reduction of famine and war need not continue, and that future machine-based minds smarter than us may well be conscious and retain many human values, at least for an important period.

  • Joe

    Regarding consciousness, what do you make of the stronger claim that since consciousness seems to have evolved independently multiple times, it is probably quite adaptive, not a mistake or even a weird unusual solution, and can therefore be expected to play some role in future intelligences at the very least?

    You recently convinced me that anything that seems conscious by all normal tests and checks we might give it must be seen as being as legitimately conscious as we are. A theory of consciousness can help us avoid false negatives, such as people with locked-in syndrome, but the idea of there being ‘false positives’, i.e. entities that are conscious by all observations other than this one ‘true consciousness’ test, is just incoherent, and means the test will have to be expanded, rather than the new kind of mind excluded. So another way to predict the presence or absence of consciousness in the future is just to look at its function and whether/where that will still be needed; its specific implementation is irrelevant.

  • Boursin

    “If organisms indeed lack free will, it implies we could manipulate and even control their desires using drugs, genetic engineering or direct brain stimulation.”

    If organisms lack free will, it implies that it’s not up to us whether any organism’s desires will in fact be manipulated, and how exactly they will be manipulated – since what will be doing the manipulating will themselves be organisms and therefore lack free will when deciding on who to manipulate and how!

  • Mike Johnson

    >Emulations offer a fourth path to intelligent machines, machines that could be conscious.

    Why do you think brain emulations would be conscious? Or a better question: what is your definition of consciousness?