Tag Archives: Ems

Today, Ems Seem Unnatural

The main objections to “test tube babies” weren’t about the consequences for mothers or babies, they were about doing something “unnatural”:

Given the number of babies that have now been conceived through IVF — more than 4 million of them at last count — it’s easy to forget how controversial the procedure was during the time when, medically and culturally, it was new. … They weren’t entirely sure how IVF was different from cloning, or from the “ethereal conception” that was artificial insemination. They balked at the notion of “assembly-line fetuses grown in test tubes.” … For many, IVF smacked of a moral overstep — or at least of a potential one. … James Watson publicly decried the procedure, telling a Congressional committee in 1974 that … “All hell will break loose, politically and morally, all over the world.” (more)

Similarly, for most ordinary people, the problem with ems isn’t that the scanning process might kill the original human, or that the em might be an unconscious zombie due to their new hardware not supporting consciousness. In fact, people more averse to death have fewer objections to ems, as they see ems as a way to avoid death. The main objections to ems are just that ems seem “unnatural”:

In four studies (including pilot) with a total of 952 participants, it was shown that biological and cultural cognitive factors help to determine how strongly people condemn mind upload. … Participants read a story about a scientist who successfully transfers his consciousness (uploads his mind) onto a computer. … In the story, the scientist injects himself with nano-machines that enter his brain and substitute his neurons one-by-one. After a neuron has been substituted, the functioning of that neuron is copied (uploaded) on a computer; and after each neuron has been copied/uploaded the nano-machines shut down, and the scientist’s body falls on the ground completely limp. Finally, the scientist wakes up inside the computer.

The following variations made NO difference:

[In Study 1] we modified our original vignette by changing the target of mind upload to be either (1) a computer, (2) an android body, (3) a chimpanzee, or (4) an artificial brain. …

[In Study 2] we changed the story in a manner that the scientist merely ingests the nano-machines in a capsule form. Furthermore, we used a 2 × 2 experimental set-up to investigate whether the body dying on a physical level [heart stops or the brain stops] impacts the condemnation of the scientist’s actions. We also investigated whether giving participants information on how the transformation feels for the scientist once he is in the new platform has an impact on the results.

What did matter:

People who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies towards rationality, and theory of mind capacities. (paper; summary; HT Stefan Schubert)

As with IVF, once ems are commonplace they will probably also come to seem less unnatural; strange never-before-seen possibilities evoke more fear and disgust than common things, unless those common things seem directly problematic.


Age of Em Paperback

Today is the official U.S. release date for the paperback version of my first book The Age of Em: Work, Love, and Life when Robots Rule the Earth. (The U.K. version came out a month ago.) Here is the new preface:

I picked this book topic so it could draw me in, and I would finish. And that worked: I developed an obsession that lasted for years. But once I delivered the “final” version to my publisher on its assigned date, I found that my obsession continued. So I collected a long file of notes on possible additions. And when the time came that a paperback edition was possible, I grabbed my chance. As with the hardback edition, I had many ideas for changes that might make my dense semi-encyclopedia easier for readers to enjoy. But my core obsession again won out: to show that detailed analysis of future scenarios is possible, by showing just how many reasonable conclusions one can draw about this scenario.

Also, as this book did better than I had a right to expect, I wondered: will this be my best book ever? If so, why not make it the best it can be? The result is the book you now hold. It has over 42% more citations, and 18% more words, but it is only a bit easier to read. And now I must wonder: can my obsession stop now, pretty please?

Many are disappointed that I do not more directly declare if I love or hate the em world. But I fear that such a declaration gives an excuse to dismiss all this; critics could say I bias my analysis in order to get my desired value conclusions. I’ve given over 100 talks on this book, and never once has my audience failed to engage value issues. I remain confident that such issues will not be neglected, even if I remain quiet.

These are the only new sections in the paperback: Anthropomorphize, Motivation, Slavery, Foom, After Ems. (I previewed two of them here & here.)  I’ll make these two claims for my book:

  1. There’s at least a 5% chance that my analysis will usefully inform the real future, i.e., that something like brain emulations are actually the first kind of human-level machine intelligence, and my analysis is mostly right on what happens then. If it is worth having twenty books on the future, it is worth having a book with a good analysis of a 5% scenario.
  2. I know of no other analysis of a substantially-different-from-today future scenario that is remotely as thorough as Age of Em. I like to quip, “Age of Em is like science fiction, except there is no plot, no characters, and it all makes sense.” If you often enjoy science fiction but are frustrated that it rarely makes sense on closer examination, then you want more books like Age of Em. The success or not of Age of Em may influence how many future authors try to write such books.

Like the Ancients, We Have Gods. They’ll Get Greater.

Here’s a common story about gods. Our distant ancestors didn’t understand the world very well, and their minds contained powerful agent detectors. So they came to see agents all around them, such as in trees, clouds, mountains, and rivers. As these natural things vary enormously in size and power, our ancestors had to admit that such agents varied greatly in size and power. The big ones were thus “gods”, and to be feared. While our forager ancestors were fiercely egalitarian, and should thus naturally resent the existence of gods, gods were at least useful in limiting status ambitions of local humans; however big you were, you weren’t as big as gods. All-seeing powerful gods were also useful in enforcing norms; norm violators could expect to be punished by such gods.

However, once farming era war, density, and capital accumulation allowed powerful human rulers, these rulers co-opted gods to enforce their rule. Good gods turned bad. Rulers claimed the support of gods, or claimed to be gods themselves, allowing their decrees to take priority over social norms. However, now that we (mostly) know that there just isn’t a spirit world, and now that we can watch our rulers much more closely, we know that our rulers are mere humans without the support of gods. So we much less tolerate strong rulers, their claims of superiority, or their norm violations. Yay us.

There are some problems with this story, however. Until the Axial revolution of about 3500 years ago, most gods were local to a social group. For our forager ancestors, this made them VERY local, and thus typically small. Such gods cared much more that you show them loyalty than what you believed, and they weren’t very moralizing. Most gods had limited power; few were all-powerful, all-knowing, and immortal. People mostly had enough data to see that their rulers did not have vast personal powers. And finally, rather than reluctantly submitting to gods out of fear, we have long seen people quite eager to worship, praise, and idolize gods, and also their leaders, apparently greatly enjoying the experience.

Here’s a somewhat different story. Long before they became humans, our ancestors deeply craved both personal status, and also personal association with others who have high status. This is ancient animal behavior. Forager egalitarian norms suppressed these urges, via emphasizing the also ancient envy and resentment of the high status. Foragers came to distinguish dominance, the bad status that forces submission via power, from prestige, the good status that invites you to learn and profit by watching and working with its holders. As part of their larger pattern of hidden motives, foragers often pretended that they liked leaders for their prestige, even when they also accepted and even liked their dominance.

Once foragers believed in spirits, they also wanted to associate with high status spirits. Spirits increased the supply of high status others to associate with, which people liked. But foragers also preferred to associate with local spirits, to show local loyalties. With farming, social groups became larger, and status ambitions could also rise. Egalitarian norms were suppressed. So there came a demand for larger gods, encompassing the larger groups.

In this story the fact that ancient gods were spirits who could sometimes violate ordinary physical rules was incidental, not central. The key driving force was a desire to associate with high status others. The ability to violate physical rules did confer status, but it wasn’t a different kind of status than that held by powerful humans. So very powerful humans who claimed to be gods weren’t wrong, in terms of the essential dynamic. People were eager to worship and praise both kinds of gods, for similar reasons.

Thus today, even if we don’t believe in spirits, we can still have gods, if we have people who can credibly acquire very high status, via prestige or dominance. High enough to induce not just grudging admiration, but eager and emotionally-unreserved submission and worship. And we do in fact have such people. We have people who are the best in the world at the abilities that the ancients would recognize for status, such as physical strength and coordination, musical or storytelling ability, social savvy, and intelligence. And in addition, technology and social complexity offer many new ways to be impressive. We can buy impressive homes, clothes, and plastic surgery, and travel at impressive speeds via impressive vehicles. We can know amazing things about the universe, and about our social world, via science and surveillance.

So we today do in fact have gods, in effect if not in name. (Though actors who play gods on screen can be seen as ancient-style gods.) The resurgence of forager values in the industrial era makes us reluctant to admit it, but a casual review of celebrity culture makes it very clear, I’d say. Yes, we mostly admit that our celebrities don’t have supernatural powers, but that doesn’t much detract from the very high status that they have achieved, or our inclination to worship them.

While it isn’t obviously the most likely scenario, one likely and plausible future scenario that has been worked out in unusual detail is the em scenario, as discussed in my book Age of Em. Ems would acquire many more ways to be individually impressive, acquiring more of the features that made the mythical ancient gods so impressive. Ems could be immortal, occupy many powerful and diverse physical bodies, move around the world at the speed of light, think very very fast, have many copies, and perhaps even somewhat modify their brains to expand each copy’s mental capacity. Automation assistants could expand their abilities even more.

As most ems are copies of the few hundred most productive ems, there are enormous productivity differences among typical ems. By any reasonable measure, status would vary enormously. Some would be gods relative to others. Not just in a vague metaphorical sense, but in a deep gut-grabbing emotional sense. Humans, and ems, will deeply desire to associate with them, via praise, worship and more.

Our ancestors had gods, we have gods, and our descendants will likely have even greater, more compelling gods. The phenomenon of gods is quite far from dead.


The Uploaded

In this post I again contrast my analysis of future ems in Age of Em with a fictional depiction of ems, and find that science fiction isn’t very realistic, having other priorities. Today’s example: The Uploaded, by Ferrett Steinmetz:

The world is run from the afterlife, by the minds of those uploaded at the point of death. Living is just waiting to die… and maintaining the vast servers which support digital Heaven. For one orphan that just isn’t enough – he wants more for himself and his sister than a life of servitude. Turns out he’s not the only one who wants to change the world.

The story is set 500 years and 14 human generations after a single genius invented ems. While others quickly found ways to copy this tech, his version was overwhelmingly preferred. (In part due to revelations of “draconian” competitor plans.) So much so that he basically was able to set the rules of this new world, and to set them globally. He became an immortal em, and so still rules the world. His rules, and the basic tech and econ arrangement, have remained stable for those 500 years, during which there seems to have been vastly less tech change and economic growth than we’ve seen in the last 500 years.

His rules are these: typically when a biological human dies, one emulation of them is created, which is entitled to eternal leisure in luxurious virtual realities. That one em runs at ordinary human speed, no other copies of it are allowed, ems never inhabit android physical bodies, and ems are never created of still-living biological humans. By now there are 15 times as many ems as humans, and major decisions are made by vote, which ems always win. Ems vote to divert most resources to their servers, and so biological humans are poor, their world is run down, and diseases are killing them off.

Virtual realities are so engaging that em parents can’t even be bothered to check in on their young children now in orphanages. But a few ems get bored and want to do useful jobs, and they take all the nice desk jobs. Old ems are stuck in their ways and uncreative, preventing change. Biological humans are only needed to do physical jobs, which are boring and soul-crushing. It is illegal for them to do programming. Some ems also spend lots of time watching via surveillance cameras, so biological humans are watched all the time.

Every day every biological human’s brain is scanned and evaluated by a team of ems, and put into one of five status levels. Higher levels are given nicer positions and privileges, while the lowest levels are not allowed to become ems. Biological humans are repeatedly told they need to focus on pleasing their em bosses so they can get into em heaven someday. To say more, I must give spoilers; you are warned. Continue reading "The Uploaded" »


How Human Are Meditators?

Someday we may be able to create brain emulations (ems), and someday later we may understand them sufficiently to allow substantial modifications to them. Many have expressed concern that competition for efficient em workers might then turn ems into inhuman creatures of little moral worth. This might happen via reductions of brain systems, features, and activities that are distinctly human but that contribute less to work effectiveness. For example Scott Alexander fears loss of moral value due to “a very powerful ability to focus the brain on the task at hand” and ems “neurologically incapable of having their minds drift off while on the job”.

A plausible candidate for em brain reduction to reduce mind drift is the default mode network:

The default mode network is active during passive rest and mind-wandering. Mind-wandering usually involves thinking about others, thinking about one’s self, remembering the past, and envisioning the future.… becomes activated within an order of a fraction of a second after participants finish a task. … deactivate during external goal-oriented tasks such as visual attention or cognitive working memory tasks. … The brain’s energy consumption is increased by less than 5% of its baseline energy consumption while performing a focused mental task. … The default mode network is known to be involved in many seemingly different functions:

It is the neurological basis for the self:

Autobiographical information: Memories of collection of events and facts about one’s self
Self-reference: Referring to traits and descriptions of one’s self
Emotion of one’s self: Reflecting about one’s own emotional state

Thinking about others:

Theory of Mind: Thinking about the thoughts of others and what they might or might not know
Emotions of other: Understanding the emotions of other people and empathizing with their feelings
Moral reasoning: Determining just and unjust result of an action
Social evaluations: Good-bad attitude judgments about social concepts
Social categories: Reflecting on important social characteristics and status of a group

Remembering the past and thinking about the future:

Remembering the past: Recalling events that happened in the past
Imagining the future: Envisioning events that might happen in the future
Episodic memory: Detailed memory related to specific events in time
Story comprehension: Understanding and remembering a narrative

In our book The Elephant in the Brain, we say that key tasks for our distant ancestors were tracking how others saw them, watching for ways others might accuse them of norm violations, and managing stories of their motives and plans to help them defend against such accusations. The difficulty of this task was a big reason humans had such big brains. So it made sense to design our brains to work on such tasks in spare moments. However, if ems could be productive workers even with a reduced capacity for managing their social image, it might make sense to design ems to spend a lot less time and energy ruminating on their image.

Interestingly, many who seek personal insight and spiritual enlightenment try hard to reduce the influence of this key default mode network. Here is Sam Harris from his recent book Waking Up: A Guide to Spirituality Without Religion:

Psychologists and neuroscientists now acknowledge that the human mind tends to wander. … Subjects reported being lost in thought 46.9 percent of the time. … People are consistently less happy when their minds wander, even when the contents of their thoughts are pleasant. … The wandering mind has been correlated with activity in the … “default mode” or “resting state” network (DMN). … Activity in the DMN decreases when subjects concentrate on tasks of the sort employed in most neuroimaging experiments.

The DMN has also been linked with our capacity for “self-representation.” … [it] is more engaged when we make such judgements of relevance about ourselves, as opposed to making them about other people. It also tends to be more active when we evaluate a scene from a first person point of view. … Generally speaking, to pay attention outwardly reduces activity in the [DMN], while thinking about oneself increases it. …

Mindfulness and loving-kindness meditation also decrease activity in the DMN – and the effect is most pronounced among experienced meditators. … Expert meditators … judge the intensity of an unpleasant stimulus the same but find it to be less unpleasant. They also show reduced activity in regions associated with anxiety while anticipating the onset of pain. … Mindfulness reduces both the unpleasantness and intensity of noxious stimuli. …

There is an enormous difference between being hostage to one’s thoughts and being freely and nonjudgmentally aware of life in the present. To make this shift is to interrupt the process of rumination and reactivity that often keep us so desperately at odds with ourselves and with other people. … Meditation is simply the ability to stop suffering in many of the usual ways, if only for a few moments at a time. … The deepest goal of spirituality is freedom from the illusion of the self. (pp.119-123)

I see a big conflict here. On the one hand, many are concerned that competition could destroy moral value by cutting away distinctively human features of em brains, and the default net seems a prime candidate for cutting. On the other hand, many see meditation as a key to spiritual insight, one of the highest human callings, and a key task in meditation is cutting the influence of the default net. Ems with a reduced default net could more easily focus, be mindful, see the illusion of the self, and feel more at peace and less anxious about their social image. So which is it, do such ems achieve our highest spiritual ideals, or are they empty shells mostly devoid of human value? Can’t be both, right?

By the way, I was reading Harris because he and I will record a podcast Feb 21 in Denver.


The Ems of Altered Carbon

People keep suggesting that I can’t possibly present myself as an expert on the future if I’m not familiar with their favorite science fiction (sf). I say that sf mostly pursues other purposes and rarely tries much to present realistic futures. But I figure I should illustrate my claim with concrete examples from time to time. Which brings us to Altered Carbon, a ten episode sf series just out on Netflix, based on a 2002 novel. I’ve watched the series, and read the novel and its two sequels.

Altered Carbon’s key tech premise is a small “stack” which can sit next to a human brain collecting and continually updating a digital representation of that brain’s full mental state. This state can also be transferred into the rest of that brain, copied to other stacks, or placed and run in an android body or a virtual reality. Thus stacks allow something much like ems who can move between bodies.

But the universe of Altered Carbon looks very different from my description of the Age of Em. Set many centuries in future, our descendants have colonized many star systems. Technological change then is very slow; someone revived after sleeping for centuries is familiar with almost all the tech they see, and they remain state-of-the-art at their job. While everyone is given a stack as a baby, almost all jobs are done by ordinary humans, most of whom are rather poor and still in their original body, the only body they’ll ever have. Few have any interest in living in virtual reality, which is shown as cheap, comfortable, and realistic; they’d rather die. There’s also little interest in noticeably-non-human android bodies, which could plausibly be pretty cheap.

Regarding getting new very-human-like physical bodies, some have religious objections, many are uninterested, but most are just too poor. So most stacks are actually never used. Stacks can insure against accidents that kill a body but don’t hurt the stack. Yet while it should be cheap and easy to back up stack data periodically, inexplicably only rich folks do that.

It is very illegal for one person to have more than one stack running at a time. Crime is often punished by taking away the criminal’s body, which creates a limited supply of bodies for others to rent. Very human-like clone and android bodies are also available, but are very expensive. Over the centuries some have become very rich and long-lived “meths”, paying for new bodies as needed. Meths run everything, and are shown as inhumanly immoral, often entertaining themselves by killing poor people, often via sex acts. Our hero was once part of a failed revolution to stop meths via a virus that kills anyone with a century of subjective experience.

Oh, and there have long been fully human level AIs who are mainly side characters that hardly matter to this world. I’ll ignore them, as criticizing the scenario on these grounds is way too easy.

Now my analysis says that there’d be an enormous economic demand for copies of ems, who can do most all jobs via virtual reality or android bodies. If very human-like physical bodies are too expensive, the economy would just skip them. If allowed, ems would quickly take over all work, most activity would be crammed in a few dense cities, and the economy could double monthly. Yet while war is common in the universe of Altered Carbon, and spread across many star systems, no place ever adopts the huge winning strategy of unleashing such an em economy and its associated military power. While we see characters who seek minor local advantages get away for long times with violating the rule against copying, no one ever tries to do this to get vastly rich, or to win a war. No one even seems aware of the possibility.

Even ignoring the AI bit, I see no minor modification to make this into a realistic future scenario. It is made more to be a morality play, to help you feel righteous indignation at those damn rich folks who think they can just live forever by working hard and saving their money over centuries. If there are ever poor humans who can’t afford to live forever in very human-like bodies, even if they could easily afford android or virtual immortality, well then both the rich and the long-lived should all burn! So you can feel morally virtuous watching hour after hour of graphic sex and violence toward that end. As it happens, hand-to-hand combat, typically producing big spurts of blood, and often among nudes, is how most conflicts get handled in this universe. Enjoy!


Meaning is Easy to Find, Hard to Justify

One of the strangest questions I get when giving talks on Age of Em is a variation on this:

How can ems find enough meaning in their lives to get up and go to work everyday, instead of committing suicide?

As the vast majority of people in most every society do not commit suicide, and manage to get up for work on most workdays, why would anyone expect this to be a huge problem in a random new society?

Even stranger is that I mostly get this question from smart sincere college students who are doing well at school. And I also hear that such students often complain that they do not know how to motivate themselves to do many things that they “want” to do. I interpret this all as resulting from overly far thinking on meaning. Let me explain.

If we compare happiness to meaning, then happiness tends to be an evaluation of a more local situation, while meaning tends to be an evaluation of a more global situation. You are happy about this moment, but you have meaning regarding your life.

Now you can do either of these evaluations in a near or a far mode. That is, you can just ask yourself for your intuitions on how you feel about your life, without over-thinking it, or you can reason abstractly and idealistically about what sort of meaning you should have or can justify having. In that latter more abstract mode, smart sincere people can be stumped. How can they justify having meaning in a world where there is so much randomness and suffering, and that is so far from being a heaven?

Of course in a sense, heaven is an incoherent concept. We have so many random idealistic constraints on what heaven should be like that it isn’t clear that anything can satisfy them all. For example, we may want to be the hero of a dramatic story, even if we know that characters in such stories wish that they could live in more peaceful worlds.

Idealistic young people have such problems in spades, because they haven’t lived long enough to see how unreasonable are their many idealistic demands. And smarter people can think up even more such demands.

But the basic fact is that most everyone in most every society does in fact find meaning in their lives, even if they don’t know how to justify it. Thus I can be pretty confident that ems also find meaning in their lives.

Here are some more random facts about meaning, drawn from my revised Age of Em, out next April.

Today, individuals who earn higher wages tend to have both more happiness and a stronger sense of purpose, and this sense of purpose seems to cause higher wages. People with a stronger sense of purpose also tend to live longer. Nations that are richer tend to have more happiness but less meaning in life, in part because they have less religion. … Types of meaning that people get from work today include authenticity, agency, self-worth, purpose, belonging, and transcendence.

Happiness and meaning have different implications for behavior, and are sometimes at odds. That is, activities that raise happiness often lower meaning, and vice versa. For example, people with meaning think more about the future, while happy people focus on the here and now. People with meaning tend to be givers who help others, while happy people tend to be takers who are helped by others. Being a parent and spending time with loved ones gives meaning, but spending time with friends makes one happy.

Affirming one’s identity and expressing oneself increase meaning but not happiness. People with more struggles, problems, and stresses have more meaning, but are less happy. Happiness but not meaning predicts a satisfaction of desires, such as for health and money, and more frequent good relative to bad feelings. Older people gain meaning by giving advice to younger people. We gain more meaning when we follow our gut feelings rather than thinking abstractly about our situations.

My weak guess is that productivity tends to predict meaning more strongly than happiness. If this is correct, it suggests that, all else equal, ems will tend to think more about the future, more be givers who help others, spend more time with loved ones and less with friends, more affirm their identity and express themselves, give more advice, and follow gut feelings more. But they will also have more struggles and less often have their desires satisfied.


Can Human-Like Software Win?

Many, perhaps most, think it obvious that computer-like systems will eventually be more productive than human-like systems in most all jobs. So they focus on how humans might maintain control, even after this transition. But this eventuality is less obvious than it seems, depending on what exactly one means by “human-like” or “computer-like” systems. Let me explain.

Today the software that sits in human brains is stuck in human brain hardware, while the other kinds of software that we write (or train) sit in the artificial hardware that we make. And this artificial hardware has been improving far more rapidly than has human brain hardware. Partly as a result of this, systems of artificial software and hardware have been improving rapidly compared to human brain systems.

But eventually we will find a way to transfer the software from human brains into artificial hardware. Ems are one way to do this, as a relatively direct port. But other transfer mechanisms may be developed.

Once human brain software is in the same sort of artificial computing hardware as all the other software, then the relative productivity of different software categories comes down to a question of quality: which categories of software tend to be more productive on which tasks?

Of course there will be many different variations available within each category, to match to different problems. And the overall productivity of each category will depend both on previous efforts to develop and improve software in that category, and also on previous investments in other systems to match and complement that software. For example, familiar artificial software will gain because we have spent longer working to match it to familiar artificial hardware, while human software will gain from being well matched to complex existing social systems, such as language, firms, law, and government.

People give many arguments for why they expect human-like software to mostly lose this future competition, even when it has access to the same hardware. For example, they say that other software could lack human biases and also scale better, have more reliable memory, communicate better over wider scopes, be easier to understand, have easier meta-control and self-modification, and be based more directly on formal abstract theories of learning, decision, computation, and organization.

Now consider two informal polls I recently gave my twitter followers:

Surprisingly, at least to me, the main reason that people expect human-like software to lose is that they mostly expect whole new categories of software to appear, categories quite different from both the software in the human brain and also all the many kinds of software with which we are now familiar. If it comes down to a contest between human-like and familiar software categories, only a quarter of them expect human-like to lose big.

The reason I find this surprising is that all of the reasons that I’ve seen given for why human-like software could be at a disadvantage seem to apply just as well to familiar categories of software. In addition, a new category must start with the disadvantages of having less previous investment in that category and in matching other systems to it. That is, none of these are reasons to expect imagined new categories of software to beat familiar artificial software, and yet people offer them as reasons to think whole new much more powerful categories will appear and win.

I conclude that people don’t mostly use specific reasons to conclude that human-like software will lose, once it can be moved to artificial hardware. Instead they just have a general belief that the space of possible software is huge and contains many new categories to discover. This just seems to be the generic belief that competition and innovation will eventually produce a lot of change. It’s not that human-like software has any overall competitive disadvantage compared to concrete known competitors; it is at least as likely to have winning descendants as any such competitors. It’s just that our descendants are likely to change a lot as they evolve over time. Which seems to me a very different story than the humans-are-sure-to-lose story we usually hear.


Ems in Walkaway

Some science fiction (sf) fans have taken offense at my claim that non-fiction analysis of future tech scenarios can be more accurate than sf scenarios, whose authors have other priorities. So I may periodically critique recent sf stories with ems for accuracy. Note that I’m not implying that such stories should have been more accurate; sf writing is damn hard work and its authors juggle many difficult tradeoffs. But many seem unaware of just how often accuracy is sacrificed.

The most recent sf I’ve read that includes ems is Walkaway, by “New York Times bestselling author” Cory Doctorow, published back in April:

Now that anyone can design and print the basic necessities of life—food, clothing, shelter—from a computer, there seems to be little reason to toil within the system. It’s still a dangerous world out there, the empty lands wrecked by climate change, dead cities hollowed out by industrial flight, shadows hiding predators animal and human alike. Still, when the initial pioneer walkaways flourish, more people join them.

The emotional center of Walkaway is elaborating this vision of a decentralized post-scarcity society trying to do without property or hierarchy. Though I’m skeptical, I greatly respect attempts to describe such visions in more detail. Doctorow, however, apparently thinks we economists make up bogus math for the sole purpose of justifying billionaire wealth inequality. Continue reading "Ems in Walkaway" »


Philosophy Vs. Duck Tests

Philosophers, and intellectuals more broadly, love to point out how things might be more complex than they seem. They identify more and subtler distinctions, suggest more complex dependencies, and warn against relying on “shallow” advisors less “deep” than they. Subtlety and complexity are basically what they have to sell.

I’ve often heard people resist such sales pressure by saying things like “if it looks like a duck, walks like a duck, and quacks like a duck, it’s a duck.” Instead of using complex analysis and concepts to infer and apply deep structures, they prefer to use such a “duck test” and judge by adding up many weak surface clues. When a deep analysis disagrees with a shallow appearance, they usually prefer to go shallow.

Interestingly, this whole duck example came from philosophers trying to warn against judging from surface appearances: Continue reading "Philosophy Vs. Duck Tests" »
