Imagine A Mars Boom

Most who think they like the future really just like where their favorite stories took place. As a result, much future talk focuses on space, even though prospects for much activity beyond Earth anytime foreseeable seem dim. Even so, consider the following hypothetical, with three key assumptions:

Mars boom: An extremely valuable material (anti-matter? glueballs? negative mass?) is found on Mars, justifying huge economic efforts to extract it, process it, and return it to Earth. Many orgs compete strongly against one another in all of these stages to profit from the Martian boom.

A few top workers: As robots just aren’t yet up to the task, a thousand humans must be sent to and housed on Mars. The cost of this is so great that all trips are one-way, at least for a while, and it is worth paying extra to get the very highest quality workers possible. So Martians are very impressive workers, and Mars is “where the action is” in terms of influencing the future. As slavery is rare on Earth, almost all Mars workers must volunteer for the move.

Martians as aliens: Many, perhaps even most, people on Earth see those who live on Mars as aliens, for whom the usual moral rules do not apply – morality is to protect Earthlings only. Such Earth folks are less reluctant to enslave Martians. Martians undergo some changes to their bodies, and perhaps also to their brains, but when seen in films or on TV, or when talked to via (20+min delayed) Skype, Martians act very human.

Okay, now my question for you is: Are most Martians slaves? Are they selected for and trained into being extremely docile and servile?

Slavery might let Martian orgs make Martians work harder, and thereby extract more profit from each worker. But an expectation of being enslaved should make it much harder to attract the very best human workers to volunteer. Many Earth governments may not even allow free Earthlings to volunteer to become enslaved Martians. So my best guess is that in this hypothetical, Martians are free workers, rich and high status celebrities followed and admired by most Earthlings.

I’ve created this Mars scenario as an allegory of my em scenario, because someone I respect recently told me they were persuaded by Bryan Caplan’s claim that ems would be very docile slaves. As with these hypothesized Martians, the em economy would produce enormous wealth and be where the action is, and it would result from competing orgs enticing a thousand or fewer of the most productive humans to volunteer for an expensive one-way trip to become ems. When viewed in virtual reality, or in android bodies, these ems would act very human. While some like Bryan see ems as worth little moral consideration, others disagree.

Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different,” just as they’ve said before. For a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such researchers estimate 5-10x slower progress in their own subfields over the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – that we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.

Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks, or at least do so longer and more strongly. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would more often be replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

Ems would also seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to longer have more things like typical humans, you prefer a scenario where ems appear before ordinary software displaces most all humans on jobs.

Seduced by Tech

We think about tech differently when we imagine it beforehand, versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even main difference.

Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allowing fewer exceptions to our moral and value principles, and we less allow messy details to reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech, compared to opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced via vivid exposure to attractive details to eventually set aside these abstract concerns. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear if it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about if transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and articulate defense of their value and consciousness. These details would move most people to see ems in a far more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage. Since my side started out 3x smaller I gained a 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
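As a quick arithmetic check on those debate numbers (a sketch in Python; the only inputs are the four percentages quoted above), both sides gained the same absolute share of the audience, so the side that started smaller gained more in fractional terms:

```python
# Debate audience shares (percent), before and after, as quoted above.
pro_before, pro_after = 22.73, 27.27
con_before, con_after = 60.23, 64.77

pro_gain = pro_after - pro_before  # absolute gain for the pro side
con_gain = con_after - con_before  # absolute gain for the con side

# Both sides gained the same absolute percentage of the audience.
assert abs(pro_gain - con_gain) < 1e-9

# The pro side started out smaller by this ratio ...
size_ratio = con_before / pro_before

# ... so its fractional (relative) gain was larger by the same ratio,
# since equal absolute gains imply relative gains inversely
# proportional to starting size.
relative_gain_ratio = (pro_gain / pro_before) / (con_gain / con_before)

print(f"size ratio: {size_ratio:.2f}")
print(f"relative gain ratio: {relative_gain_ratio:.2f}")
```

Both printed ratios come out to about 2.65, which the text above rounds to 3x.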

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.

Play Will Persist

We live in the third human era, industry, which followed the farming and foraging eras. Each era introduced innovations that we expect will persist into future eras. Yet some are skeptical. They foresee “post-apocalyptic” scenarios wherein civilization collapses, industrial machines are lost, and we revert to using animals like mules and horses for motive power. Where we lose cities and instead spread across the land. We might even lose organized law, and revert to each small band enforcing its own local law.

On the surface, the future scenario I describe in my book The Age of Em looks nothing like a civilization collapse. It has more, better, and bigger tech, machines, cities, and organizations. Yet many worry that in it we would lose an even more ancient innovation: play. As in laughter, music, teasing, banter, stories, sports, hobbies, etc. Because the em era is a more competitive world where wages return to near subsistence levels, many fear the loss of play and related activities. All of life becomes nose-to-the-grindstone work, where souls grind into dust.

Yet the farming and foraging eras were full of play, even though they were also competitive eras with subsistence wages. Moreover, play is quite common among animals, pretty much all of whom have lived in competitive worlds near subsistence levels:

Play is .. found in a wide range of animals, including marsupials, birds, turtles, lizards, fish, and invertebrates. .. [It] is a diverse phenomenon that evolved independently and was even secondarily reduced or lost in many groups of animals. (more)

Here is where we’ve found play in the evolutionary tree:

[Figure: play across the animal evolutionary tree]

We know roughly what kind of animals play:

Animals that play often share common traits, including active life styles, moderate to high metabolic rates, generalist ecological needs requiring behavioral flexibility or plasticity, and adequate to abundant food resources. Object play is most often found in species with carnivorous, omnivorous, or scavenging foraging modes. Locomotor play is prominent in species that navigate in three-dimensional (e.g., trees, water) or complex environments and rely on escape to avoid predation. Social play is not easily summarized, but play fighting, chasing, and wrestling are the major types recorded and occur in almost every major group of animals in which play is found. (more)

Not only are humans generalists with an active lifestyle, we have neoteny, which extends youthful features and behaviors, including play, throughout our lives. So humans have always played, a lot. Given this long robust history of play in humans and animals, why would anyone expect play to suddenly disappear with ems?

Part of the problem is that from the inside play feels like an activity without a “useful” purpose:

Playful activities can be characterized as being (1) incompletely functional in the context expressed; (2) voluntary, pleasurable, or self rewarding; (3) different structurally or temporally from related serious behavior systems; (4) expressed repeatedly during at least some part of an animal’s life span; and (5) initiated in relatively benign situations. (more)

While during serious behavior we are usually aware of some important functions our behaviors serve, in play we enter a “magic circle” wherein we feel safe, focus on pleasure, and act out a wider variety of apparently-safe behaviors. We stop play temporarily when something serious needs doing, and also for longer periods when we are very stressed, such as when depressed or starving. These help give us the impression that play is “extra”, serving no other purpose than “fun.”

But of course such a robust animal behavior must serve important functions. Many specific adaptive functions have been proposed, and while there isn’t strong agreement on their relative importance, we are pretty confident that since play has big costs, it must also give big gains:

Juveniles spend an estimated 2 to 15 percent of their daily calorie budget on play, using up calories the young animal could more profitably use for growing. Frisky playing can also be dangerous, making animals conspicuous and inattentive, more vulnerable to predators and more likely to hurt themselves as they romp and cavort. .. Harcourt witnessed 102 seal pups attacked by southern sea lions; 26 of them were killed. ‘‘Of these observed kills,’’ Harcourt reported in the British journal Animal Behaviour, ‘‘22 of the pups were playing in the shallow tidal pools immediately before the attack and appeared to be oblivious to the other animals fleeing nearby.’’ In other words, nearly 85 percent of the pups that were killed had been playing. (more)

Play can help to explore possibilities, both to learn and practice the usual ways of doing things, and also to discover new ways. In addition, play can be used to signal loyalty, develop trust and coordination, and establish relative status. And via play one can indirectly say things one doesn’t like to say directly. All of these functions should continue to be relevant for ems.

Given all this, I can’t see much doubt that ems would play, at least during the early em era, and would play much as typical humans have throughout history. Sure, it is hard to offer much assurance that play will continue into the indefinite future. But this is mainly because it is hard to offer much assurance of anything in the indefinite future, not because we have good specific reasons to expect play to go away.

Social Science Critics

Many critics of Age of Em are critics of social science; they suggest that even though we might be able to use today’s physics or computer science to guess at futures, social science is far less useful.

For example, at Crooked Timber, Henry Farrell was “a lot more skeptical that social science can help you make predictions”, though he was more skeptical about thinking in terms of markets than in terms of “vast and distributed hierarchies of exploitation”, as these “generate complexities” instead of “breaking them down.”

At Science Fact & Science Fiction Concatenation, Jonathan Cowie suggests social science only applies to biological creatures:

While Hanson’s treatise is engaging and interesting, I confess that personally I simply do not buy into it. Not only have I read too much SF to think that em life will be as prescriptive as Hanson portrays, but coming from the biological sciences, I am acutely aware of the frailties of the human brain hence mind (on a psychobiological basis). Furthermore, I am uncomfortable in the way that the social science works Hanson draws upon to support his em conclusions: it is an apples and oranges thing, I do not think that they can readily translate from one to the other; from real life sociobiological constructs to, in effect, machine code. There is much we simply do not know about this, as yet, untrodden land glimpsed from afar.

At Ricochet, John Walker suggests we can’t do social science if we don’t know detailed stories of specific lives:

The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don’t know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems.

At his blog, Lance Fortnow suggests my social science assumes too much rationality:

I don’t agree with all of Hanson’s conclusions, in particular he expects a certain rationality from ems that we don’t often see in humans, and if ems are just human emulations, they may not want a short life and long retirement. Perhaps this book isn’t about ems and robots at all, but about Hanson’s vision of human-like creatures as true economic beings as he espouses in his blog. Not sure it is a world I’d like to be a part of, but it’s a fascinating world nevertheless.

At Entropy Chat List, Rafal Smigrodzki suggests social science doesn’t apply if ems adjust their brain design:

My second major objection: Your pervasive assumption that em will remain largely static in their overall structure and function. I think this assumption is at least as unlikely as the em-before-AI assumption. Imagine .. you have the detailed knowledge of your own mind, the tools to modify it, and the ability to generate millions of copies to try out various modifications. .. you do analyze this possibility, you consider some options but in the end you still assume ems will be just like us. Of course, if ems are not like us, then a lot of the detailed sociological research produced on humans would not be very applicable to their world and the book would have to be shorter, but then it might be a better one. In one chapter you mention that lesbian women make more money and therefore lesbian ems might make money as well. This comes at the end of many levels of suspension of disbelief, making the sociology/gender/psychology chapters quite exhausting.

At his blog, J Storrs Hall said something similar:

Robin’s scenario precludes some of these concerns by being very specific to a single possibility: that we have the technology to copy off any single particular human brain, we don’t understand them well enough to modify them arbitrarily. Thus they have to operated in a virtual reality that is reasonably close to a simulated physical world. There is a good reason for doing it this way, of course: that’s the only uploading scenario in which all the social science studies and papers and results and so forth can be assumed to still apply.

Most social scientists, and especially most economists, don’t see what they have learned as being quite so fragile. Yes it is nice to check abstract theories against concrete anecdotes, but in fact most who publish papers do little such checking, and their results only suffer modestly from the lack. Yes being non-biological, or messing a bit with brain design, may make some modest differences. But most social science theory just isn’t that sensitive to such details. As I say in the book:

Our economic theories apply reasonably well not only to other classes and regions within rich nations today, but also to other very different nations today and to people and places thousands of years ago. Furthermore, formal economic models apply widely even though quite alien creatures usually populate them, that is, selfish rational strategic agents who never forget or make mistakes. If economic theory built using such agents can apply to us today, it can plausibly apply to future ems.

The human brain is a very large complex legacy system whose designer did not put a priority on making it easy to understand, modify, or redesign. That should greatly limit the rate at which big useful redesign is possible.


How Culturally Plastic?

Typical farming behaviors violated forager values. Farmers added marriage, property, war, and inequality, and had much less art, leisure, and travel. 100K years ago, if someone had suggested that foragers would be replaced by farmers, critics could easily have doubted that foragers would act like that. But tens of thousands of years was enough time for cultural variation and selection to produce new farming cultures more compatible with the new farming ways.

A typical subsistence farmer from a thousand years ago might have been similarly skeptical about a future industrial world wherein most people (not just elites) pick leaders by voting, have little religion, spend fifteen years of their youth in schools, are promiscuous, work few hours, abide in skyscrapers, ride in fast trains, cars, and planes, and work in factories and large organizations with many explicit rules, rankings, and dominance relations. Many of these acts would have scared or offended typical farmers. Even those who knew that tens of millennia was enough to create cultures that embraced farming values might have doubted that a few centuries was enough for industry values. But it was.

In my book The Age of Em I describe a world after it has adapted to brain emulation tech. While I tend to assume that culture has changed to support habits productive in the competitive em world, a common criticism of my book is that the behaviors I posit for the em world conflict with values commonly held today. For example, from Steven Poole’s Guardian review:

Hanson assumes there is no big problem about the continuity of identity among such copies. .. But there is plausibly a show-stopping problem here. If someone announces they will upload my consciousness into a robot and then destroy my existing body, I will take this as a threat of murder. The robot running an exact copy of my consciousness won’t actually be “me”. (Such issues are richly analysed in the philosophical literature stemming from Derek Parfit’s thought experiments about teleportation and the like in the 1980s.) So ems – the first of whom are, by definition, going to have minds identical to those of humans – may very well exhibit the same kind of reaction, in which case a lot of Hanson’s more thrillingly bizarre social developments will not happen. (more)

Peter McCluskey has similar reservations about my saying at least dozens of human children would be scanned to supply an em economy with flexible young minds:

Robin predicts few regulatory obstacles to uploading children, because he expects the world to be dominated by ems. I’m skeptical of that. Ems will be dominant in the sense of having most of the population, but that doesn’t tell us much about em influence on human society – farmers became a large fraction of the world population without meddling much in hunter-gatherer political systems. And it’s unclear whether em political systems would want to alter the relevant regulations – em societies will have much the same conflicting interest groups pushing for and against immigration that human societies have. (more)

Farmers may not have meddled much in internal forager cultures, nor industry in internal farmer culture. But when prior era cultural values have conflicted with key activities of the new era, new eras have consistently won such conflicts. And since the em era should encompass thousands of years of subjective experience for typical ems, there seems plenty of time for em culture to adapt to new conditions. But as humans may only experience a few years during the em era and its preceding transition, it seems more of an open question how far human behaviors would adapt.

We are talking about the em world needing only a small number of humans scanned, especially children. Such scans are probably destructive, at least initially. As individual human inclinations vary quite a lot, if the choice is left to individuals, enough humans would volunteer. So the question is whether humans coordinate enough in each area to prevent this, such as via law. If they coordinate well in most areas, but not in a few others, then if there are huge productivity advantages from being able to scan people or kids, the few places that allow it will quickly dominate the rest. And in anticipation of that loss, other places would cave as well. So without global coordination to prevent this, it happens.

Peter talks about the possibility of directly emulating the growth of baby brains all the way from the beginning. And yes if this was easy enough, the em world wouldn’t bother to fight organized human opposition. However, since emulation from conception seems a substantial new capacity, I didn’t feel comfortable assuming it in my book. So I focused on the case where it isn’t possible early on, in which case the above analysis applies.

This whole topic is mostly about: how culturally plastic are we? I’ve been assuming a lot of plasticity, and my critics have been saying less. The academics who most specialize in cultural plasticity, such as anthropologists, tend to say we are quite plastic. So as with my recent post on physicists being confident that there is no extra non-physical feeling stuff, this seems another case where most people have strong intuitions that conflict with expert claims, and they won’t defer to experts.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one way a big em economy makes full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
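The doubling arithmetic above can be checked with a quick back-of-the-envelope sketch. The inputs (roughly three centuries to full UAI at past rates, 15-year economic doublings now, monthly doublings for the em economy) are the post’s rough estimates, not data:

```python
# Back-of-the-envelope check of the doubling arithmetic above.

centuries_to_uai = 3            # within the "two to four centuries" estimate
years_per_doubling_now = 15     # world economy doubles roughly every 15 years

total_doublings = centuries_to_uai * 100 / years_per_doubling_now  # 20.0
doublings_left_after_ems = total_doublings / 2  # ems arrive halfway: 10.0

em_months_per_doubling = 1      # em economy doubles roughly monthly
months_to_uai = doublings_left_after_ems * em_months_per_doubling

print(total_doublings)   # 20.0 doublings in three centuries
print(months_to_uai)     # 10.0 months of em-era growth left to full UAI
```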

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress by anything like a factor of two, which is what it would take for two (logarithmic) steps on the UAI ladder of progress to be jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.
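The replication effect above is simple to quantify. Assuming (per the post) ten innovation steps remain and half of em-era growth comes from raw replication of ems rather than innovation, each innovation step corresponds to two economic doublings:

```python
# Economic doublings per remaining innovation step, when only a fraction
# of em-era growth comes from innovation (the rest from making more ems).

innovation_steps_left = 10          # the post's rough remaining-steps estimate
growth_share_from_innovation = 0.5  # half of growth from innovation

economic_doublings = innovation_steps_left / growth_share_from_innovation
print(economic_doublings)   # 20.0 economic doublings: a long em era
```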

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to grow a whole lot before it could even devote that level of resources to UAI research. So there can be a whole em era before that point.
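The quoted proposal’s own numbers make the resource problem concrete. Since an em running N times faster costs at least N times as much, the proposed research force needs compute equal to a trillion human-speed ems, even as its claimed speedup factor works out to the quote’s “hundred billion”:

```python
# Resource cost of the quoted proposal, under the quote's own numbers.

num_research_ems = 10**6    # "a million top quality UAI research ems"
speedup = 10**6             # "each running at a million times human speed"

# Speedups cost at least proportionally more compute.
cost_in_human_speed_ems = num_research_ems * speedup
print(cost_in_human_speed_ems)   # 1000000000000: a trillion human-speed ems

quality_factor = 100        # "100 times average quality"
current_researchers = 1000  # "a thousand average quality UAI researchers"
claimed_speedup = num_research_ems * speedup * quality_factor // current_researchers
print(claimed_speedup)      # 100000000000: the quote's "hundred billion"
```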

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes studies vary, so there could be a modest, but not a huge, effect.)
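A square-root dependence, as in the measured cases mentioned above, implies sharply diminishing returns. The baseline of a thousand researchers here is an illustrative assumption, not a measurement:

```python
import math

def relative_progress(researchers, baseline=1000):
    # Progress rate relative to baseline, under a square-root dependence
    # on researcher counts rather than a linear one.
    return math.sqrt(researchers / baseline)

print(round(relative_progress(2000), 2))       # 1.41: doubling adds ~41%
print(round(relative_progress(1_000_000), 1))  # 31.6: a 1000x increase gives ~32x
```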

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. That gives an em era long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves also would not earn their owners much more than they would as free workers.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance. That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves. Sci-fi is full of stories about humans genetically engineered to be model slaves. Whole brain emulation is a quicker route to the same destination. What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Yet docility doesn’t describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, whether via frequent costly rebellions or via extreme docility, slaves suffer big disadvantages relative to free workers, which argues against most ems being slaves.


Oarsman Pay Parable

Imagine an ancient oarsman, rowing in a galley boat. Rowing takes effort, and risks personal injury, so all else equal an oarsman would rather not row, or row only weakly. How can his boss induce effort?

One simple approach is to offer a very direct and immediate incentive. Use slaves as rowers, and have a boss watch them, whipping any who aren’t rowing as hard as sustainably possible. This actually didn’t happen much in the ancient world; galley slaves weren’t common until the 1500s. But the idea is simple. And of course the same system could also work with cash; usually make positive payments for work, but sometimes fine those you discover aren’t working hard enough. Of course the boss can’t watch everyone all the time. But with a big enough penalty when caught, it might work.

Now imagine that the boss can’t watch each individual oarsman, but can only see the overall speed of the ship. Now the entire crew must be punished together, all or none of them. The boss might try to improve the situation by empowering oarsmen to punish each other for not rowing hard enough, and that might help, but rowers would also use that power for other ends, creating costs.

An even worse case is where the boss can only see how long it takes for the boat to reach its destination. Here the boss might reward the crew for a short trip, and punish them for a long one, but a great many other random factors will influence the length of the trip. Why bother to work hard, if it makes little difference to your chance of reward or punishment?

There is a general principle here. As we add more noise to the measurement of relevant outcomes visible to the ultimate boss, the harder it is to use incentives tied to such outcomes to incentivize rowers. This is true regardless of the type of incentives used. Yes, the lower the worst outcome, and the higher the best outcome, that the boss can impose, the stronger incentives can be. But even the strongest possible incentives can fail when noise is high.

Yes, one can create layers of bosses, with the lowest bosses able to see specifics best. But it can be hard to give lower bosses good incentives, if higher bosses can’t see well.

Another problem arises if the boss doesn’t know just how hard each oarsman is capable of rowing. In this case most oarsmen get some slack, so that they aren’t punished for not doing more than they can. This is just one example of an “information rent”. In general, such rents come from any work-relevant info that the worker has but the boss can’t see: if rowers need to synchronize their actions with each other or with waves, wind, or time of day; if a ship captain needs to choose the ship’s route based on info about weather and pirates; if a captain needs to treat different cargo differently in different conditions; or if a captain needs to judge whether to wait longer in port for more cargo.

In general, when you want a worker to see some local condition, and then take an action that depends on that condition, you must pay some extra rent. So the more relevant info that workers get, the more choices they make, and the more that rides on those choices, the more workers gain in info rents.

A related issue is the scope for sabotage. Angry resentful workers can seek hidden ways to hurt their bosses and ventures. So the more hard-to-detect ways workers have to hurt things, the more bosses want to treat them well enough to avoid anger and resentment. Pained, sullen, or depressed workers can also hurt the mood of co-workers, suppliers, customers, and investors whom they contact. And the threat of pain can stress workers, making it harder for them to think clearly and well. These issues tend to argue against often using beatings and pain for motivation, even if such things allow stronger incentives by expanding the range of possible outcomes.

Overall, these issues are bigger for more “complex” work, i.e., for more cognitive work, work that adapts more to diverse and new local conditions, and work in larger organizations. In the modern world, jobs have been getting more complex in these ways, and the organization and work literature I’ve read suggests that finding good work incentives is a central problem in modern organizations, and that more complex work is a big reason why modern workplaces substitute broad incentives and good treatment for the detailed and harsh rules and monitoring more common in past eras.

The literature I’ve read on the economics of slavery also uses job complexity to explain the severity of treatment of slaves. Slaves in artisan jobs, in cities, and in households were treated better than field slaves, arguably because of job complexity. They were beaten less, and paid more, and might eventually buy their own freedom.

Bryan Caplan has argued that ems would be treated harshly as slaves.
