Author Archives: Robin Hanson

Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks, or at least do so longer and more strongly. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned from scratch, and your design choices would more often be replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference would be less clear. You’d just want efficient, effective software, and so would want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those better placed to influence new frameworks prefer that old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus human minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains by far the most powerful known toolkit for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that runs on computers we make in factories. This new framework has been improving more rapidly; while software sometimes replaces humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of software quality, efficiency-oriented system designers will rely more on ems, and less on ordinary software, than they would have in the first scenario. Because of this, the evolution of wider systems, such as those for communication, work, trade, war, or politics, will stay matched to humans for longer than under the first scenario.

Ems would also seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and so prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces almost all humans on jobs.

In Praise of Low Needs

We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

Of course few individuals today focus on filling the universe with life. Most attend to their individual needs. And as we’ve been getting rich over the last few centuries, our needs have changed. Many cite Maslow’s Hierarchy of Needs:

[Image: Maslow’s hierarchy of needs]

While few offer much concrete evidence for this, most seem to accept it or one of its many variations. Once our basic needs are met, our attention switches to “higher” needs. Wealth really does change humans. (I see this in part as our returning to forager values with increasing wealth.)

It is easy to assume that what is good for you is good overall. If you are an artist, you may assume the world is better when people consume more art. If you are a scientist, you may assume the world is better if it gives more attention and funding to science. Similarly, it is easy to assume that the world gets better if more of us get more of what we want, and thus move higher up Maslow’s hierarchy.

But I worry: as we attend more to higher needs, we may grow and innovate less regarding lower needs. Can the universe really get filled by creatures focused mainly on self-actualization? Why should they risk or tolerate disruptions from innovations that advance low needs if they don’t care much for that stuff? And many today see their higher needs as conflicting with more capacity to fill low needs. For example, many see more physical capacities as bought at the price of less nature, weaker indigenous cultures, larger more soul-crushing organizations, more dehumanizing capitalism, etc. Rich nations today do seem to have weaker growth in raw physical capacities because of such issues.

Yes, it is possible that even rich societies focused on high needs will consistently grow their capacities to satisfy low needs, and that this will eventually lead to a universe densely filled with life. But still I worry about all those unknown obstacles yet to be seen as our descendants try to grow through another three to ten factors as large as humanity’s leap. At some of those obstacles, will a focus on high needs lead them to turn away from the grand growth path? To a comfortable “sustainable” stability without all that disruptive innovation? How much harder would it then become to restart growth later?

Pretty much all the growth that we have seen so far has been in a context where humans, and their ancestors, were focused mainly on low needs. Our current turn toward high needs is quite new, and thus relatively unproven. Yes, we have continued to grow, but more slowly. That seems worth at least a bit of worry.

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap factor is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
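To sanity-check that arithmetic, here is a minimal sketch using only the round order-of-magnitude figures quoted above:

```python
# Round orders of magnitude from the text.
leap = 10**7      # humanity's growth factor so far
stars = 10**24    # stars in the observable universe
atoms = 10**80    # atoms in the observable universe
humans = 10**10   # humans on Earth today

# Three leaps: one in a thousand stars, each holding an Earth's worth of people.
future_pop = (stars // 1000) * humans
assert future_pop // humans == leap**3   # growth factor of 10**21

# Ten leaps: one human-like creature per atom.
assert atoms // humans == leap**10       # growth factor of 10**70
```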

Seduced by Tech

We think about tech differently when we imagine it beforehand, versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even the main difference.

Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allowing fewer exceptions to our moral and value principles, and we less allow messy details to reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech, compared to opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced via vivid exposure to attractive details to eventually set aside these abstract concerns. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear whether it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about if transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and their articulate defense of their value and consciousness. These details would move most people to see ems in a much more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same 4.54 percentage points. Since my side started out roughly 3x smaller, I gained a roughly 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
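The arithmetic behind that claim, as a quick check (a minimal sketch; the numbers are the percentages quoted above):

```python
pro0, con0 = 22.73, 60.23   # initial pro/con percentages
pro1, con1 = 27.27, 64.77   # final pro/con percentages

# Both sides gained the same number of percentage points.
assert round(pro1 - pro0, 2) == round(con1 - con0, 2) == 4.54

# The pro side, starting ~3x smaller, saw a ~3x larger fractional gain.
print((pro1 - pro0) / pro0)  # ~0.200
print((con1 - con0) / con0)  # ~0.075
```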

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully not, I think, usually.

Chronicle Review Profile

I’m deeply honored to be the subject of a cover profile this week in The Chronicle Review:

[Image: The Chronicle Review cover, 17 Oct 2016]

By David Wescott, the profile is titled Is This Economist Too Far Ahead of His Time?, October 16, 2016.

In academic journal articles where the author has an intended answer to a yes or no question, that answer is more often yes, and I think that applies here as well. The profile includes a lot about my book The Age of Em on a far future, and its title suggests that anyone who’d study a far future must be too far ahead of their time. But, when else would one study the far future other than well ahead of time? It seems to me that even in a rational world where everyone was of their time, some people would study other times. But perhaps the implied message is that we don’t live in such a world.

Broad-ranging profiles tend to be imprecisely impressionistic. I think David Wescott did a good job overall, but since these impressions are about me, I’ll bother to comment on some (and signal my taste for precision). Here goes.

You inhabit a robotic body, and you stand roughly two millimeters tall. This is the world Robin Hanson is sketching out to a room of baffled undergraduates at George Mason University on a bright April morning.

Honestly, “baffled” is how most undergrads look to most professors during lectures.

Hanson is .. determined to promote his theories in an academy he finds deeply flawed; a doggedly rational thinker prone to intentionally provocative ideas that test the limits of what typically passes as scholarship.

Not sure I’m any more determined to self-promote than a typical academic. I try to be rational, but of course I fail. I seek the possibility of new useful info, and so use the surprise of a claim as a sign of its interestingness. Surprise correlates with “provocative”, and my innate social-cluelessness means I’ll neglect the usual social signs to “avoid this topic!” I question if I’m “intentionally provocative” beyond these two factors.

Hanson, deeply skeptical of conventional intellectual discourse,

I’m deeply skeptical of all discourse, intellectual or not, conventional or not.

At Caltech he found that economists based their ideas on simple models, which worked well in experiments but often failed to capture the complexities of the real world.

That is true of simple models in all fields, not just economics, and it is a feature not a bug. Models can be understood, while the full complexity of reality cannot.

But out of 3600 words, that’s all I have to correct, so good job David Wescott.

Smart Sincere Contrarian Trap

We talk as if we pick our beliefs mainly for accuracy, but in fact we have many social motives for picking beliefs. In particular, we use many kinds of beliefs as group affiliation/conformity signals. Some of us also use a few contrarian beliefs to signal cleverness and independence, but our groups have a limited tolerance for such things.

We can sometimes win socially by joining impressive leaders with the right sort of allies who support new fashions contrary to the main current beliefs. If enough others also join these new beliefs, they can become the new main beliefs of our larger group. At that point, those who continue to oppose them become the contrarians, and those who adopted the new fashions as they were gaining momentum gain more relative to latecomers. (Those who adopt fashions too early also tend to lose.)

As we are embarrassed if we seem to pick beliefs for any reason other than accuracy, this sort of new fashion move works better when supported by good accuracy-oriented reasons for changing to the new beliefs. This produces a weak tendency, all else equal, for group-based beliefs to get more accurate over time. However, many of our beliefs are about what actions are effective at achieving the motives we claim to have. And we are often hypocritical about our motives. Because of this, workable fashion moves need not just good reasons to believe claims about the efficacy of actions for stated motives, but also enough of a correspondence between the outcomes of those actions and our actual motives. Many possible fashion moves are unworkable because we don’t actually want to pursue the motives we proclaim.

Smarter people are better able to identify beliefs better supported by reasons, which all else equal makes those beliefs better candidates for new fashions. So those with enough status to start a new fashion may want to listen to smart people in the habit of looking for such candidates. But reasonably smart people who put in the effort are capable of finding a great many places where there are good reasons for picking a non-status-quo belief. And if they also happen to be sincere, they tend to visibly support many of those contrarian beliefs, even in the absence of supporting fashion movements with a decent chance of success. Which results in such high-effort smart sincere people sending bad group affiliation/conformity signals. So while potential leaders of new fashions want to listen to such people, they don’t want to publicly affiliate with them.

I fell into this smart sincere conformity trap long ago. I’ve studied many different areas, and when I’ve discovered an alternate belief that seems to have better supporting reasons than a usual belief, I have usually not hesitated to publicly embrace it. People have told me that it would have been okay for me to publicly embrace one contrarian belief. I might then have had enough overall status to plausibly lead that as a new fashion. But the problem is that I’ve supported many contrarian beliefs, not all derived from a common core principle. And so I’m not a good candidate to be a leader for any of my groups or contrarian views.

Which flags me as a smart sincere person. Good to listen to behind the scenes to get ideas for possible new fashions, but bad to embrace publicly as a loyal group member. I might gain if my contrarian views eventually became winning new fashions, but my early visible adoption of those views probably discourages others from trying to lead them, as they can less claim to have been first with those views.

If the only people who visibly supported contrarian views were smart sincere people who put in high effort, then such views might become known for high accuracy. This wouldn’t necessarily induce most people to adopt them, but it would help. However, there seem to be enough people who visibly adopt contrarian views for other reasons to sufficiently muddy the waters.

If prediction markets were widely adopted, the visible signals of which beliefs were more accurate would tend to embarrass more people into adopting them. Such people do not relish this prospect, as it would have them send bad group affiliation signals. Smart sincere people might relish the prospect, but there are not enough of them to make a difference, and even the few there are mostly don’t seem to relish it enough to work to get prediction markets adopted. Sincerely holding a belief isn’t quite the same as being willing to work for it.

Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human-level AI based on explicit coding (including machine learning code) as to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added:  Never mind about effort less proportional than chances; Owen Cotton-Barratt reminded me that if value diminishes with log of effort, optimal scenario effort is proportional to probability.
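A sketch of that point, as a standard allocation argument (assuming the value of studying scenario $i$ grows with the log of effort $e_i$): choose efforts to maximize $\sum_i p_i \log e_i$ subject to a total effort budget $\sum_i e_i = E$. The first-order conditions give $p_i / e_i = \lambda$ for every scenario, so $e_i = p_i E$: optimal effort is directly proportional to probability.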

Added 11Oct: Anders Sandberg weighs in.

Play Blindness

I’ve recently come to see play as a powerful concept for analyzing our behaviors. As I explained recently, play is a very old and robust capacity in many kinds of animals, apparently rediscovered several times.

In non-social play, an animal might play with its body or an object. When they feel safe and satisfied, animals carve out a distinct space and time, within which they feel a deep pleasure from just trying out many random ways to interact, all chosen from a relatively safe space of variations. Often animals seek variations wherein they and their play objects reach some sort of interaction equilibrium, as when dribbling a ball. In such equilibria, they can successfully adjust to random interaction variations. Animals may end play abruptly if a non-play threat or opportunity appears.

In social play, an animal again waits until safe and satisfied, and feels pleasure from a large variety of safe behavior within a distinct space and time. The difference is that now they explore behavior that interacts with other animals, seeking equilibria that adjust well to changes in other animals’ behavior. Babies and mothers interact this way, and predators and prey act out variations on chasing and evading. Cats may play with mice before killing them.

These sorts of play can serve many functions, including learning, practice, and innovation. In addition, social play requires social skills of boundary management. That is, animals must develop ways to invite others to play, to indicate the kind of play intended, to assure others when play continues, and to indicate when play has ended. As with grooming, who one plays with becomes a signal of affiliation. Animals can work out their relative status via who tends to “win” inside play games, and communicate other things (such as flirtation) indirectly via play.

As humans have developed more kinds of social behavior and better ways to communicate, and have extended youthful behaviors across our whole lives, we have acquired more ways to play. We can nest some types of play within others, and can create new types of play on the fly. Common features of most human play are some sort of safety prerequisites, a bounded space in which play happens, a feeling of pleasure from being included, a habit of exploring a wide range of options within play, limits on acceptable behavior, and special signals to initiate, continue, and end each play period.

For example, in mild-insult verbal banter play, we must each feel safe enough to focus on the banter, we and our allies are not supposed to threaten or interfere except via the banter, we are supposed to create each new response individually without help, responses are supposed to vary widely instead of repeating predictably, and some types of insults remain off limits. People may get quite emotionally hurt by such banter, but play can only continue while they pretend otherwise.

Another key feature of most human play is that we are supposed to only play for fun, instead of for gains outside of play. So we aren’t supposed to play golf to suck up to the boss, or to join a band to attract dates. Thus we typically suppress awareness of benefits outside of play. Most people find it hard to give coherent explanations of functions of play outside “fun.”

This seems to be one of humanity’s main blind spots regarding our motives. In general we try to explain most of our behaviors using the “highest” plausible motives we can find, and try to avoid violating social norms about appropriate motives. So we can be quite consciously clueless about why we play. That hardly means, however, that play serves no important functions in our lives. Far from it, in fact.

No One Rules The World

I’ve given talks on my book The Age of Em 79 times so far (#80 comes Saturday in Pisa, Italy). As it relies a lot on economics, while I mostly talk to non-econ audiences, I’ve been exposed a lot to how ordinary people react to economics. As I posted recently, one big thing I see is a low confidence that any sort of social science can say anything generalizable about anything.

But the most common error I see is a lack of appreciation that coordination is hard. I hear things like:

If you asked most people today if they want a future like this, they’d say no. So how could it happen if most people don’t like it?

Their model seems to be that social outcomes are a weighted average of individual desires. If so, an outcome most people dislike just can’t happen. If you ask for a mechanism the most common choice is revolution: if there was some feature of the world that most people didn’t like, well of course they’d have a revolution to fix that. And then the world would be fixed. And not just small things: changes as big as the industrial or farming revolutions just wouldn’t happen if most people didn’t want them.

Now people seem to be vaguely aware that revolutions are hard and rare, that many attempted revolutions have failed, or have succeeded but failed to achieve their stated aims, and that the world today has many features that majorities dislike. The world today has even more features where majorities feel unsure, not knowing what to think, because things are so complicated that it is hard to understand the feasible options and the consequences of actions. Yet people seem to hold the future to a different standard, especially the far future.

Near-far theory (aka construal level theory) offers a plausible explanation for this different attitude toward the future. As we know a lot less detail about the future, we see it in a far mode, wherein we are more confident in our theories, see fewer relevant distinctions, and emphasize basic moral values relative to practical constraints. Even if the world around us seems too complex to understand and evaluate, issues and choices seem simpler and clearer regarding a distant future whose outlines we can in fact barely envision.

But of course coordination is actually very hard. Not only do most of us only dimly understand the actual range of options and consequences of our actions today, even when we do understand we find it hard to coordinate to achieve such outcomes. It is easier to act locally to achieve our local ends, but the net effect of local actions can result in net outcomes that most of us dislike. Coordination requires that we manage large organizations which are often weak, random, expensive, and out of control.

This seems especially true regarding the consequences of new tech. So far in history tech has mostly appeared whenever someone somewhere has wanted it enough, regardless of what the rest of the world thought. Mostly, no one has been driving the tech train. Sometimes we like the result, and sometimes we don’t. But no one rules the world, so these results mostly just happen either way.

Play Will Persist

We live in the third human era, industry, which followed the farming and foraging eras. Each era introduced innovations that we expect will persist into future eras. Yet some are skeptical. They foresee “post-apocalyptic” scenarios wherein civilization collapses, industrial machines are lost, and we revert to using animals like mules and horses for motive power. Where we lose cities and instead spread across the land. We might even lose organized law, and revert to each small band enforcing its own local law.

On the surface, the future scenario I describe in my book The Age of Em looks nothing like a civilization collapse. It has more, better, bigger tech, machines, cities, and organizations. Yet many worry that in it we would lose an even more ancient innovation: play. As in laughter, music, teasing, banter, stories, sports, hobbies, etc. Because the em era is a more competitive world where wages return to near subsistence levels, many fear the loss of play and related activities. All of life becomes nose-to-the-grindstone work, where souls grind into dust.

Yet the farming and foraging eras were full of play, even though they were also competitive eras with subsistence wages. Moreover, play is quite common among animals, pretty much all of whom have lived in competitive worlds near subsistence levels:

Play is .. found in a wide range of animals, including marsupials, birds, turtles, lizards, fish, and invertebrates. .. [It] is a diverse phenomenon that evolved independently and was even secondarily reduced or lost in many groups of animals. (more)

Here is where we’ve found play in the evolutionary tree:

[Image: where play appears in the evolutionary tree]

We know roughly what kind of animals play:

Animals that play often share common traits, including active life styles, moderate to high metabolic rates, generalist ecological needs requiring behavioral flexibility or plasticity, and adequate to abundant food resources. Object play is most often found in species with carnivorous, omnivorous, or scavenging foraging modes. Locomotor play is prominent in species that navigate in three-dimensional (e.g., trees, water) or complex environments and rely on escape to avoid predation. Social play is not easily summarized, but play fighting, chasing, and wrestling are the major types recorded and occur in almost every major group of animals in which play is found. (more)

Not only are humans generalists with an active lifestyle, we have neoteny, which extends youthful features and behaviors, including play, throughout our lives. So humans have always played, a lot. Given this long robust history of play in humans and animals, why would anyone expect play to suddenly disappear with ems?

Part of the problem is that from the inside play feels like an activity without a “useful” purpose:

Playful activities can be characterized as being (1) incompletely functional in the context expressed; (2) voluntary, pleasurable, or self rewarding; (3) different structurally or temporally from related serious behavior systems; (4) expressed repeatedly during at least some part of an animal’s life span; and (5) initiated in relatively benign situations. (more)

While during serious behavior we are usually aware of some important functions our behaviors serve, in play we enter a “magic circle” wherein we feel safe, focus on pleasure, and act out a wider variety of apparently-safe behaviors. We stop play temporarily when something serious needs doing, and also for longer periods when we are very stressed, such as when depressed or starving. These help give us the impression that play is “extra”, serving no other purpose than “fun.”

But of course such a robust animal behavior must serve important functions. Many specific adaptive functions have been proposed, and while there isn’t strong agreement on their relative importance, we are pretty confident that since play has big costs, it must also give big gains:

Juveniles spend an estimated 2 to 15 percent of their daily calorie budget on play, using up calories the young animal could more profitably use for growing. Frisky playing can also be dangerous, making animals conspicuous and inattentive, more vulnerable to predators and more likely to hurt themselves as they romp and cavort. .. Harcourt witnessed 102 seal pups attacked by southern sea lions; 26 of them were killed. ‘‘Of these observed kills,’’ Harcourt reported in the British journal Animal Behaviour, ‘‘22 of the pups were playing in the shallow tidal pools immediately before the attack and appeared to be oblivious to the other animals fleeing nearby.’’ In other words, nearly 85 percent of the pups that were killed had been playing. (more)

Play can help to explore possibilities, both to learn and practice the usual ways of doing things, and also to discover new ways. In addition, play can be used to signal loyalty, develop trust and coordination, and establish relative status. And via play one can indirectly say things one doesn’t like to say directly. All of these functions should continue to be relevant for ems.

Given all this, I can’t see much doubt that ems would play, at least during the early em era, and would play nearly as much as typical humans have in history. Sure, it is hard to offer much assurance that play will continue into the indefinite future. But this is mainly because it is hard to offer much assurance of anything in the indefinite future, not because we have good specific reasons to expect play to go away.

Liu Cixin’s Trilogy

I just finished Liu Cixin’s trilogy of books: The Three-Body Problem, The Dark Forest, and Death’s End. They’ve gotten a lot of praise as perhaps the best classic-style science fiction of the past decade. This praise usually makes sure to mention that Liu is Chinese, and thus adds to diversity in science fiction. Which I think has shielded him from some criticism he’d get if he were white. To explain, I have to give some spoilers, below the fold. You are warned.
