Beware Covert War Morality Tales

For years I’ve been saying that fiction is mainly about norm affirmation:

Both religion and fiction serve to reassure our associates that we will be nice. In addition to letting us show we can do hard things, and that we are tied to associates by doing the same things, religious beliefs show we expect the not nice to be punished by supernatural powers, and our favorite fiction shows the sort of people we think are heroes and villains, how often they are revealed or get their due reward, and so on. (more)

People fear that story-less people have not internalized social norms well – they may be too aware of how easy it would be to get away with violations, and feel too little shame from trying. Thus in equilibrium, people are encouraged to consume stories, and to deludedly believe in a more just world, in order to be liked more by others. (more)

Our actual story abilities are tuned for the more specific case of contests, where the stories are about ourselves or our rivals, especially where either we or they are suspected of violating social norms. We might also be good at winning over audiences by impressing them and making them identify more with us, and we may also be eager to listen to gain exemplars, signal norms, and exert influence. (more)

Bad-News Boxes

Many firms fail to pass bad news up the management chain, and suffer as a result, even though simple fixes have long been known:

The Wall Street Journal placed the blame for the “rot at GE” on former CEO Jeffrey Immelt’s “success theater,” pointing to what analysts and insiders said was a history of selectively positive projections, a culture of overconfidence and a disinterest in hearing or delivering bad news. …The article puts GE well out of its usual role as management exemplar. And it shines a light on a problem endemic to corporate America, leadership experts say. People naturally avoid conflict and fear delivering bad news. But in professional workplaces where a can-do attitude is valued above all else, and fears about job security remain common, getting unvarnished feedback and speaking candidly can be especially hard. …

So how can leaders avoid a culture of “success theater?” … They have to model the behavior, being realistic about goals and forecasts and candid when things go wrong. They should host town halls where employees can speak up with criticism, structuring them so bad news can flow to the top. For instance, he recommends getting respected mid-level managers to first interview lower-level employees about what’s not working to make sure tough subjects are aired. …

Doing that is harder than it sounds, making it critical for leaders to create systemic ways to offer feedback, rather than just talking about it. She tells the story of a former eBay manager who would leave a locked orange box near the office bathrooms where people could leave critical questions. He would later read them aloud in meetings — with someone else unlocking the box to prove he hadn’t edited its contents — hostile questions and all. “People never trusted anything was really anonymous except paper,” she said. “He did it week in and week out.”

When she worked at Google, where she led online sales and operations for AdSense, YouTube and Doubleclick, she had a crystal statue she called the “I was wrong, you were right” statue that she’d hand out to colleagues and direct reports. (more)

Consider what signal a firm sends by NOT regularly reading the contents of locked anonymous bad news boxes at staff meetings. It in effect admits that it isn’t willing to pay a small cost to overcome a big problem, if that interferes with the usual political games. You might think investors would see this as a big red flag, but in fact they hardly care.

I’m not sure how exactly to interpret this equilibrium, but it is clearly bad news for prediction markets in firms. Such markets are also sold as helping firms to uncover useful bad news. If firms don’t do easier, simpler things to learn bad news, why should we expect them to do more complex, expensive things?

Signal Inertia

For millennia, we humans have shown off our intelligence via complicated arguments and large vocabularies, health via sport achievement, heavy drink, and long hours, and wealth via expensive clothes, houses, trips, etc. Today we appear to have more efficient signaling substitutes, such as IQ tests, medical tests, and bank statements. Yet we continue to show off in the old ways, and rarely switch to the new ways. Why?

One explanation is inertia. Signaling equilibria require complex coordination, and those who try to change it via deviations can seem non-conformist and socially clueless. Another explanation is hypocrisy. As we discuss in our new book, The Elephant in the Brain, ancient and continuing norms against bragging push us to find plausible deniability for our brags. We can pretend that big vocabularies help us convey info, that sports are just fun, and that expensive clothes, etc. are prettier or more comfortable. It is much harder to find excuses to wave around your IQ test or bank statement for others to see.

Now consider these comments by Tyler Cowen on Bryan Caplan’s new book The Case Against Education:

Bryan’s strangest assumption, namely a sociologically-rooted, actually anti-economics “conformity is stronger than you think” argument, which Bryan uses to assert the status quo will continue more or less indefinitely. It won’t. To the extent Bryan is correct (and that you can debate, but at least he is more correct than most people in the educational establishment will let on), competency-based learning and changes in employer behavior will in fact bring about a new equilibrium…not quickly, but certainly in well under two decades.

And what about on-line education? Well, a lot of students don’t like it because they have to actually work on their own and pay attention. To the extent education really is just signaling, that should give on-line options a brighter future all the more. But not in the Caplanian world view, as conformity serves once again as an intervening factor. For better or worse, Bryan’s book subverts economics as a science at least as much as it does education. Bryan of course is smart enough to see the trade-offs here, and he knows if the standard model of economic competition were allowed to reign supreme, we would (even with subsidies, relative to those subsidies) tend to see strong moves toward relatively efficient means of signaling, if only through changes in the relative sizes of institutions.

Tyler suggests that Bryan’s views imply competency-based learning and on-line education are more efficient signals, and so should win a market competition for customers. Yet I don’t see it. Yes, such approaches may let some learn more, faster, and signal what they have learned. But Bryan and I see school as less about learning.

Both competency-based learning and on-line education divorce learning from its usual social conformity context. You can use them to learn what you want when you want, and then to prove what you’ve learned. You don’t have to commit to and keep up with a standard plan of what to learn when shared by a large cohort, nor be visibly compared to this cohort.

Yes, such variations may let one better show initiative, independence, creativity, and self-actualization. And yes, we give lip service to admiring such features. But employers are not usually that eager to see such features in their employees. The usual learning plan, in contrast, is much more like a typical workplace, where workers have less freedom to choose their projects, must coordinate plans closely, and must deal with office politics and conformity pressures. It seems to me that success in the usual schooling plans works better as a signal of future workplace performance, and so would not be outcompeted by competency-based learning and on-line education, even if those let you learn some things faster, and even if change were easier than it is.

On Value Drift

The outcomes within any space-time region can be seen as resulting from 1) preferences of various actors able to influence the universe in that region, 2) absolute and relative power and influence of those actors, and 3) constraints imposed by the universe. Changes in outcomes across regions result from changes in these factors.

While you might mostly approve of changes resulting from changing constraints, you might worry more about changes due to changing values and influence. That is, you likely prefer to see more influence by values closer to yours. Unfortunately, the consistent historical trend has been for values to drift over time, increasing the distance between random future and current values. As this trend looks like a random walk, we see no obvious limit to how far values can drift. So if the value you place on the values of others falls rapidly enough with the distance between values, you should expect long term future values to be very wrong.
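
To see why a random walk suggests no limit to drift, consider a minimal simulation sketch. Compressing “values” to a point wandering in a two-dimensional space is my illustrative assumption here, not a claim about the real structure of value space:

```python
import math
import random

def mean_drift_distance(steps, step_size=1.0, trials=2000):
    """Average distance from the starting point after a 2-D random walk."""
    total = 0.0
    for _ in range(trials):
        x = y = 0.0
        for _ in range(steps):
            angle = random.uniform(0.0, 2.0 * math.pi)
            x += step_size * math.cos(angle)
            y += step_size * math.sin(angle)
        total += math.hypot(x, y)
    return total / trials

# Expected distance grows roughly as sqrt(steps), without bound.
for steps in (10, 100, 1000):
    print(steps, round(mean_drift_distance(steps), 2))
```

Absent some restoring force, the expected distance from today’s values just keeps growing.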

What influences value change?
Inertia – The more existing values are tied to important entrenched systems, the less they change.
Growth – On average, over time civilization collects more total influence over most everything.
Competition – If some values consistently win key competitive contests, those values become more common.
Influence Drift – Many processes that change the world produce random drift in agent influence.
Internal Drift – Some creatures, e.g., humans, have values that drift internally in complex ways.
Culture Drift – Some creatures, e.g., humans, have values that change together in complex ways.
Context – Many of the above processes depend on other factors, such as technology, wealth, a stable sun, etc.

For many of the above processes, rates of change are roughly proportional to overall social rates of change. As these rates of change have increased over time, we should expect faster future change. Thus you should expect values to drift faster in the future than they did in the past, leading faster to wrong values. Also, people are living longer now than they did in the past. So even if past people didn’t live long enough to see changes big enough to greatly bother them, future people may live to see much more change.

Most increases in the rates of change have been concentrated in a few sudden large jumps (associated with the culture, farmer, and industry transitions). As a result, you should expect that rates of change may soon increase greatly. Value drift may continue at past rates until it suddenly goes much faster.

Perhaps you discount the future rapidly, or perhaps the value you place on other values falls slowly with value distance. In these cases value drift may not disturb you much. Otherwise, the situation described above may seem pretty dire. Even if previous generations had to accept the near inevitability of value drift, you might not accept it now. You may be willing to reach for difficult and dangerous changes that could remake the whole situation. Such as perhaps a world government. Personally I see that move as too hard and dangerous for now, but I could understand if you disagree.

The people today who seem most concerned about value drift also seem to be especially concerned about humans or ems being replaced by other forms of artificial intelligence. Many such people are also concerned about a “foom” scenario of a large and sudden influence drift: one initially small computer system suddenly becomes able to grow far faster than the rest of the world put together, allowing it to quickly take over the world.

To me, foom seems unlikely: it posits an innovation that is extremely lumpy compared to historical experience, and in addition posits an unusually high difficulty of copying or complementing this innovation. Historically, innovation value has been distributed with a long thin tail: most realized value comes from many small innovations, but we sometimes see lumpier innovations. (Alpha Zero seems only weak evidence on the distribution of AI lumpiness.) The past history of growth rate increases suggests that within a few centuries we may see something, perhaps a very lumpy innovation, that causes a growth rate jump comparable in size to the largest jumps we’ve ever seen, such as at the origins of life, culture, farming, and industry. However, as over history the ease of copying and complementing such innovations has been increasing, it seems unlikely that copying and complementing will suddenly get much harder.

While foom seems unlikely, it does seem likely that within a few centuries we will develop machines that can outcompete biological humans for most all jobs. (Such machines might also outcompete ems for jobs, though that outcome is much less clear.) The ability to make such machines seems by itself sufficient to cause a growth rate increase comparable to the other largest historical jumps. Thus the next big jump in growth rates need not be associated with a very lumpy innovation. And in the most natural such scenarios, copying and complementing remain relatively easy.

However, while I expect machines that outcompete humans for jobs, I don’t see how that greatly increases the problem of value drift. Human cultural plasticity already ensures that humans are capable of expressing a very wide range of values. I see no obvious limits there. Genetic engineering will allow more changes to humans. Ems inherit human plasticity, and may add even more via direct brain modifications.

In principle, non-em-based artificial intelligence is capable of expressing the entire space of possible values. But in practice, in the shorter run, such AIs will take on social roles near humans, and roles that humans once occupied. This should force AIs to express pretty human-like values. As Steven Pinker says:

Artificial intelligence is like any other technology. It is developed incrementally, designed to satisfy multiple conditions, tested before it is implemented, and constantly tweaked for efficacy and safety.

If Pinker is right, the main AI risk mediated by AI values comes from AI value drift that happens after humans (or ems) no longer exercise such detailed frequent oversight.

It may be possible to create competitive AIs with protected values, i.e., so that parts where values are coded are small, modular, redundantly stored, and insulated from changes to the rest of the system. If so, such AIs may suffer much less from internal drift and cultural drift. Even so, the values of AIs with protected values should still drift due to influence drift and competition.

Thus I don’t see why people concerned with value drift should be especially focused on AI. Yes, AI may accompany faster change, and faster change can make value drift worse for people with intermediate discount rates. (Though it seems to me that altruistic discount rates should scale with actual rates of change, not with arbitrary external clocks.)

Yes, AI offers more prospects for protected values, and perhaps also for creating a world/universe government capable of preventing influence drift and competition. But in these cases if you are concerned about value drift, your real concerns are about rates of change and world government, not AI per se. Even the foom scenario just temporarily increases the rate of influence drift.

Your real problem is that you want long term stability in a universe that more naturally changes. Someday we may be able to coordinate to overrule the universe on this. But I doubt we are close enough to even consider that today. To quote a famous prayer:

God, grant me the serenity to accept the things I cannot change,
Courage to change the things I can,
And wisdom to know the difference.

For now value drift seems one of those possibly lamentable facts of life that we cannot change.

Small Change Good, Big Change Bad?

Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency toward mind wandering because they’d have less moral value. On twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies who do not remain militarily vigilant are eventually conquered. On increasing inequality some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
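
As a toy illustration of how cheap long-term influence is relative to short-term influence (the 5% return figure is my assumption, chosen only for illustration):

```python
# At a 5% annual real return, a dollar invested today compounds to
# roughly $131 a century hence. So a unit of influence bought for "now"
# costs about 131 times as much as the same influence bought for a
# century from now.
rate, years = 0.05, 100
print(round((1 + rate) ** years, 1))  # 131.5
```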

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

How Human Are Meditators?

Someday we may be able to create brain emulations (ems), and someday later we may understand them sufficiently to allow substantial modifications to them. Many have expressed concern that competition for efficient em workers might then turn ems into inhuman creatures of little moral worth. This might happen via reductions of brain systems, features, and activities that are distinctly human but that contribute less to work effectiveness. For example Scott Alexander fears loss of moral value due to “a very powerful ability to focus the brain on the task at hand” and ems “neurologically incapable of having their minds drift off while on the job”.

A plausible candidate for em brain reduction to reduce mind drift is the default mode network:

The default mode network is active during passive rest and mind-wandering. Mind-wandering usually involves thinking about others, thinking about one’s self, remembering the past, and envisioning the future.… becomes activated within an order of a fraction of a second after participants finish a task. … deactivate during external goal-oriented tasks such as visual attention or cognitive working memory tasks. … The brain’s energy consumption is increased by less than 5% of its baseline energy consumption while performing a focused mental task. … The default mode network is known to be involved in many seemingly different functions:

It is the neurological basis for the self:

Autobiographical information: Memories of collection of events and facts about one’s self
Self-reference: Referring to traits and descriptions of one’s self
Emotion of one’s self: Reflecting about one’s own emotional state

Thinking about others:

Theory of Mind: Thinking about the thoughts of others and what they might or might not know
Emotions of other: Understanding the emotions of other people and empathizing with their feelings
Moral reasoning: Determining just and unjust result of an action
Social evaluations: Good-bad attitude judgments about social concepts
Social categories: Reflecting on important social characteristics and status of a group

Remembering the past and thinking about the future:

Remembering the past: Recalling events that happened in the past
Imagining the future: Envisioning events that might happen in the future
Episodic memory: Detailed memory related to specific events in time
Story comprehension: Understanding and remembering a narrative

In our book The Elephant in the Brain, we say that key tasks for our distant ancestors were tracking how others saw them, watching for ways others might accuse them of norm violations, and managing stories of their motives and plans to help them defend against such accusations. The difficulty of this task was a big reason humans had such big brains. So it made sense to design our brains to work on such tasks in spare moments. However, if ems could be productive workers even with a reduced capacity for managing their social image, it might make sense to design ems to spend a lot less time and energy ruminating on their image.

Interestingly, many who seek personal insight and spiritual enlightenment try hard to reduce the influence of this key default mode network. Here is Sam Harris from his recent book Waking Up: A Guide to Spirituality Without Religion:

Psychologists and neuroscientists now acknowledge that the human mind tends to wander. … Subjects reported being lost in thought 46.9 percent of the time. … People are consistently less happy when their minds wander, even when the contents of their thoughts are pleasant. … The wandering mind has been correlated with activity in the … “default mode” or “resting state” network (DMN). … Activity in the DMN decreases when subjects concentrate on tasks of the sort employed in most neuroimaging experiments.

The DMN has also been linked with our capacity for “self-representation.” … [it] is more engaged when we make such judgments of relevance about ourselves, as opposed to making them about other people. It also tends to be more active when we evaluate a scene from a first person point of view. … Generally speaking, to pay attention outwardly reduces activity in the [DMN], while thinking about oneself increases it. …

Mindfulness and loving-kindness meditation also decrease activity in the DMN – and the effect is most pronounced among experienced meditators. … Expert meditators … judge the intensity of an unpleasant stimulus the same but find it to be less unpleasant. They also show reduced activity in regions associated with anxiety while anticipating the onset of pain. … Mindfulness reduces both the unpleasantness and intensity of noxious stimuli. …

There is an enormous difference between being hostage to one’s thoughts and being freely and nonjudgmentally aware of life in the present. To make this shift is to interrupt the process of rumination and reactivity that often keeps us so desperately at odds with ourselves and with other people. … Meditation is simply the ability to stop suffering in many of the usual ways, if only for a few moments at a time. … The deepest goal of spirituality is freedom from the illusion of the self. (pp.119-123)

I see a big conflict here. On the one hand, many are concerned that competition could destroy moral value by cutting away distinctively human features of em brains, and the default net seems a prime candidate for cutting. On the other hand, many see meditation as a key to spiritual insight, one of the highest human callings, and a key task in meditation is cutting the influence of the default net. Ems with a reduced default net could more easily focus, be mindful, see the illusion of the self, and feel more at peace and less anxious about their social image. So which is it, do such ems achieve our highest spiritual ideals, or are they empty shells mostly devoid of human value? Can’t be both, right?

By the way, I was reading Harris because he and I will record a podcast Feb 21 in Denver.

A Salute To Median Calm

It is a standard trope of fiction that people often get angry when they suffer life outcomes well below what they see as their justified expectations. Such sore losers are tempted to retaliate against the individuals and institutions they blame for their loss, causing increasing damage until others agree to fix the unfairness.

Most outcomes, like income or fame, are distributed with mean outcomes well above median outcomes. As a result, well over half of everyone gets an outcome below what they could have reasonably expected. So if this sore loser trope were true, there’d be a whole lot of angry folks causing damage. Maybe even most people would be this angry. Hard to see how civilization could function here. This scenario is often hoped-for by those who seek dramatic revolutions to fix large scale social injustices.
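
As a minimal sketch of that arithmetic, using a lognormal distribution as a stand-in for skewed outcomes like income or fame (the distribution and its parameters are my illustrative choices):

```python
import random

random.seed(0)
# Right-skewed outcomes: the mean sits well above the median.
outcomes = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

mean = sum(outcomes) / len(outcomes)
median = sorted(outcomes)[len(outcomes) // 2]
share_below_mean = sum(o < mean for o in outcomes) / len(outcomes)

print(f"mean = {mean:.2f}, median = {median:.2f}")       # ~1.65 vs ~1.00
print(f"share below the mean = {share_below_mean:.0%}")  # ~69%
```

So here roughly two-thirds of people land below the mean outcome, even though nothing went unfairly wrong for any of them.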

Actually, however, even though most people might plausibly see themselves as unfairly assigned to be losers, few become angry enough to cause much damage. Oh most people will have resentments and complaints, and this may lead on occasion to mild destruction, but most people are mostly peaceful. In the words of the old song, while they may not get what they want, they mostly get what they need.

Not only do most people achieve much less than the average outcomes, they achieve far less than the average outcomes that they see in media and fiction. Furthermore, most people eventually realize that the world is often quite hypocritical about the qualities it rewards. That is, early in life people are told that certain admired types of efforts and qualities are the ones with the best chance to lead to high outcomes. But later people learn that in fact other, less cooperative or fair, strategies are often rewarded more. They may thus reasonably conclude that the game was rigged, and that they failed in part because they were fooled for too long.

Given all this, we should be somewhat surprised, and quite grateful, to live in such a calm world. Most people fall below the standard of success set by average outcomes, and far below that set by typical media-visible outcomes. And they learn that their losses are caused in part by winners taking illicit strategies and lying to them about the rewards to admired strategies. Yet contrary to the common fictional trope, this does not induce them to angrily try to burn down our shared house of civilization.

So dear mostly-calm near-median person, I respectfully salute you. Without you and your stoic acceptance, civilization would not be possible. Perhaps I should salute men a bit more, as they are more prone to violent anger, and suffer higher variance and thus higher mean to median outcome ratios. And perhaps the old a bit more too, as they see more of the world’s hypocrisy, and can hope much less for success via big future reversals. But mostly, I salute you all. Humans are indeed amazing creatures.

The Ems of Altered Carbon

People keep suggesting that I can’t possibly present myself as an expert on the future if I’m not familiar with their favorite science fiction (sf). I say that sf mostly pursues other purposes and rarely tries much to present realistic futures. But I figure I should illustrate my claim with concrete examples from time to time. Which brings us to Altered Carbon, a ten-episode sf series just out on Netflix, based on a 2002 novel. I’ve watched the series, and read the novel and its two sequels.

Altered Carbon’s key tech premise is a small “stack” which can sit next to a human brain collecting and continually updating a digital representation of that brain’s full mental state. This state can also be transferred into the rest of that brain, copied to other stacks, or placed and run in an android body or a virtual reality. Thus stacks allow something much like ems who can move between bodies.

But the universe of Altered Carbon looks very different from my description of the Age of Em. Set many centuries in future, our descendants have colonized many star systems. Technological change then is very slow; someone revived after sleeping for centuries is familiar with almost all the tech they see, and they remain state-of-the-art at their job. While everyone is given a stack as a baby, almost all jobs are done by ordinary humans, most of whom are rather poor and still in their original body, the only body they’ll ever have. Few have any interest in living in virtual reality, which is shown as cheap, comfortable, and realistic; they’d rather die. There’s also little interest in noticeably-non-human android bodies, which could plausibly be pretty cheap.

Regarding getting new very-human-like physical bodies, some have religious objections, many are uninterested, but most are just too poor. So most stacks are actually never used. Stacks can insure against accidents that kill a body but don’t hurt the stack. Yet while it should be cheap and easy to back up stack data periodically, inexplicably only rich folks do that.

It is very illegal for one person to have more than one stack running at a time. Crime is often punished by taking away the criminal’s body, which creates a limited supply of bodies for others to rent. Very human-like clone and android bodies are also available, but are very expensive. Over the centuries some have become very rich and long-lived “meths”, paying for new bodies as needed. Meths run everything, and are shown as inhumanly immoral, often entertaining themselves by killing poor people, often via sex acts. Our hero was once part of a failed revolution to stop meths via a virus that kills anyone with a century of subjective experience.

Oh, and there have long been fully human level AIs who are mainly side characters that hardly matter to this world. I’ll ignore them, as criticizing the scenario on these grounds is way too easy.

Now my analysis says that there’d be an enormous economic demand for copies of ems, who can do most all jobs via virtual reality or android bodies. If very human-like physical bodies are too expensive, the economy would just skip them. If allowed, ems would quickly take over all work, most activity would be crammed in a few dense cities, and the economy could double monthly. Yet while war is common in the universe of Altered Carbon, and spread across many star systems, no place ever adopts the huge winning strategy of unleashing such an em economy and its associated military power. While we see characters who seek minor local advantages get away for long times with violating the rule against copying, no one ever tries to do this to get vastly rich, or to win a war. No one even seems aware of the possibility.

Even ignoring the AI bit, I see no minor modification to make this into a realistic future scenario. It is made more to be a morality play, to help you feel righteous indignation at those damn rich folks who think they can just live forever by working hard and saving their money over centuries. If there are ever poor humans who can’t afford to live forever in very human-like bodies, even if they could easily afford android or virtual immortality, well then both the rich and the long-lived should all burn! So you can feel morally virtuous watching hour after hour of graphic sex and violence toward that end. As it happens, hand-to-hand combat, typically producing big spurts of blood, and often among nudes, is how most conflicts get handled in this universe. Enjoy!

Toward Better Signals

While we tend to say and think otherwise, in fact much of what we do is oriented toward helping us to show off. (Our new book argues for this at length.) Assuming this is true, what does a better world look like?

In simple signaling models, people tend to do too much of the activities they use to signal. This suggests that a better world is one that taxes or limits such activities. Say by taxing or limiting school, hospitals, or sporting contests. However, this is hard to arrange because signaling via political systems tends to create the opposite: subsidies and minimum required levels of such widely admired activities. (Though socializing such activities under limited government budgets is often effective.) Also, if we put most all of our life energy into signaling, then limits or taxes on just signaling activities will mainly result in us diverting our efforts to other signals.
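
To make “too much signaling” concrete, here is a sketch of a textbook Spence-style separating equilibrium, in which education is assumed to be pure signal with no productive value; the framing and numbers are my illustration, not a model from the post:

```python
# Two worker types; wages equal productivity; education e costs e/ability
# and adds nothing to productivity (pure signal).
w_low, w_high = 1.0, 2.0  # productivities, and hence wages, of each type
a_low, a_high = 1.0, 2.0  # abilities: higher ability means cheaper signaling

# Smallest education level at which low types won't mimic high types:
#   w_high - e/a_low <= w_low   =>   e >= a_low * (w_high - w_low)
e_star = a_low * (w_high - w_low)

# High types still gain by acquiring e_star, yet the effort is pure waste.
assert w_high - e_star / a_high > w_low
print(f"separating education level: {e_star}")  # 1.0 units of wasted effort
```

In this toy equilibrium, taxing or capping the signal would leave output unchanged while saving the signaling cost, which is the sense in which such limits can be a win.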

If some signaling activities have larger positive externalities, then it seems an obvious win to use taxes, subsidies, etc. to divert our efforts into those activities. This is plausibly why we try to praise people more for showing off via charity, innovation, or whistleblowing. Similarly, we tend to criticize activities like war and other violence with large negative externalities. We should continue to do these things, and also look for other such activities worthy of extra praise or criticism.

However, on reflection I think the biggest problem with signals today is the quality of our audience. When the audience that we want to impress knows little about how our visible actions connect to larger consequences, then we also need not attend much to such connections. For example, to show an audience that we care enough about someone via helping them to get medicine, we need only push the sort of medicine that our audience thinks is effective. Similarly for using charity to convince an audience we care about the poor, politics to convince an audience we care about our nation, or creative activities to convince an audience we promote innovation.

What if our audiences knew more about which medicines helped health, which charities helped the poor, which national policies help the nation, or which creative activities promoted innovation? That would push us to also know more, and lead us to choose more effective medicines, charities, policies, and innovations. All to the world’s benefit. So what could make the audiences that we seek to impress know more about how our activities connect to these larger consequences?

One approach is to make our audiences more elite. Today our efforts to gain more likes on social media have us pandering to a pretty broad and ignorant audience. In contrast, in many old-world rags-to-riches stories, a low person rose in rank via a series of encounters with higher persons, each of whom was suitably impressed. The more that we expect to gain via impressing better-informed elites, the better informed will our show-off actions be.

But this isn’t just about who we seek to impress. It is also about whether we impress them via many small encounters, or via a few big ones. In larger encounters, our audience can take more time to judge how much we really understand about what we are doing. Yes risk and randomness could dominate if the main encounters that mattered to us were too small in number. But we seem pretty far away from that limit at the moment. For now, we’d have a better world of signals if we tried more to impress via a smaller number of more intense encounters with better informed elites.

Of course to fill this role of a better informed audience, it isn’t enough for “elites” to merely be richer, prettier, or more popular. They need to actually know more about how signaling actions connect to larger consequences. So there can be outsized gains from better educating elites on such things, and from selecting our elites more from those who are better educated on them. And anything that distracts elites from performing well in this crucial role can have outsized costs.

Of course there’s a lot more to figure out here; I’ve just scratched the surface. But still, I thought I should plant a flag now, and show that it is possible to think more carefully about how to make a better world, when that world is chock full of signaling.

On Unsolved Problems

Imagine a book review:

The authors present convincing evidence that since 1947 aliens from beyond Earth are here on Earth, can pass as humans, have been living among us, and increasingly influence human affairs. The authors plausibly identify the industries, professions, and geographic regions where aliens have the most influence, and the primary methods of alien influence. Furthermore the authors have made their evidence and analysis accessible to a wide audience in a readable and entertaining book, and have published it via a respectable academic press to enable its conclusions to be believed by a wide audience.

Unfortunately, the authors only offer vague and general plans for dealing with these meddling aliens. They offer no cheap and reliable way to detect individual aliens, nor to overpower and neutralize them once detected. What good is it to know about aliens without a detailed response plan? Save your money and buy another book.

Or imagine deleting that last paragraph, and adding this instead:

The authors go further and offer plausible physical mechanisms by which we might detect individual aliens and neutralize their influence. The authors also offer a ten point plan and outline a rough budget for a project to implement this plan.

Unfortunately, they give no detailed schematics for physical devices to detect and neutralize aliens, nor do they offer a specific manufacturing process plan. In addition, they don’t say much about how to fund or staff their proposed project. This project would be international in scope and probably continue for decades. Yet the authors don’t bother to address how to guarantee gender, racial, and national equity when choosing personnel, nor how to achieve national and generational equity in funding. They don’t even give a detailed plan for managing the disruption should a war break out.

What good is it to know about aliens, physical mechanisms to detect and neutralize them, and a ten point plan for managing this, if we lack detailed device schematics, manufacturing processes, plans to ensure equitable hiring and funding, and war contingencies? Save your money and buy another book.

I could go on, but you get the idea. You should want to learn about problems you face, even if you don’t yet know how to solve them. The above snark was inspired by this review by Samuel Hammond of Elephant in the Brain. He starts with kind praise:

An entertaining and insightful book that sheds light on a diverse collection of perplexing human behaviors. …

And then he details this criticism:

The book is largely an exercise in simply convincing the reader of the elephant’s existence by hammering away with example after example. As a result of that hammering, The Elephant in the Brain ends up being light on public policy upshots — far more Theory of Moral Sentiments than Wealth of Nations. That’s unfortunate, since the ideas in the book are bursting with potential applications. Worse, however, is the scant attention paid to helping the reader pick up the pieces of their shattered psyche. Instead, Simler and Hanson simply highlight the need to better align public institutions with our hidden motives, leaving the all-important “how” question relatively untouched. …

It at least seems possible to tame the social aspects of our adaptive unconscious with the right self-help techniques, from classroom exercises to mindfulness meditation. This was essentially the strategy developed by the Cynics of ancient Greece. Through rigorous training, the Cynics managed to forgo the pursuit of wealth, sex, and fame in favor of mental clarity and rational ethics.

This is the direction I had hoped The Elephant in the Brain would lead. After all, the elephant in the brain is located squarely in what psychologists call our brain’s “System 1,” or the automatic, noncognitive, and fast mode of thinking. That still leaves our “System 2,” or analytical, cognitive, and slow mode of thinking, as a potential tool for transcending our lowly origins. By failing to give our System 2 mode a balanced consideration, The Elephant in the Brain inadvertently falls into the expanding genre of pop-psych books that simply recapitulate David Hume’s famous assertion that “reason is, and ought only to be the slave of the passions.” …

Haidt’s more recent book, The Righteous Mind, helps to illustrate the pragmatic problem. … Without denying Haidt’s empirical findings, an inviolable application of this theory raises an obvious question: How could one ever hope to hold to a rational political philosophy at all? …

It seems like Simler was ultimately able to transcend the Silicon Valley rat-race with the employ of his System Two, or cognitive, mode of thinking. That is, he was rationally persuaded to pull the elephant by the reins and steer his life towards truth-seeking.

Our book mainly identifies hidden motives via explaining patterns of behavior that are poorly explained by our usual claimed motives. These patterns result from the usual mix of automatic and reasoned thinking, of impulse and self-control. I’ve seen no evidence that these patterns are weaker for people or places where reason or self-control matters more. This includes the example of my coauthor’s choice to write this book.

Without any concrete evidence suggesting that hidden motives matter more or less when there is more reason or self-control, I don’t see why discussing reason and self-control was a priority for our book. And I doubt that merely promoting reason or self-control is sufficient to reduce the influence of hidden motives.
