Monthly Archives: February 2018

Small Change Good, Big Change Bad?

Recently I posted on how many seek spiritual insight via cutting the tendency of their minds to wander, yet some like Scott Alexander fear ems with a reduced tendency to mind wandering because they’d have less moral value. On twitter Scott clarified that he doesn’t mind modest cuts in mind wandering; what he fears is extreme cuts. And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

On nature preserves, some fear eventually losing all of wild nature, but when arguing for any particular development others say we need new things and we still have plenty of nature. On military spending, some say the world is peaceful and we have many things we’d rather spend money on, while others say that societies who do not remain militarily vigilant are eventually conquered. On increasing inequality some say that high enough inequality must eventually result in inadequate human capital investments and destructive revolutions, while others say there’s little prospect of revolution now and inequality has historically only fallen much in big disasters such as famine, war, and state collapse. On value drift, some say it seems right to let each new generation choose its values, while others say a random walk in values across generations must eventually drift very far from current values.
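
The value-drift worry in that last example is a quantitative claim: if each generation’s values shift by an independent random step, the typical distance from today’s values grows without bound, roughly as the square root of the number of generations. Here is a toy simulation of that claim; the function name, step size, and Gaussian steps are illustrative assumptions of mine, not anything from the post:

```python
import math
import random

def value_drift(generations, step_sd=1.0, trials=2000, seed=0):
    """Simulate a random walk in a one-dimensional 'value' across
    generations; return the average absolute drift from the start."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = 0.0
        for _ in range(generations):
            v += rng.gauss(0.0, step_sd)  # each generation shifts values a bit
        total += abs(v)
    return total / trials

# The expected absolute drift of a Gaussian walk after n steps is
# step_sd * sqrt(2 * n / pi), so drift grows roughly as sqrt(n).
for n in (1, 4, 16, 64):
    print(n, round(value_drift(n), 2), round(math.sqrt(2 * n / math.pi), 2))
```

So even if each generation moves only modestly, enough generations take values arbitrarily far from today’s, which is exactly the long-run worry.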

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this can result in a net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
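
The compounding point above can be made concrete with a bit of arithmetic; the 5% real return and the function name are illustrative assumptions of mine:

```python
def future_value(principal, annual_return, years):
    """Value of an investment compounding at a fixed annual rate."""
    return principal * (1 + annual_return) ** years

# At a 5% real return, $1 set aside today funds about $131 of
# spending a century from now, so influence on the long run is
# cheap relative to the same influence bought in the short run.
print(round(future_value(1.0, 0.05, 100), 2))
```

Anyone sincerely focused on the long run should find such investments attractive, which is why their rarity raises doubts.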

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

Third, our ability to foresee the future rapidly declines with time. The more other things that may happen between today and some future date, the harder it is to foresee what may happen at that future date. We should be increasingly careful about the inferences we draw about longer terms.

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

How Human Are Meditators?

Someday we may be able to create brain emulations (ems), and someday later we may understand them sufficiently to allow substantial modifications to them. Many have expressed concern that competition for efficient em workers might then turn ems into inhuman creatures of little moral worth. This might happen via reductions of brain systems, features, and activities that are distinctly human but that contribute less to work effectiveness. For example Scott Alexander fears loss of moral value due to “a very powerful ability to focus the brain on the task at hand” and ems “neurologically incapable of having their minds drift off while on the job”.

A plausible candidate for em brain reduction to reduce mind drift is the default mode network:

The default mode network is active during passive rest and mind-wandering. Mind-wandering usually involves thinking about others, thinking about one’s self, remembering the past, and envisioning the future.… becomes activated within an order of a fraction of a second after participants finish a task. … deactivate during external goal-oriented tasks such as visual attention or cognitive working memory tasks. … The brain’s energy consumption is increased by less than 5% of its baseline energy consumption while performing a focused mental task. … The default mode network is known to be involved in many seemingly different functions:

It is the neurological basis for the self:

Autobiographical information: Memories of collection of events and facts about one’s self
Self-reference: Referring to traits and descriptions of one’s self
Emotion of one’s self: Reflecting about one’s own emotional state

Thinking about others:

Theory of Mind: Thinking about the thoughts of others and what they might or might not know
Emotions of other: Understanding the emotions of other people and empathizing with their feelings
Moral reasoning: Determining just and unjust result of an action
Social evaluations: Good-bad attitude judgments about social concepts
Social categories: Reflecting on important social characteristics and status of a group

Remembering the past and thinking about the future:

Remembering the past: Recalling events that happened in the past
Imagining the future: Envisioning events that might happen in the future
Episodic memory: Detailed memory related to specific events in time
Story comprehension: Understanding and remembering a narrative

In our book The Elephant in the Brain, we say that key tasks for our distant ancestors were tracking how others saw them, watching for ways others might accuse them of norm violations, and managing stories of their motives and plans to help them defend against such accusations. The difficulty of this task was a big reason humans had such big brains. So it made sense to design our brains to work on such tasks in spare moments. However, if ems could be productive workers even with a reduced capacity for managing their social image, it might make sense to design ems to spend a lot less time and energy ruminating on their image.

Interestingly, many who seek personal insight and spiritual enlightenment try hard to reduce the influence of this key default mode network. Here is Sam Harris from his recent book Waking Up: A Guide to Spirituality Without Religion:

Psychologists and neuroscientists now acknowledge that the human mind tends to wander. … Subjects reported being lost in thought 46.9 percent of the time. … People are consistently less happy when their minds wander, even when the contents of their thoughts are pleasant. … The wandering mind has been correlated with activity in the … “default mode” or “resting state” network (DMN). … Activity in the DMN decreases when subjects concentrate on tasks of the sort employed in most neuroimaging experiments.

The DMN has also been linked with our capacity for “self-representation.” … [it] is more engaged when we make such judgements of relevance about ourselves, as opposed to making them about other people. It also tends to be more active when we evaluate a scene from a first person point of view. … Generally speaking, to pay attention outwardly reduces activity in the [DMN], while thinking about oneself increases it. …

Mindfulness and loving-kindness meditation also decrease activity in the DMN – and the effect is most pronounced among experienced meditators. … Expert meditators … judge the intensity of an unpleasant stimulus the same but find it to be less unpleasant. They also show reduced activity in regions associated with anxiety while anticipating the onset of pain. … Mindfulness reduces both the unpleasantness and intensity of noxious stimuli. …

There is an enormous difference between being hostage to one’s thoughts and being freely and nonjudgmentally aware of life in the present. To make this shift is to interrupt the process of rumination and reactivity that often keep us so desperately at odds with ourselves and with other people. … Meditation is simply the ability to stop suffering in many of the usual ways, if only for a few moments at a time. … The deepest goal of spirituality is freedom from the illusion of the self. (pp.119-123)

I see a big conflict here. On the one hand, many are concerned that competition could destroy moral value by cutting away distinctively human features of em brains, and the default net seems a prime candidate for cutting. On the other hand, many see meditation as a key to spiritual insight, one of the highest human callings, and a key task in meditation is cutting the influence of the default net. Ems with a reduced default net could more easily focus, be mindful, see the illusion of the self, and feel more at peace and less anxious about their social image. So which is it, do such ems achieve our highest spiritual ideals, or are they empty shells mostly devoid of human value? Can’t be both, right?

By the way, I was reading Harris because he and I will record a podcast Feb 21 in Denver.

A Salute To Median Calm

It is a standard trope of fiction that people often get angry when they suffer life outcomes well below what they see as their justified expectations. Such sore losers are tempted to retaliate against the individuals and institutions they blame for their loss, causing increasing damage until others agree to fix the unfairness.

Most outcomes, like income or fame, are distributed with mean outcomes well above median outcomes. As a result, well over half of everyone gets an outcome below what they could have reasonably expected. So if this sore loser trope were true, there’d be a whole lot of angry folks causing damage. Maybe even most people would be this angry. Hard to see how civilization could function here. This scenario is often hoped-for by those who seek dramatic revolutions to fix large scale social injustices.

Actually, however, even though most people might plausibly see themselves as unfairly assigned to be losers, few become angry enough to cause much damage. Oh, most people will have resentments and complaints, and this may lead on occasion to mild destruction, but most people are mostly peaceful. In the words of the old song, while they may not get what they want, they mostly get what they need.

Not only do most people achieve much less than the average outcomes, they achieve far less than the average outcomes that they see in media and fiction. Furthermore, most people eventually realize that the world is often quite hypocritical about the qualities it rewards. That is, early in life people are told that certain admired types of efforts and qualities are the ones with the best chance to lead to high outcomes. But later people learn that in fact other less cooperative or fair strategies are often rewarded more. They may thus reasonably conclude that the game was rigged, and that they failed in part because they were fooled for too long.

Given all this, we should be somewhat surprised, and quite grateful, to live in such a calm world. Most people fall below the standard of success set by average outcomes, and far below that set by typical media-visible outcomes. And they learn that their losses are caused in part by winners taking illicit strategies and lying to them about the rewards to admired strategies. Yet contrary to the common fictional trope, this does not induce them to angrily try to burn down our shared house of civilization.

So dear mostly-calm near-median person, I respectfully salute you. Without you and your stoic acceptance, civilization would not be possible. Perhaps I should salute men a bit more, as they are more prone to violent anger, and suffer higher variance and thus higher mean to median outcome ratios. And perhaps the old a bit more too, as they see more of the world’s hypocrisy, and can hope much less for success via big future reversals. But mostly, I salute you all. Humans are indeed amazing creatures.
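
The claim that higher variance implies a higher mean-to-median ratio can be made precise under a common modeling assumption: if outcomes are roughly lognormal, the median is exp(μ) while the mean is exp(μ + σ²/2), so the ratio of mean to median is exp(σ²/2), which rises with the log-scale spread σ. A sketch; the lognormal assumption and the particular σ values are mine, not the post’s:

```python
import math

def mean_to_median_ratio(sigma):
    """For a lognormal distribution with log-scale spread sigma,
    the median is exp(mu) and the mean is exp(mu + sigma**2 / 2),
    so the mean-to-median ratio is exp(sigma**2 / 2)."""
    return math.exp(sigma ** 2 / 2)

# Higher variance pushes the mean further above the median, so a
# larger share of the population falls short of the average outcome.
for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(mean_to_median_ratio(sigma), 2))
```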

The Ems of Altered Carbon

People keep suggesting that I can’t possibly present myself as an expert on the future if I’m not familiar with their favorite science fiction (sf). I say that sf mostly pursues other purposes and rarely tries much to present realistic futures. But I figure I should illustrate my claim with concrete examples from time to time. Which brings us to Altered Carbon, a ten episode sf series just out on Netflix, based on a 2002 novel. I’ve watched the series, and read the novel and its two sequels.

Altered Carbon’s key tech premise is a small “stack” which can sit next to a human brain collecting and continually updating a digital representation of that brain’s full mental state. This state can also be transferred into the rest of that brain, copied to other stacks, or placed and run in an android body or a virtual reality. Thus stacks allow something much like ems who can move between bodies.

But the universe of Altered Carbon looks very different from my description of the Age of Em. The story is set many centuries in the future, when our descendants have colonized many star systems. Technological change then is very slow; someone revived after sleeping for centuries is familiar with almost all the tech they see, and they remain state-of-the-art at their job. While everyone is given a stack as a baby, almost all jobs are done by ordinary humans, most of whom are rather poor and still in their original body, the only body they’ll ever have. Few have any interest in living in virtual reality, which is shown as cheap, comfortable, and realistic; they’d rather die. There’s also little interest in noticeably-non-human android bodies, which could plausibly be pretty cheap.

Regarding getting new very-human-like physical bodies, some have religious objections, many are uninterested, but most are just too poor. So most stacks are actually never used. Stacks can insure against accidents that kill a body but don’t hurt the stack. Yet while it should be cheap and easy to back up stack data periodically, inexplicably only rich folks do that.

It is very illegal for one person to have more than one stack running at a time. Crime is often punished by taking away the criminal’s body, which creates a limited supply of bodies for others to rent. Very human-like clone and android bodies are also available, but are very expensive. Over the centuries some have become very rich and long-lived “meths”, paying for new bodies as needed. Meths run everything, and are shown as inhumanly immoral, often entertaining themselves by killing poor people, often via sex acts. Our hero was once part of a failed revolution to stop meths via a virus that kills anyone with a century of subjective experience.

Oh, and there have long been fully human level AIs who are mainly side characters that hardly matter to this world. I’ll ignore them, as criticizing the scenario on these grounds is way too easy.

Now my analysis says that there’d be an enormous economic demand for copies of ems, who can do most all jobs via virtual reality or android bodies. If very human-like physical bodies are too expensive, the economy would just skip them. If allowed, ems would quickly take over all work, most activity would be crammed in a few dense cities, and the economy could double monthly. Yet while war is common in the universe of Altered Carbon, and spread across many star systems, no place ever adopts the huge winning strategy of unleashing such an em economy and its associated military power. While we see characters who seek minor local advantages get away with violating the rule against copying for long periods, no one ever tries this to get vastly rich, or to win a war. No one even seems aware of the possibility.
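
To see how extreme “double monthly” is, note that twelve doublings in a year multiply output by 2¹². A trivial bit of arithmetic (the function name is mine):

```python
def annual_growth_factor(doublings_per_year):
    """Growth factor over a year for an economy that doubles
    a fixed number of times per year."""
    return 2 ** doublings_per_year

# Monthly doubling means twelve doublings a year: 2**12 = 4096.
print(annual_growth_factor(12))
```

A society growing four-thousand-fold per year would swamp rivals almost immediately, which is why ignoring the strategy is so implausible.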

Even ignoring the AI bit, I see no minor modification to make this into a realistic future scenario. It is made more to be a morality play, to help you feel righteous indignation at those damn rich folks who think they can just live forever by working hard and saving their money over centuries. If there are ever poor humans who can’t afford to live forever in very human-like bodies, even if they could easily afford android or virtual immortality, well then both the rich and the long-lived should all burn! So you can feel morally virtuous watching hour after hour of graphic sex and violence toward that end. As it happens, hand-to-hand combat, typically producing big spurts of blood, and often among nudes, is how most conflicts get handled in this universe. Enjoy!
