Tag Archives: NearFar

Why We Don’t Know What We Want

Moons and Junes and Ferris wheels
The dizzy dancing way that you feel
As every fairy tale comes real
I’ve looked at love that way

But now it’s just another show
And you leave ’em laughing when you go
And if you care, don’t let them know
Don’t give yourself away

I’ve looked at love from both sides now
From give and take and still somehow
It’s love’s illusions that I recall
I really don’t know love
Really don’t know love at all

Both Sides Now, Joni Mitchell 1966.

If you look at two things up close, it is usually pretty easy to tell which one is closer, and also to tell their relative sizes, e.g., which one might fit inside the other. But if you look far in the distance, such as toward the sky or the horizon, it gets much harder to tell relative sizes or distances. You might notice that one thing occludes another, but for unknown things in different directions it is harder to tell relative sizes or distances.

I see similar effects for things that are more “distant” in other ways, such as in time, social distance, or hypothetically; it also seems harder to judge relative distance when things are further away in these ways. Furthermore, it seems harder to tell which of two abstract descriptions is more abstract, but easier to tell which of two detailed things has more detail. Thus in the sense of near-far (or construal-level) theory, it seems that we generally find it harder to compare relative distances when things are further away.

According to near-far theory, we also frame our more stable, general, and fundamental goals as more far and abstract, compared to the more near local considerations that constrain our plans. Thus this theory seems to predict that we will have more trouble comparing the relative value of our more abstract values. That is, when comparing two general persistent values, we will find it hard to say which one we value more. Thus near-far theory predicts a big puzzling human feature: we know surprisingly little about what we want. For example, we find it very hard to imagine concrete, coherent, and attractive utopias.

When we see an object from up close, and then later see it from afar, we often remember its details from when we saw it up close. So similarly, we might learn to compare our general values by remembering examples of concrete decisions where such values were in conflict. And we do often face concrete situations where we are aware that our general values apply, such as when we are very hungry, horny, injured, or socially embarrassed. Why don’t we learn our values from those?

Here I will invoke my theory of the sacred: for some key values and things, we set our minds to try to always see them in a rather far mode, no matter how close we are to them. This enables different people in a community to bond together by seeing those sacred things in the same way, even when some of them are much closer to them than others. And this also enables a single person to better maintain a unified identity and commitments over time, even when that person sees concrete examples from different distances at different times in their life. (I thank Arnold Brooks for pointing this out in an upcoming MAM podcast.)

For example, most of us have felt strong feelings of lust, limerence, and attachment to other people at many times during our lives. So we should have plenty of data on which to base rough estimates of what exactly is “love”, and how much we value it compared to other things. But our treating love as sacred makes it harder to use that data to construct such a detailed and unified account. Even when we think about concrete examples up close, it seems hard to use those to update our general views on “love”. We still “really don’t know love at all.”

Because we really can’t see love up close and in detail. Because we treat love as sacred. And sacred things we see from afar, so we can see them together.


We See The Sacred From Afar, To See It Together

I’ve recently been trying to make sense of our concept of the “sacred”, by puzzling over its many correlates. And I think I’ve found a way to make more sense of it in terms of near-far (or “construal level”) theory, a framework that I’ve discussed here many times before.

When we look at a scene full of objects, a few of those objects are big and close up, while a lot more are small and far away. And the core idea of near-far is that it makes sense to put more mental energy into analyzing the objects up close, which matter more to us, by paying more attention to their detail, detail often not available about stuff far away. And our brains do seem to be organized around this analysis principle.

That is, we do tend to think less, and think more abstractly, about things far from us in time, distance, social connection, or hypothetically. Furthermore, the more abstractly we think about something, the more distant we tend to assume are its many aspects. In fact, the more distant something is in any way, the more distant we tend to assume it is in other ways.

This all applies not just to dates, colors, sounds, shapes, sizes, and categories, but also to the goals and priorities we use to evaluate our plans and actions. We pay more attention to detailed complexities and feasibility constraints regarding actions that are closer to us, but for far away plans we are content to think about them more simply and abstractly, in terms of relatively general values and principles that depend less on context. And when we think about plans more abstractly, we tend to assume that those actions are further away and matter less to us.

Now consider some other ways in which it might make sense to simplify our evaluation of plans and actions where we care less. We might, for example, just follow our intuitions, instead of consciously analyzing our choices. Or we might just accept expert advice about what to do, and care little about experts’ incentives. If there are several relevant abstract considerations, we might assume they do not conflict, or just pick one of them, instead of trying to weigh multiple considerations against each other. We might simplify an abstract consideration from many parameters down to one factor, down to a few discrete options, or even all the way down to a simple binary split.

It turns out that all of these analysis styles are characteristic of the sacred! We are not supposed to calculate the sacred, but just follow our feelings. We are to trust priests of the sacred more. Sacred things are presumed to not conflict with each other, and we are not to trade them off against other things. Sacred things are idealized in our minds, by simplifying them and neglecting their defects. And we often have sharp binary categories for sacred things; things are either sacred or not, and sacred things are not to be mixed with the non-sacred.

All of which leads me to suggest a theory of the sacred: when a group is united by valuing something highly, they value it in a style that is very abstract, having the features usually appropriate for quickly evaluating things relatively unimportant and far away. Even though this group in fact tries to value this sacred thing highly. Of course, depending on what they try to value, such attempts may have only limited success.

For example, my society (US) tries to value medicine sacredly. So ordinary people are reluctant to consciously analyze or question medical advice; they are instead to just trust its priests, namely doctors, without looking at doctor incentives or track records. Instead of thinking in terms of multiple dimensions of health, we boil it all down to a single health dimension, or even a binary of dead or alive.

Instead of seeing a continuum of cost-effectiveness of medical treatments, along which the rich would naturally go further, we want a binary of good vs bad treatments, where everyone should get the good ones no matter what their cost, and regardless of any other factors besides a diagnosis. We are not to make trades of non-sacred things for medicine, and we can’t quite believe it is ever necessary to trade medicine against other sacred things. Furthermore, we want there to be a sharp distinction between what is medicine and what is not medicine, and so we struggle to classify things like mental therapy or fresh food.

Okay, but if we see sacred things as especially important to us, why ever would we want to analyze them using styles that we usually apply to things that are far away and the least important to us? Well one theory might be that our brains find it hard to code each value in multiple ways, and so typically code our most important values as more abstracted ones, as we tend to apply them most often from a distance.

Maybe, but let me suggest another theory. When a group unites itself by sharing a key “sacred” value, then its members are especially eager to show each other that they value sacred things in the same way. However, when group members hear about and observe how an associate makes key sacred choices, they will naturally evaluate those choices from a distance. So each group member also wants to look at their own choices from afar, in order to see them in the same way that others will see them.

In this view, it is the fact that groups tend to be united by sacred values that is key to explaining why they treat such values in the style usually appropriate for relatively unimportant things seen from far away, even though they actually want to value those things highly. Even though such a from-a-distance treatment will probably lead to a great many errors and misjudgments when actually trying to promote that thing.

You see, it may be more important to groups to pursue a sacred value together than to pursue it effectively. Such as the way the US spends 18% of GDP on medicine, as a costly signal of how sacred medicine is to us, even though the marginal health benefit of our medical spending seems to be near zero. And we show little interest in better institutions that could make such spending far more cost effective.

Because at least this way we all see each other’s ineffective medical choices in the same way. We agree on what to do. And after all, that’s the important thing about medicine, not whether we live or die.

Added 10Sep: Other dual process theories of brains give similar predictions.


Motive/Emotive Blindspot

In this short post what I try to say is unusually imprecise, relative to what I usually try to say. Yet it seems important enough to try to say it anyway.

I’ve noticed a big hole in my understanding, which I think is shared by most economists, and perhaps also most social scientists: details about motives and emotions are especially hard to predict. Consider:

  1. Most of us find it hard to predict how we, or our associates, will feel in particular situations.
  2. We care greatly about how we & associates feel, yet we usually only influence feelings in rather indirect ways.
  3. Even when we have an inkling about how we feel now, we are usually pretty reluctant to tell details on that.
  4. Organizations find it hard to motivate, and to predict the motives of, employees and associates.
  5. Marketers find it hard to motivate, and to predict the motives of, customers.
  6. Movie makers find it very hard to predict which movies people will like.
  7. It is hard for authors, even good ones, to imagine how characters would feel in various situations.
  8. It is hard for even good actors to believably portray what characters feel in situations.
  9. We poorly understand the declining motive power of religion & ideology, or which ones motivate what.
  10. We poorly understand the declining emotional power of rituals, or which ones induce which emotions.

We seem to be built to find it hard to see and predict both our and others’ motives and emotions. Oh we can, from a distance, see some average tendencies well enough to predict a great many overall social tendencies. But when we get to details, up close, our vision fails us.

In many common situations, the motive/emotive variance that we find it hard to predict isn’t much correlated across people or time, and so doesn’t much get in the way of aggregate predictions. But in other common situations, that puzzling variance can be quite correlated.


Abstract Views Are Coming

Two years ago I predicted that the future will eventually take a long view:

If competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate. … The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the “Long View Day” when this starts one of the most important future dates.

Today I predict that the future will also eventually take a more abstract view, also to its benefit. Let me explain.

Recently I posted on how while we don’t have a world government today, we do now have a close substitute: a strong culture of oft-talking world elites, that can and does successfully pressure authorities everywhere to adopt their consensus regulation opinions. This is much like how in forager bands, the prestigious would gossip to form a consensus plan, which everyone would follow.

This “world forager elite”, as I called them, includes experts, but often overrules them in their areas of expertise. And on the many topics for which this elite doesn’t bother to form a consensus, other institutions and powers are allowed to make key decisions.

The quality of their judgements depends on how able and knowledgeable is this global elite, and on how long and carefully they deliberate on each topic. And these parameters are in turn influenced by the types of topics on which they choose to have opinions, and on how thinly they spread themselves across the many topics they consider.

And this is where abstraction has great potential. For example, in order of increasing generality these elites could form opinions on the particular kinds of straws offered in a particular London restaurant, or on plastic straws in general at all restaurants, or on all kinds of straws used everywhere, or on how to set taxes and subsidies for plastic and paper for all food use, or on how to set policy on all plastic and paper taxes and subsidies.

The higher they go up this abstraction ladder, the more that elites can economize on their efforts, to deal with many issues all at once. Yes, it can take more work to reason more abstractly, and there can be more ways to go wrong. And it often helps to first think about concrete examples, and then try to generalize to more abstract conclusions. But abstraction also helps to avoid biases that push us toward arbitrarily treating fundamentally similar things differently. And abstraction can better encompass indirect effects often neglected by concrete analysis. It is certainly my long experience as a social scientist and intellectual that abstraction often pays huge dividends.

So why don’t elites reason more abstractly now? Because they are mostly amateurs who do not understand most topics well enough to abstract them. And because they tend to focus on topics with strong moral colors, for which there is often an expectation of “automatic norms”, wherein we are just supposed to intuit norms without too much explicit analysis.

In the future, I expect us to have smarter better-trained better-selected elites (such as ems), who thus know more basics of more different fields, and are more able to reason abstractly about them. This has been the long term historical trend. Instead of thinking concrete issues through for themselves, and then overruling experts when they disagree, elites are more likely to focus on how to manage experts and give them better incentives, so they can instead trust expert judgements. This should produce better judgements about what to regulate how, and what to leave alone how.

The future will take longer, and more abstract, views. And thus make more sensible decisions. Finally.


Specialized Innovation Is Easier

Consider a few things we know about task specialization and innovation: Larger cities and larger firms both have more specialization and more (i.e., faster) innovation. More global industries also have both more specialization and innovation. And across the great eras of human history (animal, forager, farmer, industry), each era has brought more specialization, and also faster rates of innovation.

Here’s a simple explanation for (part of) this widely observed correlation: It is easier to create tools and procedures to improve tasks the more detail you know about them, and the less that task context varies across the task category. (It is also easier to fully automate such tasks; human level generality is very hard.)

For example, it seems harder to find a way to make a 1% improvement in a generic truck, designed to take any type or size of stuff any distance over any type of road, in any type of weather, relative to a very specific type of truck, such as for carrying animals, oil, cars, ice cream, etc. It gets even easier if you specialize to particular distances, roads, weather, etc. Partly this is because most ways to improve the generic truck will also apply to specialized trucks, but the reverse isn’t true.

This might sound obvious, but note that this is not our usual explanation for these correlations in each context. We usually say that cities are more innovative because they allow more chance interactions that generate ideas, not because they are more specialized. We say larger firms are more innovative because they have larger market shares, and so internalize more of the gains from innovation. We say more global industries are more capital intensive, and capital innovates faster. And we say that it is just a coincidence that over time we have both specialized more and invented better ways to innovate.

My simpler more unified explanation suggests that, more often than we have previously realized, specialization is the key to innovation. So we should look more to finding better ways to specialize to promote future innovation. Such as less product variety and more remote work.

Added 25Sep: A relevant quote:

As Frank Knight once expressed it, the fundamental point about the division of labour is that it is also a system for increasing the efficiency of learning and thus the growth of knowledge


Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Nino weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after a yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
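As a rough illustration of that last point, here is a minimal sketch in Python (all numbers and names are made up for illustration, not drawn from the post) of a parameter that each period moves by a small trend step plus a small random step. Such a parameter does not revert to a mean; its large long-run change is just the accumulation of many small short-run changes.

```python
import random

# A minimal sketch, with made-up drift and volatility numbers: a parameter
# that each year takes a small trend step plus a small random step
# (a basic trend plus a random walk). Unlike a mean-reverting parameter,
# its long-run change is the sum of many small short-run changes.
def simulate(years=200, drift=0.02, volatility=0.05, seed=0):
    random.seed(seed)
    value, path = 0.0, [0.0]
    for _ in range(years):
        value += drift + random.gauss(0.0, volatility)
        path.append(value)
    return path

path = simulate()
yearly_changes = [abs(b - a) for a, b in zip(path, path[1:])]
print("average size of a yearly change:", round(sum(yearly_changes) / len(yearly_changes), 3))
print("total change over 200 years:", round(path[-1] - path[0], 3))
```

Each yearly change is small, yet the total change after a couple of centuries is large, which is the sense in which small local changes and big global changes are the same process seen at different scales.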

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.


Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of the posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might trust it more to give you financial returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

Over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, that Christiano supports to some degree, is that future AIs are smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


Meaning is Easy to Find, Hard to Justify

One of the strangest questions I get when giving talks on Age of Em is a variation on this:

How can ems find enough meaning in their lives to get up and go to work everyday, instead of committing suicide?

As the vast majority of people in most every society do not commit suicide, and manage to get up for work on most workdays, why would anyone expect this to be a huge problem in a random new society?

Even stranger is that I mostly get this question from smart sincere college students who are doing well at school. And I also hear that such students often complain that they do not know how to motivate themselves to do many things that they “want” to do. I interpret this all as resulting from overly far thinking on meaning. Let me explain.

If we compare happiness to meaning, then happiness tends to be an evaluation of a more local situation, while meaning tends to be an evaluation of a more global situation. You are happy about this moment, but you have meaning regarding your life.

Now you can do either of these evaluations in a near or a far mode. That is, you can just ask yourself for your intuitions on how you feel about your life, without over-thinking it, or you can reason abstractly and idealistically about what sort of meaning you should have or can justify having. In that latter more abstract mode, smart sincere people can be stumped. How can they justify having meaning in a world where there is so much randomness and suffering, and that is so far from being a heaven?

Of course in a sense, heaven is an incoherent concept. We have so many random idealistic constraints on what heaven should be like that it isn’t clear that anything can satisfy them all. For example, we may want to be the hero of a dramatic story, even if we know that characters in such stories wish that they could live in more peaceful worlds.

Idealistic young people have such problems in spades, because they haven’t lived long enough to see how unreasonable are their many idealistic demands. And smarter people can think up even more such demands.

But the basic fact is that most everyone in most every society does in fact find meaning in their lives, even if they don’t know how to justify it. Thus I can be pretty confident that ems also find meaning in their lives.

Here are some more random facts about meaning, drawn from my revised Age of Em, out next April.

Today, individuals who earn higher wages tend to have both more happiness and a stronger sense of purpose, and this sense of purpose seems to cause higher wages. People with a stronger sense of purpose also tend to live longer. Nations that are richer tend to have more happiness but less meaning in life, in part because they have less religion. .. Types of meaning that people get from work today include authenticity, agency, self-worth, purpose, belonging, and transcendence.

Happiness and meaning have different implications for behavior, and are sometimes at odds. That is, activities that raise happiness often lower meaning, and vice versa. For example, people with meaning think more about the future, while happy people focus on the here and now. People with meaning tend to be givers who help others, while happy people tend to be takers who are helped by others. Being a parent and spending time with loved ones gives meaning, but spending time with friends makes one happy.

Affirming one’s identity and expressing oneself increase meaning but not happiness. People with more struggles, problems, and stresses have more meaning, but are less happy. Happiness but not meaning predicts a satisfaction of desires, such as for health and money, and more frequent good relative to bad feelings. Older people gain meaning by giving advice to younger people. We gain more meaning when we follow our gut feelings rather than thinking abstractly about our situations.

My weak guess is that productivity tends to predict meaning more strongly than happiness. If this is correct, it suggests that, all else equal, ems will tend to think more about the future, more be givers who help others, spend more time with loved ones and less with friends, more affirm their identity and express themselves, give more advice, and follow gut feelings more. But they will also have more struggles and less often have their desires satisfied.


Future Gender Is Far

What’s the worst systematic bias in thinking on the future? My guess: too much abstraction. The far vs. near mode distinction was first noticed in future thinking, because the effect is so big there.

I posted a few weeks ago that the problem with the word “posthuman” is that it assumes our descendants will differ somehow in a way to make them “other,” without specifying any particular change to do that. It abstracts from particular changes to just embody the abstract idea of othering-change. And I’ve previously noted there are taboos against assuming that something we see as a problem won’t be solved, and even against presenting such a problem without proposing a solution.

In this post let me point out that a related problem plagues thoughts on future gender relations. While many hope that future gender relations will be “better”, most aren’t at all clear on what specifically that entails. For some, all differing behaviors and expectations about genders should disappear, while for others only “legitimate” differences should remain, with little agreement on which are legitimate. This makes it hard to describe any concrete future of gender relations without violating our taboo against failing to solve problems.

For example, at The Good Men Project, Joseph Gelfer discusses the Age of Em. He seems to like or respect the book overall:

Fascinating exploration of what the world may look like once large numbers of computer-based brain emulations are a reality.

But he less likes what he reads on gender:

Hanson sees a future where an em workforce mirrors the most useful and productive forms of workforce that we experience today. .. likely choose [to scan] workaholic competitive types. Because such types tend to be male, Hanson imagines an em workforce that is disproportionately male (these workers also tend to rise early, work alone and use stimulants).

This disproportionately male workforce has implications for how sexuality manifests in em society. First, because the reproductive impetus of sex is erased in the world of ems, sexual desire will be seen as less compelling. In turn, this could lead to “mind tweaks” that have the effect of castration, .. [or] greater cultural acceptance of non-hetero forms of sexual orientation, or software that make ems of the same sex appear as the opposite sex. .. [or] paying professional em sex workers.

It is important to note that Hanson does not argue that this is the way em society should look, rather how he imagines it will look by extrapolating what he identifies in society both today and through the arc of human history. So, if we can identify certain male traits that stretch back to the beginning of the agricultural era, we should also be able to locate those same traits in the em era. What might be missing in this methodology is a full application of exponential change. In other words, Hanson rightly notes how population, technology and so forth have evolved with increasing speed throughout history, yet does not apply that same speed of evolution to attitudes towards gender. Given how much perceptions around gender have changed in the past 50 years, if we accept a pattern of exponential development in such perceptions, the minds that are scanned for first generation ems will likely have a very different attitude toward gender than today, let alone thousands of years past. (more)

Obviously Gelfer doesn’t like something about the scenario I describe, but he doesn’t identify anything particular he disagrees with, nor offer any particular arguments. His only contrary argument is a maximally abstract “exponential” trend, whereby everything gets better. Therefore gender relations must get better, therefore any future gender relations feature that he or anyone doesn’t like is doubtful.

For the record, I didn’t say the em world selects for “competitive types”, that people would work alone, or that there’d be more men. Instead I have a whole section on a likely “Gender Imbalance”:

Although it is hard to predict which gender will be more in demand in the em world, one gender might end up supplying proportionally more workers than the other.

Though I doubt Gelfer would be any happier with a future with many more women than men; any big imbalance probably sounds worse to most people, and thus can’t happen according to the better future gender relations principle.

I suspect Gelfer’s errors about my book are consistently in the direction of incorrectly attributing features to the scenario that he likes less. People usually paint the future as a heaven or a hell, and so if my scenario isn’t Gelfer’s heaven, it must be his hell.


The Good-Near Bad-Far Bias

“Why am I late home from work? Terrible traffic slowed everyone down.”
“Why am I early home from work? I wanted to spend more time with you.”

We try to make ourselves look good. So we try to associate closely with good events, and distance ourselves more from bad events. Specifically, we prefer to explain bad events near us in terms of distant causes over which we had little influence, but explain good events near us in terms of our good long-lasting features, such as our authenticity, loyalty, creativity, or intelligence.

For example, managers are reluctant to adopt prediction markets for project deadlines, because it takes away their favorite excuse for failure: “The thing that delayed this project was a rare disaster that came out of left field; no one could have seen it coming.” Note that distant causes work best as excuses if they are rare and unpredictable. Otherwise there comes the question of why one didn’t do more to prevent or mitigate the distant influence.

As another example, when a class of people is doing poorly and we are reluctant to blame them, we prefer explanations far from their choices. So instead of blaming their self-control, laziness, or intelligence, we prefer to blame capitalism, general malaise, discrimination, foreigners, or automation. Recent over-emphasis on a sudden burst of automation as an unemployment cause comes in part from a perfect storm of not wanting to blame low-skilled workers, and wanting to brag about the technical prowess of groups we feel associated with.

Why don’t we blame close rivals more often, instead of distant causes? We do blame rivals sometimes, but if they retaliate by blaming us we risk ending up associated with a lot of blame. Better to keep the peace and both blame outsiders.
