Tag Archives: NearFar

Abstract Views Are Coming

Two years ago I predicted that the future will eventually take a long view:

If competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate. … The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the “Long View Day” when this starts one of the most important future dates.

Today I predict that the future will also eventually take a more abstract view, also to its benefit. Let me explain.

Recently I posted on how while we don’t have a world government today, we do now have a close substitute: a strong culture of oft-talking world elites, that can and does successfully pressure authorities everywhere to adopt their consensus regulation opinions. This is much like how in forager bands, the prestigious would gossip to form a consensus plan, which everyone would follow.

This “world forager elite”, as I called them, includes experts, but often overrules them in their areas of expertise. And on the many topics for which this elite doesn’t bother to form a consensus, other institutions and powers are allowed to make key decisions.

The quality of their judgements depends on how able and knowledgeable is this global elite, and on how long and carefully they deliberate on each topic. And these parameters are in turn influenced by the types of topics on which they choose to have opinions, and on how thinly they spread themselves across the many topics they consider.

And this is where abstraction has great potential. For example, in order of increasing generality these elites could form opinions on the particular kinds of straws offered in a particular London restaurant, or on plastic straws in general at all restaurants, or on all kinds of straws used everywhere, or on how to set taxes and subsidies for plastic and paper for all food use, or on how to set policy on all plastic and paper taxes and subsidies.

The higher they go up this abstraction ladder, the more that elites can economize on their efforts, to deal with many issues all at once. Yes, it can take more work to reason more abstractly, and there can be more ways to go wrong. And it often helps to first think about concrete examples, and then try to generalize to more abstract conclusions. But abstraction also helps to avoid biases that push us toward arbitrarily treating fundamentally similar things differently. And abstraction can better encompass indirect effects often neglected by concrete analysis. It is certainly my long experience as a social scientist and intellectual that abstraction often pays huge dividends.

So why don’t elites reason more abstractly now? Because they are mostly amateurs who do not understand most topics well enough to abstract them. And because they tend to focus on topics with strong moral colors, for which there is often an expectation of “automatic norms”, wherein we are just supposed to intuit norms without too much explicit analysis.

In the future, I expect us to have smarter better-trained better-selected elites (such as ems), who thus know more basics of more different fields, and are more able to reason abstractly about them. This has been the long term historical trend. Instead of thinking concrete issues through for themselves, and then overruling experts when they disagree, elites are more likely to focus on how to manage experts and give them better incentives, so they can instead trust expert judgements. This should produce better judgements about what to regulate how, and what to leave alone how.

The future will take longer, and more abstract, views. And thus make more sensible decisions. Finally.

Specialized Innovation Is Easier

Consider a few things we know about task specialization and innovation: Larger cities and larger firms both have more specialization and more (i.e., faster) innovation. More global industries also have both more specialization and innovation. And across the great eras of human history (animal, forager, farmer, industry), each era has brought more specialization, and also faster rates of innovation.

Here’s a simple explanation for (part of) this widely observed correlation: It is easier to create tools and procedures to improve tasks the more detail you know about them, and the less that task context varies across the task category. (It is also easier to fully automate such tasks; human level generality is very hard.)

For example, it seems harder to find a way to make a 1% improvement in a generic truck, designed to take any type or size of stuff any distance over any type of road, in any type of weather, relative to a very specific type of truck, such as for carrying animals, oil, cars, ice cream, etc. It gets even easier if you specialize to particular distances, roads, weather, etc. Partly this is because most ways to improve the generic truck will also apply to specialized trucks, but the reverse isn’t true.

This might sound obvious, but note that this is not our usual explanation for these correlations in each context. We usually say that cities are more innovative because they allow more chance interactions that generate ideas, not because they are more specialized. We say larger firms are more innovative because they have larger market shares, and so internalize more of the gains from innovation. We say more global industries are more capital intensive, and capital innovates faster. And we say that it is just a coincidence that over time we have both specialized more and invented better ways to innovate.

My simpler more unified explanation suggests that, more often than we have previously realized, specialization is the key to innovation. So we should look more to finding better ways to specialize to promote future innovation. Such as less product variety and more remote work.

Added 25Sep: A relevant quote:

As Frank Knight once expressed it, the fundamental point about the division of labour is that it is also a system for increasing the efficiency of learning and thus the growth of knowledge.

Growth Is Change. So Is Death.

The very readable book The Wizard and the Prophet tells the story of environmental prophet William Vogt investigating the apocalypse-level deaths of guano-making birds near Peru. When he discovered the cause in the El Niño weather cycle, his policy recommendations were to do nothing to mitigate this natural cause; he instead railed against many much smaller human influences, demanding their reversal. A few years later his classic 1948 screed Road To Survival, which contained pretty much all the standard environmental advice and concepts used today, continued to warn against any but small human-caused changes to the environment, while remaining largely indifferent to even huge natural changes.

I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
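To make the contrast concrete, here is a minimal simulation sketch of the two kinds of parameters described above; the function names and the particular pull, trend, and noise values are purely illustrative assumptions of mine, not anything from the original argument:

```python
import random

def mean_reverting(steps, pull=0.1, noise=1.0):
    """A parameter pulled back toward 0 each step: extremes are temporary,
    so large deviations justify extra caution while small ones do not."""
    x, path = 0.0, []
    for _ in range(steps):
        x += -pull * x + random.gauss(0, noise)  # revert toward the mean, plus noise
        path.append(x)
    return path

def trend_plus_walk(steps, trend=0.05, noise=1.0):
    """A parameter that drifts: a basic trend plus a random walk,
    so many small changes accumulate into a large long-term change."""
    x, path = 0.0, []
    for _ in range(steps):
        x += trend + random.gauss(0, noise)  # no pull back; changes accumulate
        path.append(x)
    return path

random.seed(0)
# The mean-reverting path stays near 0; the drifting path wanders far from it.
print(mean_reverting(1000)[-1])
print(trend_plus_walk(1000)[-1])
```

The point of the sketch: for the first kind of parameter, today's value tells you roughly where it will be later, so only extreme values merit worry; for the second, the small per-step changes are exactly how big long-term change happens.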

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

People with a sharp time horizon of caring should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

But of course few are very good at resolving their near versus far incoherences. And so the positions people take end up depending a lot on how they first framed the key issues, as in terms of short or long term changes.

Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of his posts he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure, it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is in most need. But you might trust it more to give you financial returns on your financial investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, that Christiano supports to some degree, is that future AI will be smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.

Meaning is Easy to Find, Hard to Justify

One of the strangest questions I get when giving talks on Age of Em is a variation on this:

How can ems find enough meaning in their lives to get up and go to work everyday, instead of committing suicide?

As the vast majority of people in most every society do not commit suicide, and manage to get up for work on most workdays, why would anyone expect this to be a huge problem in a random new society?

Even stranger is that I mostly get this question from smart sincere college students who are doing well at school. And I also hear that such students often complain that they do not know how to motivate themselves to do many things that they “want” to do. I interpret this all as resulting from overly far thinking on meaning. Let me explain.

If we compare happiness to meaning, then happiness tends to be an evaluation of a more local situation, while meaning tends to be an evaluation of a more global situation. You are happy about this moment, but you have meaning regarding your life.

Now you can do either of these evaluations in a near or a far mode. That is, you can just ask yourself for your intuitions on how you feel about your life, without over-thinking it, or you can reason abstractly and idealistically about what sort of meaning you should have or can justify having. In that latter more abstract mode, smart sincere people can be stumped. How can they justify having meaning in a world where there is so much randomness and suffering, and that is so far from being a heaven?

Of course in a sense, heaven is an incoherent concept. We have so many random idealistic constraints on what heaven should be like that it isn’t clear that anything can satisfy them all. For example, we may want to be the hero of a dramatic story, even if we know that characters in such stories wish that they could live in more peaceful worlds.

Idealistic young people have such problems in spades, because they haven’t lived long enough to see how unreasonable are their many idealistic demands. And smarter people can think up even more such demands.

But the basic fact is that most everyone in most every society does in fact find meaning in their lives, even if they don’t know how to justify it. Thus I can be pretty confident that ems also find meaning in their lives.

Here are some more random facts about meaning, drawn from my revised Age of Em, out next April.

Today, individuals who earn higher wages tend to have both more happiness and a stronger sense of purpose, and this sense of purpose seems to cause higher wages. People with a stronger sense of purpose also tend to live longer. Nations that are richer tend to have more happiness but less meaning in life, in part because they have less religion. .. Types of meaning that people get from work today include authenticity, agency, self-worth, purpose, belonging, and transcendence.

Happiness and meaning have different implications for behavior, and are sometimes at odds. That is, activities that raise happiness often lower meaning, and vice versa. For example, people with meaning think more about the future, while happy people focus on the here and now. People with meaning tend to be givers who help others, while happy people tend to be takers who are helped by others. Being a parent and spending time with loved ones gives meaning, but spending time with friends makes one happy.

Affirming one’s identity and expressing oneself increase meaning but not happiness. People with more struggles, problems, and stresses have more meaning, but are less happy. Happiness but not meaning predicts a satisfaction of desires, such as for health and money, and more frequent good relative to bad feelings. Older people gain meaning by giving advice to younger people. We gain more meaning when we follow our gut feelings rather than thinking abstractly about our situations.

My weak guess is that productivity tends to predict meaning more strongly than happiness. If this is correct, it suggests that, all else equal, ems will tend to think more about the future, more be givers who help others, spend more time with loved ones and less with friends, more affirm their identity and express themselves, give more advice, and follow gut feelings more. But they will also have more struggles and less often have their desires satisfied.

Future Gender Is Far

What’s the worst systematic bias in thinking on the future? My guess: too much abstraction. The far vs. near mode distinction was first noticed in future thinking, because the effect is so big there.

I posted a few weeks ago that the problem with the word “posthuman” is that it assumes our descendants will differ somehow in a way to make them “other,” without specifying any particular change to do that. It abstracts from particular changes to just embody the abstract idea of othering-change. And I’ve previously noted there are taboos against assuming that something we see as a problem won’t be solved, and even against presenting such a problem without proposing a solution.

In this post let me point out that a related problem plagues future gender relation thoughts. While many hope that future gender relations will be “better”, most aren’t at all clear on what specifically that entails. For some, all differing behaviors and expectations about genders should disappear, while for others only “legitimate” differences remain, with little agreement on which are legitimate. This makes it hard to describe any concrete future of gender relations without violating our taboo against failing to solve problems.

For example, at The Good Men Project, Joseph Gelfer discusses the Age of Em. He seems to like or respect the book overall:

Fascinating exploration of what the world may look like once large numbers of computer-based brain emulations are a reality.

But he less likes what he reads on gender:

Hanson sees a future where an em workforce mirrors the most useful and productive forms of workforce that we experience today. .. likely choose [to scan] workaholic competitive types. Because such types tend to be male, Hanson imagines an em workforce that is disproportionately male (these workers also tend to rise early, work alone and use stimulants).

This disproportionately male workforce has implications for how sexuality manifests in em society. First, because the reproductive impetus of sex is erased in the world of ems, sexual desire will be seen as less compelling. In turn, this could lead to “mind tweaks” that have the effect of castration, .. [or] greater cultural acceptance of non-hetero forms of sexual orientation, or software that make ems of the same sex appear as the opposite sex. .. [or] paying professional em sex workers.

It is important to note that Hanson does not argue that this is the way em society should look, rather how he imagines it will look by extrapolating what he identifies in society both today and through the arc of human history. So, if we can identify certain male traits that stretch back to the beginning of the agricultural era, we should also be able to locate those same traits in the em era. What might be missing in this methodology is a full application of exponential change. In other words, Hanson rightly notes how population, technology and so forth have evolved with increasing speed throughout history, yet does not apply that same speed of evolution to attitudes towards gender. Given how much perceptions around gender have changed in the past 50 years, if we accept a pattern of exponential development in such perceptions, the minds that are scanned for first generation ems will likely have a very different attitude toward gender than today, let alone thousands of years past. (more)

Obviously Gelfer doesn’t like something about the scenario I describe, but he doesn’t identify anything particular he disagrees with, nor offer any particular arguments. His only contrary argument is a maximally abstract “exponential” trend, whereby everything gets better. Therefore gender relations must get better, therefore any future gender relations feature that he or anyone doesn’t like is doubtful.

For the record, I didn’t say the em world selects for “competitive types”, that people would work alone, or that there’d be more men. Instead I have a whole section on a likely “Gender Imbalance”:

Although it is hard to predict which gender will be more in demand in the em world, one gender might end up supplying proportionally more workers than the other.

Though I doubt Gelfer is any happier with a future with many more women than men; any big imbalance probably sounds worse to most people, and thus can’t happen according to the better future gender relations principle.

I suspect Gelfer’s errors about my book are consistently in the direction of incorrectly attributing features to the scenario that he likes less. People usually paint the future as a heaven or a hell, and so if my scenario isn’t Gelfer’s heaven, it must be his hell.

The Good-Near Bad-Far Bias

“Why am I late home from work? Terrible traffic slowed everyone down.”
“Why am I early home from work? I wanted to spend more time with you.”

We try to make ourselves look good. So we try to associate closely with good events, and distance ourselves more from bad events. Specifically, we prefer to explain bad events near us in terms of distant causes over which we had little influence, but explain good events near us in terms of our good long-lasting features, such as our authenticity, loyalty, creativity, or intelligence.

For example, managers are reluctant to adopt prediction markets for project deadlines, because it takes away their favorite excuse for failure: “The thing that delayed this project was a rare disaster that came out of left field; no one could have seen it coming.” Note that distant causes work best as excuses if they are rare and unpredictable. Otherwise there comes the question of why one didn’t do more to prevent or mitigate the distant influence.

As another example, when a class of people is doing poorly and we are reluctant to blame them, we prefer explanations far from their choices. So instead of blaming their self-control, laziness, or intelligence, we prefer to blame capitalism, general malaise, discrimination, foreigners, or automation. Recent over-emphasis on a sudden burst of automation as an unemployment cause comes in part from a perfect storm of not wanting to blame low-skilled workers, and wanting to brag about the technical prowess of groups we feel associated with.

Why don’t we blame close rivals more often, instead of distant causes? We do blame rivals sometimes, but if they retaliate by blaming us we risk ending up associated with a lot of blame. Better to keep the peace and both blame outsiders.

My Play

In social play, an animal again waits until safe and satisfied, and feels pleasure from a large variety of safe behavior within a distinct space and time. The difference is that now they explore behavior that interacts with other animals, seeking equilibria that adjust well to changes in other animals’ behavior. (more)

Over the course of their lives Kahneman and Tversky don’t seem to have actually made many big decisions. The major trajectories of their lives were determined by historical events, random coincidences, their own psychological needs and irresistible impulsions. .. Their lives weren’t so much shaped by decisions as by rapture. They were held rapt by each other’s minds. (more)

When tested in national surveys against such seemingly crucial factors as intelligence, ability, and salary, level of motivation proves to be a more significant component in predicting career success. While level of motivation is highly correlated with success, importantly, the source of motivation varies greatly among individuals and is unrelated to success. (more)

In recent posts I said that play is ancient and robust, and I outlined what play consists of. I claimed that play is a powerful concept, but I haven’t supported that claim much. Today, I’ll consider some personal examples.

As a kid I was a severe nerd. I was beaten up sometimes, and for years spent each recess being chased around the school yard. This made me quite cautious and defensive socially. Later I was terrified of girls and acted cautiously toward them too, which they didn’t take as a positive sign. In college I gave up on girls for a while, and then was surprised to find women attracted by my chatting sincerely about physics at the physics club.

Being good at school-work, I was more willing to take chances there, and focused more on what interested me. In college when I learned that the second two years of physics covered the same material as the first two years, just with more math, I stopped doing homework and played with the equations instead, and aced the exams. I went to grad school in philosophy of science because that interested me at the time, and then switched back to physics because I’d found good enough answers to my philosophy questions.

I left school for silicon valley when topics out there sounded more interesting, and a few years later switched to only working 30 hours a week so I could spend more time studying what I wanted. I started a PhD program at age 34, with two kids aged 0 and 2, and allowed myself to dabble in many topics not on the shortest path to tenure. Post tenure I’ve paid even less attention to the usual career rewards. I choose as my first book topic not the most marketable, impressive, or important topic, but the one that would most suck me in with fascinating detail. (I’d heard half the authors with a book contract don’t finish a book.)

So I must admit that much of my personal success in life has resulted less from econ-style conscious calculation, and more from play. Feeling safe enough to move into play mode freed me enough from anxiety to get things done. And even though my goals in more playful modes tended more to cuteness, curiosity, and glory, my acts there better achieved my long term goals than has conscious planning toward such ends. Yes, I did moderate my playful urges based on conscious thought, and that probably helped overall. Even so, I must admit that my personal experience raises doubts about the value of conscious planning.

My experience is somewhat unusual, but I still see play helping a lot in the successes of those I know and respect. While conscious planning can at times be important, what tends to matter more is finding a strong motivation, any strong motivation, to really get into whatever it is you are doing. And to feel comfortable enough to just explore even if none of your options seem especially promising and you face real career and resource pressures.

Playful motives are near and myopic but strong, while conscious planning can be accurate but far. Near beats far it seems. I’ll continue to ponder play, and hopefully find more to say.

Seduced by Tech

We think about tech differently when we imagine it beforehand, versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even main difference.

Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allowing fewer exceptions to our moral and value principles, and we less allow messy details to reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech, compared to opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced, via vivid exposure to attractive details, into eventually setting those concerns aside. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear if it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about if transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and articulate defense of their value and consciousness. These details would move most people to see ems in a much more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage. Since my side started out 3x smaller I gained a 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
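A quick check of the vote arithmetic above (the percentages are from the debate as reported; the “3x” in the text is a rough rounding, since the exact starting ratio works out to about 2.6):

```python
# Audience vote shares (percent) before and after the debate.
pro_before, con_before = 22.73, 60.23
pro_after, con_after = 27.27, 64.77

# Both sides gained the same number of percentage points.
pro_gain = pro_after - pro_before   # 4.54 points
con_gain = con_after - con_before   # 4.54 points

# But the pro side started smaller, so its *fractional* gain is larger.
pro_frac = pro_gain / pro_before    # ~20% relative increase
con_frac = con_gain / con_before    # ~7.5% relative increase

# Since the point gains are equal, the ratio of fractional gains just
# equals the ratio of starting sizes: con_before / pro_before ~ 2.6.
print(round(pro_gain, 2), round(con_gain, 2))
print(round(pro_frac / con_frac, 2))
```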

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.


Student Status Puzzle

Grad students vary in their research autonomy. Some students are very willing to ask for advice and to listen to it carefully, while others put a high priority on generating and pursuing their own research ideas their own way. This varies with personality, in that more independent people pick more independent strategies. It varies over time, in that students tend to start out deferring at first, and then later in their career switch to making more independent choices. It also varies by topic; students defer more in more technical topics, and where topic choices need more supporting infrastructure, such as with lab experiments. It also varies by level of abstraction; students defer more on how to pursue a project than on which project ideas to pursue.

Many of these variations seem roughly explained by near-far theory, in that people defer more when near, and less when far. These variations seem at least plausibly justifiable, though doubts make sense too. Another kind of variation is more puzzling, however: students at top schools seem more deferential than those at lower rank schools.

Top students expect to get lots of advice, and they take it to heart. In contrast, students at lower ranked schools seem determined to generate their own research ideas from deep in their own “soul”. This happens not only for picking a Ph.D. thesis, but even just for picking topics of research papers assigned in classes. Students seem as averse to getting research topic advice as they would be to advice on with whom to fall in love. Not only are they wary of getting research ideas from professors, they even fear that reading academic journals will pollute the purity of their true vision. It seems a moral matter to them.

Of course any one student might be correct that they have a special insight into what topics are neglected by their local professors. But the overall pattern here seems perverse; people who hardly understand the basics of a field see themselves as better qualified to identify feasible interesting research topics than those nearby with higher status, and who have been in the fields for decades.

One reason may be overconfidence; students think their profs deserve more than they do to be at a lower rank school, and so estimate a smaller quality difference between themselves and their profs. Further supporting this, students also seem to accept the relative status ranking of profs at their own school, and so focus most of their attention on the locally top status profs. It is as if each student thinks that they personally have so far been assigned too low a status, but thinks most others have been correctly assigned.

Another reason may be like our preferring potential to achievement; students try to fulfill the heroic researcher stories they’ve heard, wherein researchers get much credit for embracing ideas early that others come to respect later. Which can make some sense. But these students are trying to do this way too early in their career, and they go way too far with it. Being smarter and knowing more, students at top schools understand this better.
