Tag Archives: NearFar

Near-Far Work Continues

I haven’t posted as much on near-far theory (= “construal level theory”) lately, but that’s more because my interests have wandered; research progress has continued. Here are four recent papers.

People who use more abstract language seem more powerful:

Power can be gained through appearances: People who exhibit behavioral signals of power are often treated in a way that allows them to actually achieve such power. In the current article, we examine power signals within interpersonal communication, exploring whether use of concrete versus abstract language is seen as a signal of power. Because power activates abstraction, perceivers may expect higher power individuals to speak more abstractly and therefore will infer that speakers who use more abstract language have a higher degree of power. Across a variety of contexts and conversational subjects in 7 experiments, participants perceived respondents as more powerful when they used more abstract language (vs. more concrete language). Abstract language use appears to affect perceived power because it seems to reflect both a willingness to judge and a general style of abstract thinking. (more)

Sounds evoke far mode when they are novel, slow, and reverberate more:

Psychological distance and abstractness primes have been shown to increase one’s level of construal. We tested the idea that auditory cues which are related to distance and abstractness (vs. proximity and concreteness) trigger abstract (vs. concrete) construal. Participants listened to musical sounds that varied in reverberation, novelty of harmonic modulation, and metrical segmentation. In line with the hypothesis, distance/abstractness cues in the sounds instigated the formation of broader categories, increased the preference for global as compared to local aspects of visual patterns, and caused participants to put more weight on aggregated than on individualized product evaluations. The relative influence of distance/abstractness cues in sounds, as well as broader implications of the findings for basic research and applied settings, is discussed. (more)

Employees want concrete feedback from direct leaders and abstract vision from higher leaders:

Three studies tested the hypothesis, derived from construal-level theory, that hierarchical distance between leaders and followers moderates the effectiveness of leader behaviors such that abstract behaviors produce more positive outcomes when enacted across large hierarchical distances, whereas concrete behaviors produce more positive outcomes when enacted across small hierarchical distances. In Study 1 (N = 2,206 employees of a telecommunication organization), job satisfaction was higher when direct supervisors provided employees with concrete feedback and hierarchically distant leaders shared with them their abstract vision rather than vice versa. Study 2 orthogonally crossed hierarchical distances with communication type, operationalized as articulating abstract values versus sharing a detailed story exemplifying the same values; construal misfit mediated the interactive effects of hierarchical distance and communication type on organizational commitment and social bonding. Study 3 similarly manipulated hierarchical distances and communication type, operationalized as concrete versus abstract calls for action in the context of a severe professional crisis. Group commitment and participation in collective action were higher when a hierarchically proximate leader communicated a concrete call for action and a hierarchically distant leader communicated an abstract call for action rather than vice versa. These findings highlight construal fit’s positive consequences for individuals and organizations. (more)

Tasks look easier when they are far away:

Psychological distance can reduce the subjective experience of difficulty caused by task complexity and task anxiety. Four experiments were conducted to test several related hypotheses. Psychological distance was altered by activating a construal mind-set and by varying bodily distance from a given task. Activating an abstract mind-set reduced the feeling of difficulty. A direct manipulation of distance from the task produced the same effect: participants found the task to be less difficult when they distanced themselves from the task by leaning back in their seats. The experiments not only identify psychological distance as a hitherto unexplored but ubiquitous determinant of task difficulty but also identify bodily distance as an antecedent of psychological distance. (more)


Status Bid Coalitions

Katja Grace and I talked a bit recently about a possible “big scope status bias”, and she wrote a post on one of the ideas we discussed:

I’m not convinced that more abstract things are more statusful in general, or that it would be surprising if such a trend were fairly imprecise. However supposing they are and it was, here is an explanation for why some especially abstract things seem silly. … Abstract rethinking of common concepts is easily mistaken for questioning basic assumptions. Abstract questioning of basic assumptions really is questioning basic assumptions. And questioning basic assumptions has a strong surface resemblance to not knowing about basic truths, or at least not having a strong gut feeling that they are true. (more)

Yes, people who question basic assumptions can be framed as silly for not understanding basic things. But I think a similarly strong effect is that people often just don’t like reconsidering basic assumptions. Once you’ve used certain assumptions and matching concepts for a long time, your thinking comes to rely on them. Not only would you lose a lot of that investment if your assumption was wrong, but it becomes mentally hard to even consider the possibility. A third strong effect, I think, is one I mentioned in my previous post:

It is harder to reason well about big scope choices, which is part of why it impresses to do that well. … Some topics will be so abstract that very few can deal well with them, or even evaluate the dealings of others. So those few people will tend more to be on their own, and not get much praise from others. (more)

Reasoning abstractly in a way that seems to question basic assumptions is often seen as a bid for status. As with most such bids, observers have to decide whether to accept or oppose that bid. Observers are tempted to reject it, not only because they don’t like others to rise in status, but also because they don’t like having to reconsider basic assumptions, and because it is so tempting to reject by ridicule, via insinuating that the bidder is stupid and silly.

But while these temptations can be strong, observers must also consider coalition politics – how many allies, and how strong, the bidder can bring into play. If a high status field like physics brings broad unified support to the abstract reasoning, people will mostly back down and accept the abstract status bid. But if only a few supporters can be found, with only modest status, the temptation to ridicule is likely to win out. Philosophers are often on the borderline here, with enough status to intimidate many, but not enough to intimidate high status folks like physicists, who are more tempted to ridicule them.

Added 10a: This helps explain the puzzle I engaged in Too Much Consulting? When managers want to push changes that seem to question basic firm assumptions, they need especially strong high status support to resist the ridicule response. So they hire prestigious management consultants.


Big Scope Status Bias

Some data points:

  1. Many incoming college freshmen like “international studies” or “international business.” Far fewer like local studies or local business. Yet there will be more jobs in the latter areas than the former.
  2. The media discusses national and international politics far more than local politics, yet most of the “news you can use” is local.
  3. Our economics department once estimated there’d be substantial demand for a “managerial economics” major. It would teach basically the same stuff as an economics major, but attract students because of the word “managerial.”
  4. Within management, reorganization is usually higher status than managing within existing structures.
  5. The ratio of students who do science majors relative to engineering majors is much larger than the ratio of jobs in those areas.
  6. Within science, students tend to prefer “basic” sciences like particle physics to more “applied” sciences like geology or material science, relative to the ratio of jobs in such areas.
  7. Compared to designing things from scratch, there is far more work out there maintaining, repairing, and making minor modifications to devices and software. Yet engineering and software schools focus mainly on designing things from scratch.
  8. Within engineering, designing products is higher status than designing the processes that manufacture those products.
  9. Designing new categories of products is seen as higher status than new products within existing categories.
  10. Even when designing from scratch, most real work is testing, honing, and debugging a basic idea. Yet in school the focus is more on creating the basic idea.
  11. There seems to be an overemphasis at school on designing tools that may be useful for other design work, relative to using tools to design things of more direct value.

Do these trends have something in common? My guess: we see wider-scope choices as higher status, all else equal. That is, things associated with choices that we think will influence and constrain many other choices are seen as higher status than things associated with those other more constrained choices. For example, we think managers constrain subordinates, world policy constrains local policy, physics constrains geology, product designs constrain product maintenance, and so on. Yes reverse constraints also happen, but we think those happen less often.

The ability to control the choices of others is a kind of power, and power has long been seen as a basis for status. There may also be a far-view heuristic at work here, i.e., where choices that evoke a far mental view tend to be seen as high status. After all, power does tend to evoke a far view.

A lesson here seems to be that while it can raise your status to be associated with big scope choices, you should expect a lot of competition for that status, and a relative neglect of smaller scope choices. That is, more people may major in science, but there are more jobs in engineering. You might impress people by focusing on creating designs in school, but you are likely to spend your life maintaining pre-existing designs. If you want to get stuff done instead of gaining status, you should focus on smaller scope choices.

Now in my life I’ve spent a lot of time trying to reconsider basic big scope choices. For example, I’ve studied foundations of quantum mechanics, and proposed a new form of governance. And I’ve often thought of such topics as neglected. So how can I reconcile such views with the apparent lesson of this post?

One obvious reconciliation is that I’ve just been wrong, having succumbed to the big scope status bias.

Another possibility is that big scope topics tend more to be public goods, where people free-ride on the efforts of others. It is easier for a person or group to own the gains from better understanding smaller scope topics, and thus have a strong incentive to deal with them. If so, there would be positive externalities from progress on such topics, to counter the negative externalities from status and signaling. I think this explanation has some truth, but only some.

A third possibility is that it is harder to reason well about big scope choices, which is part of why it impresses to do that well. But if good reasoning is harder as the topic gets more abstract, there should be fewer people who can handle such topics. Some topics will be so abstract that very few can deal well with them, or even evaluate the dealings of others. So those few people will tend more to be on their own, and not get much praise from others.

Are there more possibilities to consider?


Disagreement Is Far

Yet more evidence that it is far mental modes that cause disagreement:

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather that provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. (more; paper; HT Elliot Olds)

The question “why” evokes a far mode, while the question “how” evokes a near mode.


Fixing Academia Via Prediction Markets

When I first got into prediction markets twenty five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test the bet-on theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners are paid, and a new market is set up for a new test of the same question. Somewhere along the line private hedge funds would also pay for academic work in order to learn where they should bet.

That was the summary; here are some critiques. First, people willing to bet on theories are not a good source of revenue to pay for research. There aren’t many of them and they should in general be subsidized not taxed. You’d have to legally prohibit other markets to bet on these without the tax, and even then you’d get few takers.

Second, Alexander says to subsidize markets the same way they’d be taxed, by adding money to the betting pot. But while this can work fine to cancel the penalty imposed by a tax, it does not offer an additional incentive to learn about the question. Any net subsidy could be taken by anyone who put money in the pot, regardless of their info efforts. As I’ve discussed often before, the right way to subsidize info efforts for a speculative market is to subsidize a market maker to have a low bid-ask spread.
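The market-maker subsidy can be made concrete. Below is a minimal sketch of a subsidized automated market maker using a logarithmic market scoring rule, on a binary question; the class name and parameter values are illustrative, not from the post. The key property is that the patron’s worst-case loss is bounded in advance at b·ln(n) for an n-outcome question, and in exchange bettors can always trade at a posted price, which is exactly the low-spread subsidy described above.

```python
import math

class LMSRMarketMaker:
    """Subsidized automated market maker (logarithmic market scoring
    rule). The patron's worst-case loss is b*ln(n) for an n-outcome
    question, so the subsidy amount is fixed in advance."""

    def __init__(self, b, n_outcomes):
        self.b = b                   # liquidity/subsidy parameter
        self.q = [0.0] * n_outcomes  # shares sold of each outcome

    def _cost(self, q):
        # Cost function: b * ln(sum of exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        """Current probability-like price of outcome i."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[i] / sum(exps)

    def buy(self, i, shares):
        """Charge a bettor for `shares` of outcome i; each share
        pays $1 if outcome i is later judged true."""
        new_q = list(self.q)
        new_q[i] += shares
        charge = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return charge

mm = LMSRMarketMaker(b=100.0, n_outcomes=2)
p0 = mm.price(0)           # 0.5 before any trades
charge = mm.buy(0, 50.0)   # an informed bettor pushes the price up
p1 = mm.price(0)           # now above 0.5
```

The point of the design: any bettor who moves prices toward the truth profits at the maker’s expense, so the patron’s bounded loss is precisely a payment for information about the question.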

Third, Alexander’s plan to have bettors vote to agree on a question tester seems quite unworkable to me. It would be expensive, rarely satisfy both sides, and be easy to game by buying up bets just before the vote. More important, most interesting theories just don’t have very direct ways to test them, and most tests are of whole bundles of theories, not just one theory. Fourth, for most claim tests there is no obvious definition of “everyone in the field,” nor is it obvious that everyone should have an opinion on those tests. Forcing a large group to all express a public opinion seems a huge cost with unclear benefits.

OK, now let me review my proposal, the result of twenty five years of thinking about this. The market maker subsidy is a very general and robust mechanism by which research patrons can pay for accurate info on specified questions, at least when answers to those questions will eventually be known. It allows patrons to vary subsidies by questions, answers, time, and conditions.

Of course this approach does require that such markets be legal, and it doesn’t do well at the main academic function of credentialing some folks as having the impressive academic-style mental features with which others like to associate. So only the customers of academia who mainly want accurate info would want to pay for this. And alas such customers seem rare today.

For research patrons using this market-maker subsidy mechanism, their main issues are about which questions to subsidize how much when. One issue is topic. For example, how much does particle physics matter relative to anthropology? This mostly seems to be a matter of patron taste, though if the issue were what topics should be researched to best promote economic growth, decision markets might be used to set priorities.
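A decision market for setting priorities can be sketched very simply: run one conditional market per option, forecasting the outcome of interest (say, growth) given that option is chosen; fund the best-forecast option and call off bets on the rest. The function name and the topic labels below are mine, purely for illustration.

```python
def pick_by_decision_market(conditional_forecasts):
    """Decision-market sketch: each option has a conditional market
    forecasting the outcome given that option is chosen. Choose the
    option with the best conditional forecast; bets in the markets
    for unchosen options are called off and stakes refunded."""
    chosen = max(conditional_forecasts, key=conditional_forecasts.get)
    called_off = sorted(k for k in conditional_forecasts if k != chosen)
    return chosen, called_off

# hypothetical conditional growth forecasts, one market per topic
forecasts = {"fund_physics": 0.031, "fund_anthropology": 0.024}
chosen, refunded = pick_by_decision_market(forecasts)
# chosen == "fund_physics"; bets on "fund_anthropology" are refunded
```

Calling off the unchosen markets matters: bettors are only held to forecasts about the world that actually happens, so prices stay interpretable as conditional estimates.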

The biggest issue, I think, is abstraction vs. concreteness. At one extreme one can ask very specific questions like what will be the result of this very specific experiment or future empirical measurement. At the other extreme, one can ask very abstract questions like “do grapes cure cancer” or “is the universe infinite”.

Very specific questions offer bettors the most protection against corruption in the judging process. Bettors need worry less about how a very specific question will be interpreted. However, subsidies of specific questions also target specific researchers pretty directly for funding. For example, subsidizing bets on the results of a very specific experiment mainly subsidizes the people doing that experiment. Also, since the interest of research patrons in very specific questions mainly results from their interest in more general questions, patrons should prefer to target the more general questions of direct interest to them.

Fortunately, compared to other areas where one might apply prediction markets, academia offers especially high hopes for using abstract questions. This is because academia tends to house society’s most abstract conversations. That is, academia specializes in talking about abstract topics in ways that let answers be consistent and comparable across wide scopes of time, space, and discipline. This offers hope that one could often simply bet on the long term academic consensus on a question.

That is, one can plausibly just directly express a claim in direct and clear abstract language, and then bet on what the consensus will be on that claim in a century or two, if in fact there is any strong consensus on that claim then. Today we have a strong academic consensus on many claims that were hotly debated centuries ago. And we have good reasons to believe that this process of intellectual progress will continue long into the future.

Of course future consensus is hardly guaranteed. There are many past debates that we’d still find hard to judge today. But for research patrons interested in creating accurate info, the lack of a future consensus would usually be a good sign that info efforts in that area were less valuable than in other areas. So by subsidizing markets that bet on future consensus conditional on such a consensus existing, patrons could more directly target their funding at topics where info will actually be found.
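The conditional settlement rule just described can be sketched in a few lines; the function and its arguments are illustrative, not a real market API. Shares pay $1 if the judged consensus matches the side held, $0 if it contradicts it, and the whole bet is called off, with stakes refunded, when judges find no strong consensus.

```python
def settle_consensus_bet(judged_consensus, side, shares, price_paid):
    """Settle one position in a market on a claim's long-term academic
    consensus. judged_consensus is 'true', 'false', or None when judges
    find no strong consensus; in that case the bet is called off and
    the stake refunded, so patron subsidies flow only to questions
    that actually get resolved."""
    if judged_consensus is None:
        return shares * price_paid       # called off: refund stake
    return shares * (1.0 if judged_consensus == side else 0.0)

# a bettor holds 10 'true' shares bought at $0.60 each
payout_win = settle_consensus_bet("true", "true", 10, 0.60)    # 10.0
payout_loss = settle_consensus_bet("false", "true", 10, 0.60)  # 0.0
payout_refund = settle_consensus_bet(None, "true", 10, 0.60)   # 6.0
```

Since the refund branch returns exactly the stake, prices in such a market estimate the probability of the claim conditional on a consensus forming, which is the quantity patrons care about.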

Large subsidies for market-makers on abstract questions would indirectly result in large subsidies on related specific questions. This is because some bettors would specialize in maintaining coherence relationships between the prices on abstract and specific questions. And this would create incentives for many specific efforts to collect info relevant to answering the many specific questions related to the fewer big abstract questions.
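One form this coherence-keeping takes can be sketched as a simple arbitrage check (the function name and threshold are mine, for illustration): when a specific claim logically implies an abstract claim, coherent prices must give the abstract claim at least the specific claim’s probability, and any violation is a riskless trade.

```python
def implication_arbitrage(p_specific, p_abstract, eps=1e-9):
    """If the specific claim implies the abstract one, coherence
    requires p_abstract >= p_specific. A violation is free money:
    buy the abstract claim and sell the specific one, locking in
    roughly the price gap per share."""
    if p_specific > p_abstract + eps:
        return ("buy_abstract_sell_specific", p_specific - p_abstract)
    return None  # prices already coherent

# hypothetical prices: specific experimental claim at 0.70,
# the abstract claim it implies trading at only 0.60
trade = implication_arbitrage(0.70, 0.60)
# trade signals buying the abstract claim and selling the specific one
```

Bettors doing such trades are the transmission channel: a subsidy on the big abstract question flows through price coherence into incentives to gather info on the many specific questions tied to it.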

Yes, we’d probably end up with some politics and corruption on who qualifies to judge later consensus on any given question – good judges should know the field of the question as well as a bit of history to help them understand what the question meant when it was created. But there’d probably be less politics and lobbying than if research patrons chose very specific questions to subsidize. And that would still probably be less politics than with today’s grant-based research funding.

Of course the real problem, the harder problem, is how to add mechanisms like this to academia in order to please the customers who want accuracy, while not detracting from or interfering too much with the other mechanisms that give the other customers of academia what they want. For example, should we subsidize high relevant prestige participants in the prediction markets, or tax those with low prestige?



Abstractly Ideal, Concretely Selfish

A new JPSP paper confirms that we are idealistic in far mode, and selfish in near mode. If you ask people for short abstract descriptions of their goals, they’ll say they have ideal goals. But if you ask them to describe in detail what it is like to be them pursuing their goals, their selfishness shines clearly through. Details:

Completing an inventory asks the respondent to take an observer’s perspective upon the self, effectively asking, “What do you look like to others?” Imagining watching a video of oneself driving a car, playing basketball, or speaking to a friend is an experience as the self-as-actor. Rating the importance of various goals also recruits the self-as-actor. Motivated to maintain a moral reputation, the self-as-actor is infused with prosocial, culturally vetted scripts.

Another way of accessing motivation is by asking people questions about their lives. Open-ended verbal responses (e.g., narratives or implicit measures) require the respondent to produce ideas, recall details, reflect upon the significance of concrete events, imagine a future, and narrate a coherent story. In effect, prompts to narrate ask respondents, “What is it like to be you?” Imagining actually driving a car, playing basketball, or speaking to a friend is an experience as the self-as-agent (McAdams, 2013). Asking people to tell about their lives also recruits the self-as-agent. Motivated by survival, the self-as-agent is selfish in nature. …

Taken together, this leads to the prediction that frames the current research: Inventory ratings, which recruit the self-as-actor, will yield moral impressions, whereas narrated descriptions, which recruit the self-as-agent, will yield the impression of selfishness. …

The motivation to behave selfishly while appearing moral gave rise to two, divergently motivated selves. The actor—the watched self— tends to be moral; the agent—the self as executor—tends to be selfish. Each self serves its own adaptive function: The actor helps people maintain inclusion in groups, whereas the agent attends to basic survival needs. Three studies support the thesis that the actor is moral and the agent is selfish. In Study 1, actors claimed their goals were equally about helping the self and others (viz., moral); agents claimed their goals were primarily about helping the self (viz., selfish). This disparity was evident in both individualist and collectivist cultures, albeit more so among individualists. Study 2 compared actors and agents’ motives to those of people role-playing highly prosocial or selfish exemplars. In content and in the impression they made upon an outside observer, actors’ motives were similar to those of the prosocial role-players, whereas agents’ motives were similar to those of the selfish role-players. In Study 3, participants claimed that their agent’s motives were the more realistic and their actor’s motives the more idealistic of the two. When asked to take on an idealistic mindset, agents became more moral; a realistic mindset made the actor more selfish. (more)


Moral Legacy Myths

Imagine that you decide that this week you’ll go to a different doctor from your usual one. Or that you’ll get a haircut from a different hairdresser. Ask yourself: by how much do you expect such actions to influence the distant future of all our descendants? Probably not much. As I argued recently, we should expect most random actions to have very little long term influence.

Now imagine that you visibly take a stand on a big moral question involving a recognizable large group. Like arguing against race-based slavery. Or defending the Muslim concept of marriage. Or refusing to eat animals. Imagine yourself taking a personal action to demonstrate your commitment to this moral stand. Now ask yourself: by how much do you expect these actions to influence distant descendants?

I’d guess that even if you think such moral actions will have only a small fractional influence on the future world, you expect them to have a much larger long term influence than doctor or haircut actions. Furthermore, I’d guess that you are much more willing to credit the big-group moral actions of folks centuries ago for influencing our world today, than you are willing to credit people who made different choices of doctors or hairdressers centuries ago.

But is this correct? When I put my social-science thinking cap on, I can’t find good reasons to expect big-group moral actions to have much stronger long term influence. For example, you might posit that moral opinions are more stable than other opinions and hence last longer. But more stable things should be harder to change by any one action, leaving the average influence about the same.

I can, however, think of a good reason to expect people to expect this difference: near-far (a.k.a. construal level) theory. Acts based on basic principles seem more far than acts based on practical considerations. Acts identified with big groups seem more far than acts identified with small groups. And longer-term influence is also more strongly associated with a far view.

So I tentatively lean toward concluding that this expectation of long term influence from big-group moral actions is mostly wishful thinking. Today’s distribution of moral actions and the relations between large groups mostly result from a complex equilibrium of people today, where random disturbances away from that equilibrium are usually quickly washed away. Yes, sometimes there will be tipping points, but those should be rare, as usual, and each of us can expect to have only a small fractional influence on such things.


Rah Local Politics

Long ago our primate ancestors learned to be “political.” That is, instead of just acting independently, we learned to join into coalitions for mutual advantage, and to switch coalitions for private advantage. Our human ancestors added social norms, i.e., rules enforced by feelings of outrage in broad coalitions. Foragers used norms and coalitions to manage bands of roughly thirty members, and farmers applied similar behaviors to village communities of roughly a thousand.

In ancient politics, people learned to attract allies, to judge who else was reliable as an ally, to gossip about who was allied with who, and to help allies and hurt rivals. In particular we learned to say good things about allies and bad things about rivals, such as accusing rivals of violating key social norms, and praising allies for upholding them.

Today many people consider themselves to be very “political”, and they treat this aspect of themselves as central to their identity. They spend lots of time talking about related views, associating with those who share them, and criticizing those who disagree. They often feel especially proud of how boldly and freely they do these things, relative to their ancestors and those in “backward” cultures.

Trouble is, such folks are mostly “political” about national or international politics. Their interest fades as the norms and coalitions at stake focus on smaller scales, such as regions, cities, or neighborhoods. The politics of firms, clubs, and families hardly engage them at all. Of course such people are members of local coalitions, and do sometimes voice support for enforcing related norms. So they are political there to some extent. But they are much less bold, self-righteous, and uncompromising about local politics, and don’t consider related views to be central to their identity. Such folks are eager to associate with those who sacrifice to improve world politics, but are only mildly interested in associating with those who sacrifice to improve local politics.

This focus on politics at the largest scale is both relatively safe, and relatively useless. On the one hand, your efforts to take sides and support norm enforcement at very local levels are far more likely to benefit you personally via better local outcomes. On the other hand, such efforts are far more likely to bother opposing coalitions, leaving you vulnerable to retaliation. Given these risks, and the greater praise given to those who push politics at the largest scales, it is understandable if people tend to focus on safe large-scale politics, unlikely to cause them personal troubles.

Near-far theory predicts that we’d tend to focus our ideals and moral outrage and praise more on the largest social scales. But a net result of this tendency is that we seem far less effective today than were our ancestors at enforcing very-local-level social norms, and at discouraging related harms from local coalitions. We chafe at the idea of letting our nation be dominated by a king, but we easily and quietly submit to local kings in firms, clubs, and families.

Our political instincts and efforts are largely wasted, because we just are much less able to coordinate to identify and right wrongs on the largest scales. Now to some extent this is healthy. There was a lot of destructive waste when most political efforts were directed at very local politics. But many wrongs were also detected and righted. The human political instinct does serve some positive functions. After all, human bands were much larger than other primate bands, suggesting that human politics was less destructive than other primate politics.

I’ve suggested that organizations use decision markets to help advise key decisions. And to illustrate the idea, I’ve discussed the example of how it could apply to national politics. I’ve done this because people seem far more interested in reforming national politics, relative to reforming local small organizations. But honestly, I see much bigger gains overall from smaller scale applications. And small scale application is where the idea needs to start, to work out the kinks. And such trials are feasible now. If only I could get some small orgs to try. Sigh.

I posted back in ’07 on a hero of local politics:

A colleague of my wife was a nurse at a local hospital, and was assigned to see if doctors were washing their hands enough. She identified and reported the worst offender, whose patients were suffering as a result. That doctor had her fired; he still works there not washing his hands. (more)

I’d admire you much more if you acted like this, relative to your marching on Washington, soliciting door-to-door for a presidential candidate, or posting ever so many political rants on Facebook. Shouldn’t you admire such folks far more as well?


The Need To Believe

When a man loves a woman, …. if she is bad, he can’t see it. She can do no wrong. Turn his back on his best friend, if he puts her down. (Lyrics to “When a Man Loves A Woman”)

Kristeva analyzes our “incredible need to believe”–the inexorable push toward faith that … lies at the heart of the psyche and the history of society. … Human beings are formed by their need to believe, beginning with our first attempts at speech and following through to our adolescent search for identity and meaning. (more)

This “to believe” … is that of Montaigne … when he writes, “For Christians, recounting something incredible is an occasion for belief”; or the “to believe” of Pascal: “The mind naturally believes and the will naturally loves; so that if lacking true objects, they must attach themselves to false ones.” (more)

We often shake our heads at the gullibility of others. We hear a preacher’s sermon, a politician’s speech, a salesperson’s pitch, or a flatterer’s sweet talk, and we think:

Why do they fall for that? Can’t they see this advocate’s obvious vested interest, and transparent use of standard unfair rhetorical tricks? I must be more perceptive, thoughtful, rational, and reality-based than they. Guess that justifies my disagreeing with them.

Problem is, like the classic man who loves a woman, we find it hard to see flaws in what we love. That is, it is easier to see flaws when we aren’t attached. When we “buy” we more easily see the flaws in the products we reject, and when we “sell” we can often ignore criticisms by those who don’t buy.

Why? Because we have near and far reasons to like things. And while we might actually choose for near reasons, we want to believe that we choose for far reasons. We have a deep hunger to love some things, and to believe that we love them for the ideal reasons we most respect for loving things. This applies not only to other people, but also to politicians, writers, actors, and ideas.

For the options we reject, however, we can see more easily the near reasons that might induce others to choose them. We can see pandering and flimsy excuses that wouldn’t stand up to scrutiny. We can see forced smiles, implausible flattery, slavishly following fashion, and unthinking confirmation bias. We can see politicians who hold ambiguous positions on purpose.

Because of all this, we are most vulnerable to not seeing the construction of, and the low motives behind, the stuff we most love. This can be functional, in that we can gain from seeming to honestly, sincerely, and deeply love some things. This can make others that we love, or who love the same things, feel more bonded to us. But it also means we mistake why we love things. For example, academics are usually less interesting or insightful when researching topics where they feel most strongly; they do better on topics of only moderate interest to them.

This also explains why sellers tend to ignore critiques of their products as not idealistic enough. They know that if they can just get good enough on base features, we’ll suddenly forget our idealism critiques. For example, a movie maker can ignore criticisms that her movie is trite, unrealistic, and without social commentary. She knows that if she can make the actors pretty enough, or the action engaging enough, we may love the movie enough to tell ourselves it is realistic, or has important social commentary. Similarly, most actors don’t really need to learn how to express deep or realistic emotions. They know that if they can make their skin smooth enough, or their figure toned enough, we may want to believe their smile is sincere and their feelings deep.

Same for us academics. We can ignore critiques of our research not having important implications. We know that if we can include impressive enough techniques, clever enough data, and describe it all with a pompous enough tone, our audiences may be impressed enough to tell themselves that our trivial extensions of previous ideas are deep and original.

Beware your tendency to overlook flaws in things you love.


Testing An Idealistic-Tech Hypothesis

Katja:

Relatively minor technological change can move the balance of power between values that already fight within each human. [For example,] Beeminder empowers a person’s explicit, considered values over their visceral urges. … In the spontaneous urges vs. explicit values conflict …, I think technology should generally tend to push in one direction. … I’d weakly guess that explicit values will win the war. (more)

The goals we humans tend to explicitly and consciously endorse tend to be more idealistic than the goals that our unconscious actions try to achieve. So one might expect or hope that tech that empowers conscious mind parts, relative to other parts, would result in more idealistic behavior.

A relevant test of this idea may be found in the behavior of human orgs, such as firms or nations. Like humans, orgs emphasize more idealistic goals in their more explicit communications. So if we can identify the parts of orgs that are most like the conscious parts of human minds, and if we can imagine ways to increase the resources or capacities of those org parts, then we can ask if increasing such capacities would move orgs to more idealistic behavior.

A standard story is that human consciousness functions primarily to manage the image we present to the world. Conscious minds are aware of the actions we may need to explain to others, and are good at spinning good-looking explanations for our own behavior, and bad-looking explanations for the behavior of rivals.

Marketing, public relations, legal, and diplomatic departments seem to be analogous parts of orgs. They attend more to how the org is seen by others, and to managing org actions that especially influence such appearances. If so, our test question becomes: if the relative resources and capacities of these org parts were increased, would such orgs act more idealistically? For example, would a nation live up to its self-proclaimed ideals more if the budget of its diplomatic corps were doubled?

I’d guess that such changes would tend to make org actions more consistent, but not more idealistic. That is, the mean level of idealism would stay about the same, but inconsistencies would be reduced and deviations of unusually idealistic or non-idealistic actions would move toward the mean. Similarly, I suspect humans with more empowered conscious minds do not on average act more idealistically.

But that is just my guess. Does anyone know better how the behavior of real orgs would change under this hypothetical?
