# We Add Near, Average Far

Quick, what is the best gift you ever got from a woman? From your parents? From a left-handed person? From a teacher? These aren’t easy questions to answer. But they seem easier than these questions: What is the total value of all the gifts you ever got from women? From your parents? From left-handed folks? From teachers?

For the first set of questions you can try to think of examples of particular people in those categories, and then think of particular gifts you got from those particular people. That can help you guess at the best gift from those categories. But to estimate the total value of gifts from people in categories, you’ll have to also estimate how many gifts you ever got from folks in each category.

Note that it also seems easy to estimate the average value of gifts from each category. To do this, you need only remember a few gifts that fit each category, and then average their values.

As another example, imagine you are looking at a building entrance laid out in multi-colored tiles. Some tiles are blue, some red, some green, etc. You are looking at it from a distance, at an angle, in variable lighting. In this situation it will be much easier to estimate whether there is more blue than red area in the tiles than to estimate how many square inches of blue tile area are in that entrance. This latter estimate requires you to additionally estimate distances to reference points, in order to estimate the total surface area.

These examples suggest that when we think in far mode, without a structured systematic representation of our topic, it is usually easier to average than to add values. So averaging is what we’ll tend to do. All of which I mention to introduce a fascinating paper that I just noticed, even though it got a lot of publicity last December:

> This analysis introduces the Presenter’s Paradox. Robust findings in impression formation demonstrate that perceivers’ judgments show a weighted averaging pattern, which results in less favorable evaluations when mildly favorable information is added to highly favorable information. Across seven studies, we show that presenters do not anticipate this averaging pattern on the part of evaluators and instead design presentations that include all of the favorable information available. This additive strategy (“more is better”) hurts presenters in their perceivers’ eyes because mildly favorable information dilutes the impact of highly favorable information. For example, presenters choose to spend more money to make a product bundle look more costly, even though doing so actually cheapened its value from the evaluators’ perspective. (more)
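The arithmetic behind the paradox is simple enough to sketch. In this illustrative toy model (the favorability scores and function names are made up, not from the paper), a presenter who sums sees every above-neutral item as a gain, while an evaluator who averages sees a mildly favorable item as dilution:

```python
# Toy sketch of the Presenter's Paradox arithmetic.
# Favorability scores (0-10 scale) are illustrative, not from the paper.

def presenter_value(items):
    """Presenters process piecemeal: each above-neutral item adds value."""
    return sum(items)

def evaluator_value(items):
    """Evaluators judge holistically: roughly an average impression."""
    return sum(items) / len(items)

strong_only = [9, 9]      # two highly favorable features
padded = [9, 9, 3]        # the same two, plus one mildly favorable feature

# Adding the mild item raises the presenter's total (21 vs 18)...
assert presenter_value(padded) > presenter_value(strong_only)
# ...but lowers the evaluator's average impression (7 vs 9).
assert evaluator_value(padded) < evaluator_value(strong_only)
```

The same two-function contrast reappears in every example below: sums reward inclusion, averages punish padding.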

The authors attribute this to a near-far effect:

> Presenters face many pieces of potentially relevant information and need to determine, in a bottom-up fashion, which ones to include in a presentation. This presumably draws attention to each individual piece of information as a discrete entity and a focus on piecemeal processing. If a given piece of information exceeds a neutrality threshold, the presenter will conclude that it is compatible with the message he or she seeks to convey and will include it. This results in presentations that would fare better under an adding rather than averaging rule. In contrast, evaluators’ primary task is to make a summary judgment of the overall presentation, which fosters a focus on holistic processing and the big picture and results in an averaging pattern as observed in many impression formation studies.

Additional experiments confirm this near-far interpretation. Those who prepare presentations and proposals tend to focus on them in detail, and so add part values in near mode style, while those who consume such presentations or proposals tend to pay much less attention, and so average their values in far mode style.

This result seems to me quite pregnant with interesting implications, none of which were mentioned in the dozen blog posts on the subject that have appeared since last December. So I guess it’s up to me.

First, this result predicts the usual academic advice to delete publications in low-ranked journals from your vita. Yes, those extra publications took extra work, and show more total intellectual contribution, but distracted readers evaluate you by averaging your publications, not adding them.

Second, this also predicts that academia will tend in general to neglect conclusions suggested by lots of weak clues, relative to conclusions based on a single strong theory or empirical comparison. People with a practical understanding of particular areas will correctly complain that academics tend too much to latch on to a few easy to explain and justify arguments, at the cost of lots of detail that practitioners appreciate.

Third, this predicts that in morality and politics, which are especially far sorts of topics, arguments tend to be won by those who push simple strong principles, even though people privately tend to choose actions that deviate from such principles. For example, laws say no one can get medical advice from non-doctors, on the grounds that docs know best, yet given a private choice most of us would often let other considerations convince us to listen to non-docs. While actions tend to be chosen in a near mode where lots of other weaker considerations get added, people know their best chance of winning an argument with a distracted audience is to focus on their one strongest point.

Fourth, this predicts Tetlock’s hedgehogs vs. foxes result. Foreign policy is an especially far-view sort of subject, and experts who focus on one strongest consideration get the most respect and attention, but experts who rely on many considerations, which are on average weaker, are more accurate.

Futurism is probably the most far-view sort of topic, so I’d guess that all this holds there the most strongly. That is, while the futurists who get the most attention from distracted audiences are those who harp endlessly on one clear plausible idea, the most accurate futurists are probably those who know and use hundreds of clues, many of them weak. Alas this is a problem for those of us who want to consider some aspect of the future in detail, since we quickly run out of strong principles, and then have to rely more on many weak clues.

Added Nov 25, 2012: This post gives data showing people donate money based more on the average than the total sympathy of the recipients. So you are better off asking for donations to help a particular especially sympathetic recipient, than to help many such folks.

• Arthut

Just did a blog search on “Thinking Fast and Slow” and Kahneman on your blog.

You never cited his book, and talked about him only twice: once in a few posts in Foreign Policy, and again about calibration in chess.

This seems odd, as his two thinking systems have a lot of analogies to your Far and Near modes. I just remembered this because he devotes a substantial part of his book to how one system sums and the other averages.

Just thinking: wouldn’t it be interesting to you, and to your readers, if you shared your thoughts on how your systems are different, what you think are his mistakes, the differences in frames, and where you agree with him?

• http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

My take (I’m preparing a posting on my blog) is that near and far mode involve different inter-relations between System 1 and System 2. I don’t think they can be reduced to them. It’s true that System 1 is less analytic, but it’s also true that it’s less abstract than System 2.

The neurophysiological underpinning of far mode is that it involves processing higher in the nervous system: that is, subjected to deeper analysis. That fits badly with System 1. (Right- and left-hemisphere is another dichotomy that sort of sounds related but turns out not to have any simple relationship.)

In short, the relationship between Systems 1 and 2 and far and near mode begs for clarification, but the relationship isn’t a straightforward correspondence. There’s no compelling reason why Robin needs to rush to clarify the relationship, when the psychologists doing the research haven’t clarified it and I, after all, want to beat him to it.

• http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

It’s true that System 1 is less analytic, but it’s also true that it’s less abstract than System 2.

Should be more abstract

• http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

My essay integrating System 1 & 2 with construal-level theory is posted. “The deeper solution to the mystery of moralism—Morality and free will are hazardous to your mental health.” (http://tinyurl.com/9exlxlk)

• Steve Reilly

I wonder how this squares with basic fallacies about donating to multiple charities.  As far as I can tell, a person who donates to help feed starving children isn’t seen as quite as generous as a person who helps starving children AND helps clothe naked mole rats, even if total amount donated is the same for each person.  But the averaging principle should make us see the latter as doing less good.

• richatd silliker

The problem for presenters and evaluators is one of having to draw conclusions. However, this behaviour seems only to present itself in hierarchical structures?

Gifts can be many things, not just objects. They can be experiences.

The greatest gifts given to me were the behaviors of others, good or bad, that helped me build my intuition. As speech is a behavior, many of the gifts were presented to me in narratives. Others were the attributes of truth that lay unexpressed within me and needed to be brought to the surface to be realized. This could be accomplished by benevolent dictators, loving friends and family.

So when somebody gives you an object as a gift, there is also the experience you have, which is a gift.

I’ve read or heard that in Japan the practice of gift giving is one in which you give a person a gift that they can give to their partner. Sounds like a good practice to me.

The real gift is in giving.

We add near, average far depends on the context of the abstraction.

• http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

I contend that abstraction is far, sequencing is near: http://tinyurl.com/7faf9nz

• richatd silliker

In my world abstraction is high I. You can have abstractions which are high O, but these are abstractions of abstractions. An example of this is economics, which is high O.

• http://juridicalcoherence.blogspot.com/ Stephen R. Diamond

What do I and O mean?

• richatd silliker

inward and outward.

• richatd silliker

Could sequencing in this case be considered hierarchical?

• Nancy Lebovitz

This suggests that the geekish desire to tell the whole story, including the weakening parts, is a disaster for getting ideas across.

• arch1

Seconding Arthut’s comment/query, and adding a few details (from memory, sorry) from Thinking Fast and Slow:

1) Kahneman’s “System 1” (our quick, intuitive, unconscious mode) finds it easy to identify a line segment of average/typical length from among a “pick-up-sticks” batch of line segments displayed onscreen, but difficult to quantify the sticks’ total length.

2) When presented with side-by-side choices of A) a tableware set in good condition, and B) a set consisting of A plus extra matching plates, some good and some broken, subjects report willingness to spend slightly more for B; but when disjoint sets of subjects are asked how much they would spend for A or B *singly*, B is on average valued *less* than A. (Analogous tests have been run on eBay, with similar results.)

3) TFaS cites a number of interesting ways to encourage engagement of “System 2” (our heavyweight, analytical, conscious, less easily fooled mode). These include a) making text tough to read (e.g. small font, light gray shading), and b) gripping a pencil lengthwise with the lips (which engages the same frown muscles that tend to be engaged when the going gets tough generally; conversely, gripping a pencil *crosswise* with the *teeth* engages the *smile* muscles, which tends to *reduce* System 2’s engagement).

• dislikedisqus

Consistent with the Presenter’s Paradox is the observation that it is very hard to get juries to convict in complex financial prosecutions, where the case tends to consist of a laborious presentation of small bits of evidence. This is especially so given that the standard “beyond a reasonable doubt” applies, and the jury often appears to equate the absence of a “smoking gun” with “reasonable doubt”, when all that may be going on is that they were just averaging. Or sleeping. I don’t know where commercial litigation fits into the near/far dichotomy.

• Stefano Bertolo

The Affect Heuristic shows that we are actually quite willing to average in the very near, and that this might be an evolutionarily fairly ancient heuristic.

• DonaldWCameron

Hard for me to read this. Imagine three circles: a smaller inner circle surrounded by two successively larger circles.
The innermost circle encapsulates our likes. The second circle encapsulates our dislikes. The third encapsulates all that lies outside of our experience; we neither like nor dislike it.

Each person has a distinct set of circles.

In this context we have a graphic example of one “innate” set of near-far relationships, and a “metaphorical” set of near-far relationships.
Granted, in this plausibly “far” example, we are limited for the sake of comprehension to only straight-line examples from the center of the inner circle directly into the outer circles.

With the innate set, one is merely required to straight-line-pull some outwardly and across their likes through their dislikes into their indifference (the future).

With the metaphorical set, some will have centers spaced quite widely apart, specifically those whose “likes” lie on the “far” side of one’s “likes”.

The only way averaging can work to the protagonist’s advantage is with those whose “biases” are closest to each other.

respectfully

• http://overcomingbias.com RobinHanson

I just added to this post.