A Tale Of Two Tradeoffs

The design of social minds involves two key tradeoffs, which interact in an important way.

The first tradeoff is that social minds must both make good decisions and present good images to others. Our thoughts influence both our actions and what others think of us. It would be expensive to maintain two separate minds for these two purposes, and even then we would have to maintain enough consistency to convince outsiders that a good-image mind was in control. It is cheaper and simpler to have one integrated mind whose thoughts are a compromise between these two ends.
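
As a toy illustration of that compromise, here is a minimal sketch (mine, not a model from the post; the function name, the 0-10 scale, and the example numbers are all hypothetical) treating a single reported belief as a weighted blend of the decision-serving estimate and the image-serving estimate:

```python
def compromise_belief(decision_estimate: float,
                      image_estimate: float,
                      image_weight: float) -> float:
    """One integrated mind holds a single belief that trades off
    decision accuracy against social image.

    decision_estimate -- the belief that would produce the best actions
    image_estimate    -- the belief that would look best to observers
    image_weight      -- 0.0 = pure decision mind, 1.0 = pure image mind
    """
    assert 0.0 <= image_weight <= 1.0
    return (1.0 - image_weight) * decision_estimate + image_weight * image_estimate

# Example: "how rewarding is this project?" on a hypothetical 0-10 scale.
honest, flattering = 5.0, 9.0
print(compromise_belief(honest, flattering, image_weight=0.3))  # -> 6.2
```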

When possible, mind designers should want to adjust this decision-image tradeoff by context, depending on the relative importance of decisions versus images in each context.  But it might be hard to find cheap effective heuristics saying when images or decisions matter more.

The second key tradeoff is that minds must often think about the same sorts of things using different amounts of detail. Detailed representations tend to give more insight, but require more mental resources. In contrast, sparse representations require fewer resources, and make it easier to abstractly compare things to each other. For example, when reasoning about a room, a photo takes more work to study but allows more attention to detail, while a word description contains less info but can be processed more quickly and allows more comparisons to similar rooms.

It makes sense to have your mental models use more detail when what they model is closer to you in space and time, and closer to you in your social world; such things tend to be more important to you.  It also makes sense to use more detail for real events over hypothetical ones, for high over low probability events, for trend deviations over trend following, and for thinking about how to do something over why to do it.  So it makes sense to use detail thinking for "near", and sparse thinking for "far", in these ways. 

It can make sense to have specialized mental systems for these different approaches, i.e., systems best at reasoning from detailed representations versus systems best at reasoning from sparse abstractions. When something became important enough to think about at all, you would first use sparse systems, graduating to detail systems when that thing became important enough to justify the added resources. Even then you might continue to reason about it using sparse systems, at least if you could sufficiently coordinate the two kinds of systems.

A non-social mind, caring only about good personal decisions, would want consistency between near and far thoughts.  To be consistent, estimates made by sparse approaches should equal the average of estimates made when both sparse and detail approaches contribute.  A social mind would also want such consistency when sparse and detail tasks had the same tradeoffs between decisions and images.  But when these tradeoffs differ, inconsistency can be more attractive. 
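
In symbols (a reconstruction in hypothetical notation, not notation from the post), the consistency condition can be written as:

```latex
% \hat{x}_S     : estimate when only sparse systems engage
% \hat{x}_{S+D} : estimate of the same quantity when detail systems also contribute
% Averaging over topics that eventually get detailed attention:
\mathbb{E}\!\left[\hat{x}_S\right] \;=\; \mathbb{E}\!\left[\hat{x}_{S+D}\right]
% i.e., graduating a topic from sparse to detailed thought should not
% predictably shift the estimate in either direction.
```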

The important interaction between these two key tradeoffs is this: near versus far seems to correlate reasonably well with when good decisions matter more, relative to good images. Decision consequences matter less for hypothetical, fictional, and low probability events. Social image matters more, relative to decision consequences, for opinions about what I should do in the distant future, or for what others or "we" should do now. Others care more about my basic goals than about how exactly I achieve them, and they care especially about my attitudes toward those people. Also, widely shared topics are better places to demonstrate mental abilities.

Thus a good cheap heuristic seems to be that image matters more for "far" thoughts, while decisions matter more for "near" thoughts. And so it makes sense for social minds to allow inconsistencies between near and far thinking systems. Instead of having both systems produce the same average estimates, it can make sense for sparse estimates to better achieve a good image, while detail estimates better achieve good decisions.
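
Continuing the earlier toy sketch (again hypothetical, not the post's model), this heuristic amounts to making the image weight an increasing function of psychological distance, so far-mode estimates tilt toward image while near-mode estimates stay anchored to decisions:

```python
def image_weight_from_distance(distance: float) -> float:
    """Heuristic: the 'farther' a topic (in time, space, social
    distance, or probability), the more image dominates the blend.
    distance: 0.0 = immediate and concrete ... 1.0 = remote and abstract."""
    assert 0.0 <= distance <= 1.0
    return distance  # simplest increasing choice; any monotone map would do

# Near topic (what to do today): stays close to the honest 5.0.
print(compromise_belief(honest, flattering, image_weight_from_distance(0.1)))  # 5.4
# Far topic (my life in thirty years): drifts toward the flattering 9.0.
print(compromise_belief(honest, flattering, image_weight_from_distance(0.9)))  # 8.6
```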

And this seems to be just what the human mind does.  The human mind seems to have different "near" and "far" mental systems, apparently implemented in distinct brain regions, for detail versus abstract reasoning.  Activating one of these systems on a topic for any reason makes other activations of that system on that topic more likely; all near thinking tends to evoke other near thinking, while all far thinking tends to evoke other far thinking. 

These different human mental systems tend to be inconsistent, giving systematically different estimates for the same questions, and these inconsistencies seem too strong and patterned to all be accidental. Our concrete day-to-day decisions rely more on near thinking, while our professed basic values and social opinions, especially regarding fiction, rely more on far thinking. Near thinking better helps us work out complex details of how to actually get things done, while far thinking better presents our identity and values to others. Of course we aren't very aware of this hypocrisy, as awareness would undermine its purpose; so we habitually assume near and far thoughts are more consistent than they are.

These near-far inconsistencies seem to me to reasonably explain puzzles like:

  • we value particular foreign-born associates, but oppose foreign immigration
  • we say we want to lose weight, but actually don't exercise more or eat less
  • we say we care about distant future folk, but don't save money for them

So which of near or far thinking is our "true" thinking?  Perhaps neither; perhaps we really contain an essential contradiction, which we don't want to admit, much less resolve.

Added:  The key puzzle I'm trying to address here is the fact that hypocrisy is hard.  It is hard enough to manage a mind with coherent opinions across a wide range of topics.  To manage two coherent systems of opinions, one for decisions and one for image, and then only let them differ where others can't see, that seems really hard.  I'm saying the near-far brain division can be handy when facing this problem; let the far system focus more on image, and the near system focus more on decisions.

  • http://retiredurologist.com retired urologist

    I find this to be a very well-written, very informative post. I’d like to ask for help with the implementation of the ideas it presents. As an example (not meant to have any political implications):

    Yesterday, Eric Holder, the nominee for Attorney General, said that water-boarding is torture, and that the United States would not engage in torture, which he said is illegal. It is his responsibility to enforce the laws. He is appointed by the President, who has the responsibility (among others) of ensuring public safety. Water-boarding is said to be extremely effective, with CIA volunteers resisting an average of only 14 seconds, and it is reported that valuable terrorist information has been obtained by using the method. Its use has given the USA a bad image, and authoritative sources have said that it is inhumane. It can be an example of the suffering of a few preventing the suffering of many, widely discussed in an earlier post.

    Given the quote from this post, “Others care more about my basic goals than about how exactly I achieve them”, how can I use the near-far system to come to a proper conclusion about such an issue (or any other issue where the near and far aspects seem to be in conflict)?

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Why is this a feature and not a bug? In some situations rationality is weak, and thoughts diverge from reality more easily. Some emotions work by promoting or demoting certain thoughts (perceptions, expectations, plans), and you can move toward (or away from) those thoughts either by developing a situation where you have corresponding experiences and realistic intentions, or by thinking pie-in-the-sky, where limited connection with reality can’t stop you. For example, we are more afraid of discomfort than we actually suffer from it, we expect to be happier after a good event than we actually become, and we expect to grieve more than we actually do. In each of these cases, weak far thoughts are affected by a given emotion more than strong near thoughts.

    If image is sufficient, so that it’s enough for you to sometimes talk pie-in-the-sky without actually succeeding, social emotions achieve their objective by hijacking only weakly rational thoughts, and there is no need to make them stronger. It looks like the incentive for evolution to specifically create such schizophrenic emotions comes only with the ability of organisms to communicate declarative thoughts. Maybe there is design in this disconnect of social emotions, but maybe it’s just a bug, as with other emotions.

    This view also suggests that once an organism obtains the ability to communicate declarative thoughts (or to methodically process them, e.g. by writing them down or remembering them better, to make realistic plans based on them later), the balance of morality shifts. The effect of all emotions on behavior becomes stronger, to a different degree for different emotions.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    retired, reading your newspaper and thinking about whether torture is ever acceptable is far thinking, while considering torturing a suspect in front of you to get info that would make your day is near thinking.

    Vladimir, yes language could have made our thoughts more visible, increasing image pressures on mind design.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Robin, I guess that’s true, but I wasn’t talking about that.

  • http://retiredurologist.com retired urologist

    retired, reading your newspaper and thinking about whether torture is ever acceptable is far thinking, while considering torturing a suspect in front of you to get info that would make your day is near thinking.

    My question is whether understanding that mechanism has any practical value in the resolution of a conflict that may exist between the conclusions reached by near and far thinking. Or is it only descriptive? Can I learn to use it productively, or is underlying conflict inevitable in some situations?

  • frelkins

    I warm to Vlad’s position: this is a side effect of evolution, and insofar as it hampers us today, it’s a bug. But it seems that in the past it would have been a feature; then, the near would have been much more important to us.

    It’s interesting to examine the wetware here – check out a brain pic. My impression is we shouldn’t be surprised that these two systems are disjointed and uncoordinated in themselves.

    It seems like doing laundry: even stacked on top of each other, my washer ain’t my dryer, and I tend to use them in a definite order (Nobody dries their clothes before washing them, altho’ I can easily move clothes from washer to dryer) or completely independently – but the units are quite separate.

    Now Robin’s response to Vlad about language strikes me with a stick. It appears that it would have been quite beneficial for humans to have improved the linkage between these two areas, but instead we learned to talk, thus directing more power into the social communication & near system.

    It also seems to explain why we have such difficulty speaking precisely (requires more “sparse” abstraction and more heavily using the “far,” which we ain’t good at).

    In the end it does seem like a brain region co-ordination problem. I would love to see the experiments in the Science article performed on people in MRIs. Then we could see where the two systems are in use and how strongly they link/interact/activate in different tasks.

    Robin’s arguing that since we need hypocrisy, and for hypocrisy to be successful we have to believe the lies we’re telling, popping the hypocrisy onto this disjointed system is functionally convenient. Ok, I’m buying that.

    Most people reading this are going to be frustrated by this problem and want to consider engineering solutions, or maybe developing some awesome Zen meditation that allows us to practice forcibly linking these regions.

    However, then we might be less socially successful, since to live together nicely we unfortunately appear to need to lie well to both ourselves and other people. I think we’re stuck, no?

  • Johnicholas

    This is very interesting and provocative.

    If I understand correctly, this theory would suggest that individuals in groups (such as this one) that treat hypocrisy and irrationality as fiercely antisocial vices will make better decisions, but present a worse image to others.

    Evolutionary psychology is tricky stuff; just-so stories are both convincing and easy to create. More experiments will be valuable.

  • Philo

    The broad idea seems promising, but the applications are unpersuasive. “[W]e value particular foreign-born associates, but oppose foreign immigration.” We probably think our valued associates are not typical of immigrants. If we had detailed information about every immigrant, we might well judge the great majority of them to be undesirable as associates. “[W]e say we want to lose weight, but actually don’t exercise more or eat less.” This is probably just mental laziness or weakness of will. The more detailed knowledge I have of my present physical state, the more I judge that I ought to stop eating and go work out. I fail to do it simply for lack of virtue. “[W]e say we care about distant future folk, but don’t save money for them.” Again, why think that a more detailed knowledge of alternative possible futures would change our abstract judgment that we ought to show concern for the interests of future people? We’re just taking the easy, selfish course; we know abstractly *and would know in detail* (if we bothered to gather the detailed information) that this is wrong.

    Maybe your theory should be that the one mental module generates decisions that are in one’s short-term self-interest, the other decisions in accord with utilitarian moral philosophy, serving the interests of everyone (including one’s own *future* self). Might *this* be the real inherent contradiction?

  • http://jamesdmiller.blogspot.com/ James D. Miller

    An example of the first tradeoff:

    Rationally evaluate when I should attack my enemies. Convince my potential enemies that if they attack my family I will seek revenge regardless of the cost to me.

    In some minds the first criterion dominates and the person is a “coward”; in others the second dominates and the person is always starting fights.

    Perhaps this tradeoff explains why humans have such difficulty ignoring sunk costs. I wonder if the economics students who have the most difficulty understanding why businesses should ignore sunk costs are the most likely to be violent?

  • billswift

    Thomas Schelling, in “Strategy of Conflict”, claims that deterrence requires convincing potential opponents that you will retaliate regardless of whether it is rational to do so. As he points out, once the enemy has attacked, retaliating is less rational than rethinking how to proceed from that point; therefore, if you are going to deter an attack, you must convince any potential attackers that you are crazy enough to retaliate no matter the consequences, or you must have in place preparations that will retaliate automatically. I haven’t reread this section yet, and I only read it 15 years ago, so I might be misremembering details, but I disliked his conclusions enough that I paid close attention at the time.

  • billswift

    Also you might see Jane Jacobs’s “Systems of Survival”, where she contends that there are two distinct ethical systems: exchange (appropriate for business and economics) and guardian (appropriate for military and defense). She makes a good argument that these two antithetical systems are both necessary for a successful society, but that many social problems are caused by using one where the other is more appropriate, or worse, by creating mixtures that cannot work. I mention this because “sunk costs” and “retaliation” cannot be effectively compared to each other. I do wonder whether the excessive honoring of sunk costs may be the result of inappropriately applied guardian morality. (If I can find my copy I’ll see if Jacobs addressed that point and I just don’t remember.)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    James, yes that is an example of the first tradeoff.

    bill, there are many proposals for how the brain divides into two systems; the far/near divide proposal seems to me to be based on much more diverse and compelling evidence than most other such proposals.

    Philo, it seems to me you are just illustrating how our brains are practiced at making up excuses for the contradictions we consistently generate. What else is “weakness of will” but just a name for a puzzling inconsistency?

    Johnicholas, yes, if a group can see hypocrisy and shame it, that should reduce hypocrisy and lower that group’s image in the view of outsiders. In that sense we are indeed stuck, as frelkins says.

  • billswift

    Jacobs’s system is not a “brain” divide, but a cultural difference; and I mentioned it mainly as a counter to James Miller’s comparisons of sunk costs and retaliation (revenge). In fact, the lists of different values she presents for exchange and guardian moralities both have near and far aspects/consequences.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Certainly an interesting and promising theory! Two questions present themselves: how to test it; and can we think of any apparent counterexamples? A counterexample would be something that would activate the “near” module but where we are more socially deceptive than truthful, or vice versa.

    Well, what about love? Most people would say that their loved ones are “near and dear”, that they feel tremendous closeness to them. And yet as we have discussed, this is an area where it seems that we are often more manipulative than truthful, and act more in accordance with social norms than our own self-interest. We talked about romantic love recently, but as another example, happiness studies show that child care is actually perceived as onerous and unpleasant, while most people will claim that it is the happiest and most joyful part of their lives.

  • Benja Fallenstein

    If right, this seems to me to elegantly explain “running away from a problem by declaring it impossible” as staying in FAR mode, whether out of a desire not to look stupid, or because abstract thinking seems more appropriate to the problem, or both, when you would need to think NEAR to make progress.

    I wonder whether thinking about it like this can help me at those times when I know I really want to think about a problem in detail, but my mind just keeps rehashing the intuitions I’ve come up with in the past…

    Certainly it seems to explain why “not running away from the problem” seems like a particular specific thing you can do differently.

  • Chad

    This reminds me more than a little bit of the work on picoeconomics that somebody linked to previously on this blog: [Breakdown of Will (pdf)](http://www.picoeconomics.com/aBreakdown_Will.pdf)

    While I agree with some of what Robin is getting at here, I am not so sure that all of Robin’s examples match up with a “near/decision vs far/image” tradeoff rather than with a willpower tradeoff. For example, I don’t think most people say they want to lose weight because it’s good to be perceived as wanting to be thinner, but because they actually want the benefits attendant on being thinner (whether that be health, attractiveness, or whatever).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Hal, the theory isn’t of an exact correspondence, but just that farness is a good heuristic for when image is more important. People claim parenting is joyful in far mode, thinking about the future, but not so much in near mode, about this moment when the kid is in front of you. In far mode we say we would go to the ends of the Earth for our love, but in near mode we don’t.

    Chad, will power makes no sense without a conflict between different internal systems.

    Rosa, the question is why we don’t notice in far mode that there are costs of losing weight. Sure we might not notice all the details, but why do we get it so wrong?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I was reading this and thinking “Click” – I’m proud to say that I got the punchline before reading it, though not before starting the post, alas. But I can also think of a couple of points that seem anomalous in this light, e.g.:

    1) (Good / professional / publishable) authors have to force highly detailed visualizations in order to write.

    2) The outside view is less optimistic than the inside view and much more accurate.

    Should one essay a more detailed model to account for such relatively anomalous points? Though they may not be quite anomalous; for example, you could suggest “Authors, though biased, are less biased than people having fully abstract discussions in bars”, and this could be tested.

    This seems like an important schema, but not everything seems to quite fit it; and I’ll have to let that bake and see if I notice a pattern to the exceptions.

  • http://transhumangoodness.blogspot.com Roko

    Robin: So which of near or far thinking is our “true” thinking? Perhaps neither; perhaps we really contain an essential contradiction, which we don’t want to admit, much less resolve.

    – the grim truth of the matter is that we probably contain lots of contradictions, especially with regard to our preferences.

    We in this rationalist community seem to fall into the trap of thinking that our minds implement some abstract set of preferences, though imperfectly. A more accurate model might be to think of the mind as an input/output machine with the property that in some contexts some of its behaviors can be approximated as “implementing preferences”. Globally, though, there is absolutely no reason why our behaviors and opinions should conform to anything consistent. An optimist would call that “part of being human”. A pessimist/realist would call it “cognitive bias”.


  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, those are indeed good items to ponder.

  • http://www.physics.ucsb.edu/People/person.php3?userid=mike Mike Blume

    retired, I believe I may be slipping off topic, but it is my understanding that the issue of torture is not as simple a cost/benefit trade as your post implies. Specifically, torture does not tend to produce accurate information, but rather tends to cause the subject to say whatever he/she thinks will end the pain. This, of course, is usually whatever the interrogator *already wants* to hear. This, along with its moral problems, explains why torture has been primarily used by self-interested regimes, concerned more for their image than for their decisions.

    I seem to have drifted back within striking distance of the topic towards the end there. Odd, that.

    Of course, I do not have double-blind tests confirming these claims, nor do I ever expect them to be done, and I suppose it is entirely possible that these claims were made by commentators trying to make a difficult political issue more one-sided.

  • http://profile.typekey.com/huono_ekonomi/ Mikko

    Robin:

    Preference between concrete and abstract thinking is probably a personal trait, not a universal one. Most people specialize in concrete thinking, some specialize in abstract thinking, and maybe 5% can easily jump between different abstraction levels.

    In software engineering, concrete-thinking people like bottom-up design, while abstract-thinking people like top-down design. An abstraction-level jumper (like the architect) is needed for balance. Otherwise you either suffer from bad but beautiful abstractions or spend too much time arguing about nitty-gritty details.

  • Grant

    frelkins,

    I believe it’s only a ‘bug’ from a macro (entire race) perspective. From the perspective of the individual it is very rational to separate image from reality. For the entire race this is of course a disadvantage. It seems like a classic prisoner’s dilemma to me. So in that sense we aren’t ‘stuck’, as prisoner’s dilemmas can be overcome (though maybe only by re-designing our minds?) provided the cost of coordination isn’t too high. It’s difficult to imagine this happening with natural selection as it currently exists, since we depend so heavily on deception in order to attract mates.

    Actually I think this model fits in with the sex games people play pretty well.

    Perhaps the best solution would allow the ‘image’ part of our minds to fade away when dealing with people who we also expect to cooperate with us by revealing their ‘true’ natures. Or do we already do this? We’re often much more polite to strangers than close friends.

    FWIW, I think this is one of the best posts I’ve read on OB.

  • http://retiredurologist.com retired urologist

    @Mike Blume, et al:

    In the first comment on this post, I asked: how can I use the near-far system to come to a proper conclusion about such an issue (or any other issue where the near and far aspects seem to be in conflict)? Hanson’s response did not address the question at all, and I asked again: My question is whether understanding (the near-far) mechanism has any practical value in the resolution of a conflict that may exist between the conclusions reached by near and far thinking. Or is it only descriptive? Can I learn to use it productively, or is underlying conflict inevitable in some situations?

    So, again, is near-far only descriptive, or is it a mechanism that can be consciously controlled to improve decisions? If cognition is dichotomous, can one choose to be in only one branch, or is it hard-wired? If one believes he is using only the near or only the far to address a topic, is it self-deception? Is conflict between near and far thinking a source of “existential angst”, and is it inevitable?

  • http://profile.typekey.com/huono_ekonomi/ Mikko

    Interestingly, abstract thinking may prevent concrete thinking, and vice versa. How an issue was framed when we first approached it may influence our thinking about it in the future.

    For example, in Finland there seems to be a conflict between regular people, who deem modern architecture ugly, and architects, who claim that people’s taste is just uneducated.

    It seems that regular people view buildings more abstractly than architects do, and trained architects are no longer able to view buildings in this manner. They always resort to talking about individual characteristics of a building, never about the building as a whole.

  • androit

    retired: It’s a hard-wired product of evolution that rationalists must make conscious allowances/adjustments for. And while it may get bandied about amongst jargoneers under the moniker “construal level theory,” the idea is also out there in the popular culture. See: “Stumbling on Happiness.”

  • http://profile.typekey.com/aroneus/ Aron

    “It can make sense to have specialized mental systems for these different approaches, i.e., systems best at reasoning from detailed representations, versus systems best at reasoning from sparse abstractions. ”

    I would question this hypothesis. It seems perfectly reasonable to expect that the same basic mental architecture grows quite easily from abstract to concrete performance as information improves, along a continuous spectrum. Perhaps an unconvincing analogy, but OOP has the same basic architecture regardless of where in the super/sub class hierarchy you are operating.
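
    A minimal sketch of that analogy, with hypothetical classes: one interface serves both a sparse superclass and a detail-rich subclass, so moving from abstract to concrete is refinement on the same architecture rather than a swap of machinery.

    ```python
    class RoomModel:
        """Sparse, abstract representation: a few comparable features."""
        def __init__(self, size: str, style: str):
            self.size, self.style = size, style

        def describe(self) -> str:
            return f"a {self.size} {self.style} room"

    class DetailedRoomModel(RoomModel):
        """Concrete subclass: same interface, more information."""
        def __init__(self, size: str, style: str, contents: list[str]):
            super().__init__(size, style)
            self.contents = contents

        def describe(self) -> str:
            # Refines, rather than replaces, the abstract description.
            return super().describe() + " containing " + ", ".join(self.contents)

    # The same call works at any level of the hierarchy:
    for room in (RoomModel("small", "modern"),
                 DetailedRoomModel("small", "modern", ["a desk", "a lamp"])):
        print(room.describe())
    ```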

    My general impression is that concrete thinking is akin to a system with more extensive ‘training’ on more ‘data’. Thus, any notion of pulling out abstract thought and plugging in concrete thought seems nonsensical, aside from the process of training or learning to get from one to the other. It is also nonsensical to approach a new subject in concrete-thinking mode. One starts at abstract, and moves to concrete.

    Likewise, I think one could easily concrete-think on distant future topics, but you are not guaranteed to have ‘trained’ your mind to operate on ‘data’ (which can be your own prior conclusions) that is provably connected to reality. In order to concrete-think about the future, all you have to do is practice. However, it is quite easy to build a castle on sand and be unaware of it: to have a detailed, but erroneous, account of any subject. I hope there is no irony there. :p

  • Chad

    Robin: Chad, will power makes no sense without a conflict between different internal systems.

    My point was not that there isn’t a conflict between two internal systems in willpower tradeoffs. I was trying to point out that describing all such conflicts as being between “better decisions vs. better image” seemed like an overgeneralization.

  • Chad

    And in any case, if you haven’t read “A Breakdown of Will” before, I highly recommend it (http://www.picoeconomics.com/aBreakdown_Will.pdf) — the most thought-provoking writing I’ve come across recently (OB aside, of course).

  • Patri Friedman

    This is fascinating! I like how this theory fits with other, more specific examples of bias, such as _Stumbling On Happiness_, and voting as signalling tribal identification rather than being accurate. And the research on deliberation where people’s views become more extreme after talking to like-minded people. If deliberation were an exchange of details using the “near” system, it should make people’s views more informed, but if it is an exercise in proving one’s values by using the “far” system, it makes sense that it drives views to the extreme.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Retired U, I’d suggest that it is a waste of time to worry about what your policy should be towards torture. You probably aren’t wondering if you should start (or stop!) torturing people. You probably aren’t even in a position to materially influence whether anyone else is torturing people. The belief that you should spend time on this issue is exactly the sort of self-serving bias that this blog is intended to eliminate. IMO.

  • billswift

    “the sort of self-serving bias”

    I think you must mean self-deluding; worrying about something you can’t even affect doesn’t fit any use of “self-serving” I’ve ever come across.

  • http://retiredurologist.com retired urologist

    @Hal Finney:

    I didn’t say I was concerned about the torture issue; I said it was an example of an issue where near and far thinking might give conflicting results. I asked how one should deal with issues that have such conflict. Your ad hominem comment does not advance the so-called “intentions” of this blog. IMO.

  • Matt C

    Chad, thank you for posting the link to A Breakdown of Will. Thank you for posting it twice, because I only bothered to follow the URL after you reiterated. Interesting stuff.

    Retired U: “So, again, is near-far only descriptive, or is it a mechanism that can be consciously controlled to improve decisions?”

    Most people have near-far conflicts they have trouble resolving consciously. I have known for a long time that I should exercise more. But, near-far is a description of our thinking that might lead to useful progress, where we learn better tricks for resolving these conflicts, and also identify more cases where either near or far thinking tends to be in error.

  • John Maxwell

    To be consistent, estimates made by sparse approaches should equal the average of estimates made when both sparse and detail approaches contribute.

    Isn’t this goal recursive? Why not just say “estimates made by sparse approaches should attempt to approximate estimates made by detailed approaches”?

  • http://noblesseoblige.org/wordpress Thanos

    What of the intermediary decision zone between near and far? E.g., we often make quite detailed plans for vacations, and decisions about them, while having only fuzzy, limited details and vague notions of what we think we would like. Compare to chess or Go strategy, with their opening, middle game, and end game, or even to Delany’s concept of thinking borrowed from earth science: simplex, complex, multiplex.

  • Marshall

    A lovely post!
    My own conclusion is that “getting things right” is much more important than “being seen as nice”. Decisions are more important than image. And then I think both systems (near and far) can be used. A map of the underground is a wonderful example of “sparse thinking”, and it is at the same time wonderfully useful in the near decision of which platform to stand on.
    Thus I would agree with Hal that the issue of torture (in this context) is image, and should be rejected as a near solution to the question Retired posed on how to use the model in practice. When Hal ends his post with “IMO”, this should be regarded as a social (far) signal of near/decisional modesty. When Retired ends his post with “IMO”, this should be regarded as a social signal of social immodesty.
    In other words, my answer to your question, Retired, is to throw away the harvesting of social benefits.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Thanos, I expect we have a continuum of systems for varying levels of detail. I’ve tried to write everything to be consistent with that.

  • http://profile.typekey.com/aroneus/ Aron

    “I expect we have a continuum of systems for varying levels of detail.”

    Is this supportable? We expect memory and expertise to be encoded in a connectionist fashion. How unfortunate it would be to have to continuously transfer these memories to new systems (read: new areas of the brain) as more detail became available or our interest in the subject increased. A property of all learning is starting with little detail and moving to more detail. Our minds will be optimized for that pattern. In your description, this learning process cuts across systems every time.

    My opinion here is that near/far (really just concrete/abstract) is a matter of the amount of resources devoted, which in turn is a function of the details available to consume and whether the problem merits the effort. However, it’s all applied on the same basic machinery. If abstract thinking is as far as you get on a subject, it is because:
    a) You are unwilling to process the details available.
    b) There are no details available.
    c) You are unwilling to make-up details.
    d) The subject is too complex for you (your concrete conclusions repeatedly fail verification)

    The bias towards image making rather than accurate projection of current behavior in far-off descriptions of self is real but seems to require a different explanation. We admit that to synch these up is a hard problem and that may be precisely why it exists. Eating pizza today does not directly falsify the goal to be skinny.
