A Tale Of Two Tradeoffs
The design of social minds involves two key tradeoffs, which interact in an important way.
The first tradeoff is that social minds must both make good decisions, and present good images to others. Our thoughts influence both our actions and what others think of us. It would be expensive to maintain two separate minds for these two purposes, and even then we would have to maintain enough consistency to convince outsiders a good-image mind was in control. It is cheaper and simpler to just have one integrated mind whose thoughts are a compromise between these two ends.
When possible, mind designers should want to adjust this decision-image tradeoff by context, depending on the relative importance of decisions versus images in each context. But it might be hard to find cheap effective heuristics saying when images or decisions matter more.
The second key tradeoff is that minds must often think about the same sorts of things using different amounts of detail. Detailed representations tend to give more insight, but require more mental resources. In contrast, sparse representations require fewer resources, and make it easier to abstractly compare things to each other. For example, when reasoning about a room, a photo takes more work to study but allows more attention to detail; a word description contains less info but can be processed more quickly, and allows more comparisons to similar rooms.
It makes sense to have your mental models use more detail when what they model is closer to you in space and time, and closer to you in your social world; such things tend to be more important to you. It also makes sense to use more detail for real events over hypothetical ones, for high over low probability events, for trend deviations over trend following, and for thinking about how to do something over why to do it. So it makes sense to use detail thinking for "near", and sparse thinking for "far", in these ways.
It can make sense to have specialized mental systems for these different approaches, i.e., systems best at reasoning from detailed representations, versus systems best at reasoning from sparse abstractions. When something became important enough to think about at all, you would first use sparse systems, graduating to detail systems when that thing became important enough to justify the added resources. Even then you might continue to reason about it using sparse systems, at least if you could sufficiently coordinate the two kinds of systems.
A non-social mind, caring only about good personal decisions, would want consistency between near and far thoughts. To be consistent, estimates made by sparse approaches should equal the average of estimates made when both sparse and detail approaches contribute. A social mind would also want such consistency when sparse and detail tasks had the same tradeoffs between decisions and images. But when these tradeoffs differ, inconsistency can be more attractive.
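The consistency condition above can be made concrete with a toy numeric sketch. This is my own illustration, not from the text: the particular estimate values, and the assumption that a mind combines its two systems by simple averaging, are stand-ins chosen only to show the shape of the argument.

```python
# Toy sketch of the near-far consistency condition (illustrative
# assumptions: estimates are single numbers, and a mind that runs both
# systems combines them by simple averaging).

def combined_estimate(sparse, detail):
    """Estimate used when both sparse and detail systems contribute."""
    return (sparse + detail) / 2

true_value = 10.0
detail = 10.0  # assume the costly detail system is well calibrated

# Consistent (non-social) mind: sparse-only estimates match the
# estimates produced when both systems contribute.
sparse_consistent = 10.0
assert sparse_consistent == combined_estimate(sparse_consistent, detail)

# Social mind: the sparse ("far") system shades its answer toward a
# flattering image, so sparse-only answers diverge from combined ones.
image_bias = 3.0
sparse_social = true_value + image_bias  # 13.0
print(combined_estimate(sparse_social, detail))  # prints 11.5, not 13.0
```

The point of the sketch is only that once the sparse system serves image while the detail system serves decisions, the two kinds of answers systematically come apart, which is exactly the inconsistency the rest of the argument turns on.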
The important interaction between these two key tradeoffs is this: near versus far seems to correlate reasonably well with when good decisions matter more, relative to good images. Decision consequences matter less for hypothetical, fictional, and low probability events. Social image matters more, relative to decision consequences, for opinions about what I should do in the distant future, or about what others or "we" should do now. Others care more about my basic goals than about how exactly I achieve them, and they care especially about my attitudes toward them. Also, widely shared topics are better places to demonstrate mental abilities.
Thus a good cheap heuristic seems to be that image matters more for "far" thoughts, relative to decisions mattering more for "near" thoughts. And so it makes sense for social minds to allow inconsistencies between near and far thinking systems. Instead of having both systems produce the same average estimates, it can make sense for sparse estimates to better achieve a good image, while detail estimates better achieve good decisions.
And this seems to be just what the human mind does. The human mind seems to have different "near" and "far" mental systems, apparently implemented in distinct brain regions, for detail versus abstract reasoning. Activating one of these systems on a topic for any reason makes other activations of that system on that topic more likely; all near thinking tends to evoke other near thinking, while all far thinking tends to evoke other far thinking.
These different human mental systems tend to give systematically different estimates to the same questions, and these inconsistencies seem too strong and patterned to be all accidental. Our concrete day-to-day decisions rely more on near thinking, while our professed basic values and social opinions, especially regarding fiction, rely more on far thinking. Near thinking better helps us work out complex details of how to actually get things done, while far thinking better presents our identity and values to others. Of course we aren't very aware of this hypocrisy, as that would undermine its purpose; so we habitually assume near and far thoughts are more consistent than they are.
These near-far inconsistencies seem to me to reasonably explain puzzles like:
we value particular foreign-born associates, but oppose foreign immigration
we say we want to lose weight, but actually don't exercise more or eat less
we say we care about distant future folk, but don't save money for them
So which of near or far thinking is our "true" thinking? Perhaps neither; perhaps we really contain an essential contradiction, which we don't want to admit, much less resolve.
Added: The key puzzle I'm trying to address here is the fact that hypocrisy is hard. It is hard enough to manage a mind with coherent opinions across a wide range of topics. To manage two coherent systems of opinions, one for decisions and one for image, and then only let them differ where others can't see, that seems really hard. I'm saying the near-far brain division can be handy when facing this problem; let the far system focus more on image, and the near system focus more on decisions.