Category Archives: Psychology

Getting Nearer

Reply to: A Tale Of Two Tradeoffs

I'm not comfortable with compliments of the direct, personal sort, the "Oh, you're such a nice person!" type stuff that nice people are able to say with a straight face.  Even if it would make people like me more – even if it's socially expected – I have trouble bringing myself to do it.  So, when I say that I read Robin Hanson's "Tale of Two Tradeoffs", and then realized I would spend the rest of my mortal existence typing thought processes as "Near" or "Far", I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.

Among other things, this clears up a major puzzle that's been lingering in the back of my mind for a while now.  Growing up as a rationalist, I was always telling myself to "Visualize!" or "Reason by simulation, not by analogy!" or "Use causal models, not similarity groups!"  And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like.

But later, I learned about the Outside View versus the Inside View, and that people asking "What rough class does this project fit into, and when did projects like this finish last time?" were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects.  And this didn't seem to fit very well with my injunction to "Visualize!"

So now I think I understand what this principle was actually doing – it was keeping me in Near-side mode and away from Far-side thinking.  And it's not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).

Continue reading "Getting Nearer" »


A Tale Of Two Tradeoffs

The design of social minds involves two key tradeoffs, which interact in an important way.

The first tradeoff is that social minds must both make good decisions, and present good images to others.  Our thoughts influence both our actions and what others think of us.  It would be expensive to maintain two separate minds for these two purposes, and even then we would have to maintain enough consistency to convince outsiders that a good-image mind was in control. It is cheaper and simpler to just have one integrated mind whose thoughts are a compromise between these two ends.

When possible, mind designers should want to adjust this decision-image tradeoff by context, depending on the relative importance of decisions versus images in each context.  But it might be hard to find cheap effective heuristics saying when images or decisions matter more.

The second key tradeoff is that minds must often think about the same sorts of things using different amounts of detail.  Detailed representations tend to give more insight, but require more mental resources.  In contrast, sparse representations require fewer resources, and make it easier to abstractly compare things to each other.  For example, when reasoning about a room, a photo takes more work to study but allows more attention to detail; a word description contains less info but can be processed more quickly, and allows more comparisons to similar rooms.

Continue reading "A Tale Of Two Tradeoffs" »


Seduced by Imagination

Previously in series: Justified Expectation of Pleasant Surprises

"Vagueness" usually has a bad name in rationality – connoting skipped steps in reasoning and attempts to avoid falsification.  But a rational view of the Future should be vague, because the information we have about the Future is weak.  Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge – the power of pleasant surprises.

But there's also a more severe warning that I must deliver:  It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them.  It can suck the emotional energy out of your actual, current, ongoing life.

Epistemically, we know the Past much more specifically than the Future.  But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come and how much better we're doing, rather than comparing your life to an imagined future and thinking about how awful you've got it Now.

Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live – having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens – I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there.  That may suck out your soul like an emotional vacuum cleaner.

Continue reading "Seduced by Imagination" »


Data On Fictional Lies

A spectacular paper analyzes a dataset of 519 Victorian-literature experts describing 382 characters from 201 canonical British novels of the nineteenth century.  Characters were described by gender; as major or minor; as good or bad; by role (protagonist, antagonist, friend of protagonist, friend of antagonist, or other); by a five-factor personality type (from a ten-question instrument); by their (5-point-scale) degree of twelve different motives (converted to five factors: social dominance, constructive effort, romance, nurture, subsistence); and by the degree of ten different emotions they arouse in readers (converted to three factors: dislike, sorrow, interest). Experts agreed 87% of the time.  They found:

Antagonists virtually personify Social Dominance – the self-interested pursuit of wealth, prestige, and power. In these novels, those ambitions are sharply segregated from prosocial and culturally acquisitive dispositions. Antagonists are not only selfish and unfriendly but also undisciplined, emotionally unstable, and intellectually dull. Protagonists, in contrast, display motive dispositions and personality traits that exemplify strong personal development and healthy social adjustment. Protagonists are agreeable, conscientious, emotionally stable, and open to experience. … The male protagonists in this study are relatively moderate, mild characters. They are introverted and agreeable, and they do not seek to dominate others socially. They are pleasant and conscientious, and they are also curious and alert. They are attractive characters, but they are not very assertive or aggressive characters. …

Continue reading "Data On Fictional Lies" »


Disagreement Is Near-Far Bias

Back in November I read this Science review by Nira Liberman and Yaacov Trope on their awkwardly-named "Construal level theory", and wrote a post I estimated "to be the most dense with useful info on identifying our biases I've ever written":

[NEAR] All of these bring each other more to mind: here, now, me, us; trend-deviating likely real local events; concrete, context-dependent, unstructured, detailed, goal-irrelevant incidental features; feasible safe acts; secondary local concerns; socially close folks with unstable traits. 

[FAR] Conversely, all these bring each other more to mind: there, then, them; trend-following unlikely hypothetical global events; abstract, schematic, context-freer, core, coarse, goal-related features; desirable risk-taking acts, central global symbolic concerns, confident predictions, polarized evaluations, socially distant people with stable traits. 

Since then I've become even more impressed with it, as it explains most biases I know and care about, including muddled thinking about economics and the future.  For example, Ross's famous "fundamental attribution error" is a trivial application. 

The key idea is that when we consider the same thing from near versus far, different features become salient, leading our minds to different conclusions.  This is now my best account of disagreement.  We disagree because we explain our own conclusions via detailed context (e.g., arguments, analysis, and evidence), and others' conclusions via coarse stable traits (e.g., demographics, interests, biases).  While we know abstractly that we also have stable relevant traits, and they have detailed context, we simply assume we have taken that into account, when we have in fact done no such thing. 

For example, imagine I am well-educated and you are not, and I argue for the value of education and you argue against it.  I find it easy to dismiss your view as denigrating something you do not have, but I do not think it plausible I am mainly just celebrating something I do have.  I can see all these detailed reasons for my belief, and I cannot easily see and appreciate your detailed reasons. 

And this is the key error: our minds often assure us that they have taken certain factors into account when they have done no such thing.  I tell myself that of course I realize that I might be biased by my interests; I'm not that stupid.  So I must have already taken that possible bias into account, and so my conclusion must be valid even after correcting for that bias.  But in fact I haven't corrected for it much at all; I've just assumed that I did so.


Show-Off Bias

It seems to me that self-identified smart people are biased towards complex or counter-intuitive answers to problems.  The reason is simple: complex or counter-intuitive answers allow one to show off intelligence.  So let's call this bias "show-off bias."

Axelrod’s Tit-For-Tat may provide a good example of show off bias.  Tit-For-Tat is a simple decision algorithm for an iterated Prisoner’s Dilemma.  In deciding whether to cooperate or defect, Tit-For-Tat states: just do whatever the other person previously did.  If the other cooperated, you cooperate.  If the other defected, you defect.  Tit-For-Tat. 
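To underline just how simple the strategy is, here is a minimal sketch of Tit-For-Tat in an iterated Prisoner's Dilemma.  The payoff numbers (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection) are the standard textbook values, assumed here for illustration rather than taken from Axelrod's actual tournament setup:

```python
# Minimal Tit-For-Tat sketch for an iterated Prisoner's Dilemma.
# Payoffs are the standard illustrative values (T=5, R=3, P=1, S=0),
# an assumption for this sketch, not Axelrod's exact tournament rules.

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; afterwards, copy the opponent's last move."""
    if not their_history:
        return COOPERATE
    return their_history[-1]

def always_defect(my_history, their_history):
    """A simple rival strategy, for comparison."""
    return DEFECT

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=200):
    """Return total scores for two strategies over repeated play."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection: (199, 204)
```

The whole decision rule is two lines; everything else is scaffolding to run the game.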

The algorithm works surprisingly well.  Wikipedia tells me that “tit for tat was the most effective, winning in several annual automated tournaments against (generally far more complex) strategies created by teams of computer scientists, economists, and psychologists.”

Why didn’t these smart scientists think of Tit-For-Tat?  They probably did, or could have.  But something made Tit-For-Tat unattractive to them.  I’m suggesting that part of what made Tit-For-Tat unattractive was a smart person’s natural desire to show off.

Let me relate two other possible examples of show off bias: one well known, and one personal.  I’ll begin with the personal anecdote.

Continue reading "Show-Off Bias" »


Thinking Helps

From a paper published in 2005:

Most people believe that they should avoid changing their answer when taking multiple choice tests.  Virtually all research on this topic, however, has suggested that this strategy is ill-founded: Most answer changes are from incorrect to correct, and people who change their answers usually improve their test scores.  Why? … Changing an answer when one should have stuck with one's original answer leads to more "if only …" self-recriminations … [making such events] more memorable.
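A toy simulation, with made-up numbers, of how that memory asymmetry can flip the perceived statistics: even when most changes help, weighting regretted changes more heavily in recall makes changing look harmful.  Both parameters below are invented for illustration:

```python
import random

# Toy model (made-up numbers): suppose answer changes improve the answer
# 55% of the time, but a regretted change (right -> wrong) is three times
# as memorable as a relieved one (wrong -> right).
random.seed(0)
P_IMPROVE = 0.55      # assumed true rate of wrong -> right changes
REGRET_WEIGHT = 3.0   # assumed extra memorability of right -> wrong

changes = [random.random() < P_IMPROVE for _ in range(10_000)]
true_rate = sum(changes) / len(changes)

# "Remembered" rate: each bad change counts as if it happened three times.
remembered_good = sum(changes)
remembered_bad = REGRET_WEIGHT * (len(changes) - sum(changes))
recalled_rate = remembered_good / (remembered_good + remembered_bad)

print(f"true improvement rate:     {true_rate:.2f}")      # ~0.55
print(f"recalled improvement rate: {recalled_rate:.2f}")  # ~0.29
```

With these assumed numbers, changes that actually help a majority of the time are remembered as helping less than a third of the time, which is enough to sustain the folk belief.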


All Are Skill Unaware

The blogosphere adores Kruger and Dunning's '99 paper "Unskilled and Unaware of It".  Google blog search lists ten blog mentions just in the last month.  For example:

Perhaps the single academic study most germane to the present election … In short, smart people tend to believe that everyone else "gets it." Incompetent people display both an increasing tendency to overestimate their cognitive abilities and a belief that they are smarter than the majority of those demonstrably sharper. 

This paper describes everyone's favorite theory of those they disagree with: that they are hopelessly confused idiots, unable to see that they are idiots; no point in listening to or reasoning with such fools.  However, many psychologists have noted that Kruger and Dunning's main data are better explained by simply positing that we all have noisy estimates of our ability and of task difficulty (a toy simulation follows this excerpt).  For example, Burson, Larrick, and Klayman's '06 paper "Skilled or Unskilled, but Still Unaware of It":

Continue reading "All Are Skill Unaware" »
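Here is that toy simulation of the noisy-estimates account, modeling just the ability half of it, with all parameters invented.  Every agent judges its own ability from a noisy signal, with no metacognitive deficit anywhere, yet quartile averages reproduce the familiar "unskilled and unaware" pattern through regression toward the mean:

```python
import random

# Toy simulation (assumed parameters) of the noisy-estimates account:
# everyone judges their own ability from a noisy signal, with no special
# blindness among the unskilled. Regression toward the mean then makes
# low performers look overconfident and high performers underconfident.
random.seed(0)
N = 20_000
skill = [random.gauss(0.0, 1.0) for _ in range(N)]
signal = [s + random.gauss(0.0, 1.0) for s in skill]  # noisy self-assessment

def percentile_ranks(xs):
    """Map each value to its percentile rank (0 to 100) within the list."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = 100.0 * r / (len(xs) - 1)
    return ranks

true_pct = percentile_ranks(skill)
perceived_pct = percentile_ranks(signal)

for q in range(4):  # quartiles by actual skill
    lo = 25 * q
    hi = 25 * (q + 1) if q < 3 else 100.1  # include the very top rank
    idx = [i for i in range(N) if lo <= true_pct[i] < hi]
    actual = sum(true_pct[i] for i in idx) / len(idx)
    perceived = sum(perceived_pct[i] for i in idx) / len(idx)
    print(f"skill quartile {q + 1}: actual {actual:5.1f}, perceived {perceived:5.1f}")
# Bottom quartile perceives itself well above its actual percentile, and
# the top quartile below it -- pure noise, no self-delusion required.
```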


Positive vs. Optimal

I’ve been thinking a little lately about the difference between doing something useful, and doing the most useful thing. The latter is a lot harder, yet a lot more productive. I wonder if this is a basic area of human irrationality. I think you can classify a lot of the bad arguments that get made for things like the bailout of banks, or of car companies, as people saying “Here is why this money would help these companies”, and missing out on “But it would help the rest of the world (like, companies that are profitable) even more”.

Normally I rail against zero-sum thinking, the belief that we’re just dividing up a fixed pie. But in the short-term, the inputs to producing happiness are constrained. I only have 24 hours in the day. The GDP of the US is only so much. We’re investing those resources to produce even more resources – but the inputs at this stage are fixed. We can’t invest in every positive-sum project. When you are figuring out what to do with these constrained inputs, you need to balance your use against *every other possible use* (or more specifically, the best alternative use). (This is nerve-wracking and tortuous, but you don’t actually have to do it that well – if you just do a decent job, you’ll be doing way better than someone who just does whatever positive projects happen to catch their attention.)
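A toy sketch of the difference, with all project names and numbers invented: under a fixed budget, funding any positive-value project as it arrives is dominated by ranking uses against the alternatives.  Sorting by value per unit cost is a simple heuristic here, not an exact solution to the underlying knapsack problem, but it makes the point:

```python
# Toy numbers, invented for illustration: projects as (name, cost, value).
projects = [
    ("bailout A", 7, 9),   # positive value, but poor value per unit cost
    ("project B", 3, 8),
    ("project C", 2, 6),
    ("project D", 4, 7),
    ("project E", 1, 2),
]
BUDGET = 10

def fund_in_arrival_order(projects, budget):
    """Fund any positive-value project that still fits: 'positive' thinking."""
    total, remaining = 0, budget
    for _name, cost, value in projects:
        if value > 0 and cost <= remaining:
            total += value
            remaining -= cost
    return total

def fund_best_first(projects, budget):
    """Rank uses by value per unit cost: 'optimal' thinking."""
    total, remaining = 0, budget
    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    for _name, cost, value in ranked:
        if cost <= remaining:
            total += value
            remaining -= cost
    return total

print(fund_in_arrival_order(projects, BUDGET))  # 17: funds A, then B
print(fund_best_first(projects, BUDGET))        # 23: funds C, B, E, D
```

Every project above has positive value, so "does this do net good?" approves all of them; only comparing against the best alternative uses reveals that the big salient bailout is the one to skip.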

I think this connects to important topics at the micro and the macro level. Personal productivity techniques like Eat That Frog or Big Rocks are based on fighting our inclination to do what seems urgent, and instead doing what is optimal. I know I have a lot of trouble getting distracted by small urgent things, rather than doing the core, important work, and it seems to be a general problem. Our intuition is a terrible task prioritizer. And much of the erroneous analysis about the benefits of regulation has to do with ignoring the invisible (the best alternative use of the resources), as Henry Hazlitt so eloquently writes. Our intuition seizes on the visible consequences, and has trouble seeing the subtle, distributed, unrealized, un-proposed alternatives.

Which suggests a technique for overcoming this, at both the personal and professional levels. Try to always present alternatives. Reify the other options – or your mind will focus on whether your proposal does net good, rather than the most good with its limited resources.


Conformity Shows Loyalty

"The world has too many people showing too much loyalty to their groups.  That is why I’m so proud to be member of ALU, anti-loyalists united, where we refuse to show loyalty to any other groups. My local chapter just kicked out George for suspicion of showing loyalty to California, and we chastised Ellen for expressing doubts about the latest anti-loyalty directives from headquarters.  We’ll only lick loyalty by showing we are united behind our courageous ALU leaders.  All hail ALU!"

Sounds pretty silly, right?  But I hear something pretty similar when I hear folks say they are proud to be part of a group that fights conformity by pushing their unusual beliefs.  Especially when such folks seem more comfortable claiming their beliefs contribute to diversity than that they are true.   

We use belief conformity to show loyalty to particular groups, relative to other groups.  We rarely bother to show loyalty to humanity as a whole, because non-humans threaten us little.  So we rarely bother to try to conform our beliefs with humanity as a whole, which is why herding experiments with random subjects show no general conformity tendencies.

Our conformity efforts instead target smaller in-groups, with more threatening out-groups.  And we are most willing to conform our beliefs on abstract ideological topics, like politics or religion, where our opinions have few other personal consequences.  Our choices show to which conflicting groups we feel the most allied.   

You just can’t fight "conformity" by indulging the evil pleasure of enjoying your conformity to a small tight-knit group of "non-conformists."  All this does is promote some groups at the expense of other groups, and poisons your mind in the process.  It is like fighting "loyalty" by dogged devotion to an anti-loyalty alliance.

Best to clear your mind and emotions of group loyalties and resentments and ask, if this belief gave me no pleasure of rebelling against some folks or identifying with others, if it was just me alone choosing, would my best evidence suggest that this belief is true?  All else is the road to rationality ruin. 
