Anne, yes of course it is fine to ignore an *inaccurate* stereotype.

Martin said: "A stereotype does not concern a person, it concerns a group."

Exactly. And I don't think it's somehow more honest to ignore relevant data about *an individual* in favor of applying a stereotype.

Stereotyping might be an acceptable tool in some cases (for instance, the case in which a large man is running toward you wielding a knife -- you really don't want to be giving him the benefit of the doubt at that point!) but it is a mistake to think that stereotypes somehow have privileged access to truth. Expediency and utilitarian value in a survival situation are not the same thing as greater coherence with reality.

In some cases, groups of people are stereotyped inaccurately because they're in a position of lesser power than another group and because the group-in-power has a vested interest (though not necessarily one even known to all group members) in stereotyping an outgroup.

Slavery would be an example of this; it was important, for instance, to maintain the lie that people belonging to the "slave race" were intellectually inferior or not really people at all. If we were having this discussion pre-Civil War, would someone making a sincere effort to overcome bias really suggest that we should, by default, believe that members of an enslaved race really were inferior?

Stereotyping as a habit most certainly compels people to ignore relevant information about individuals *and* groups when people insist on holding onto obsolete and/or tainted information -- which is why we have things like the Civil Rights movement, feminism, and the men's movement.

Robin,

"how can it be just to ignore relevant information about a person?"

Well, I don't suppose it is; but I don't think my position is particularly controversial. A stereotype does not concern a person, it concerns a group. (I am assuming that we agree that by stereotype we mean a general observation, applied to a definable group, that is true given some statistical qualification.)

Such an observation might be useful in some context (for example, consideration of some general policy) but it might be entirely inappropriate in another context. It is almost impossible to discuss this in the abstract: as so often, it all depends on the numbers and the specific circumstances. There is a very strong correlation between being male and having testicles. I am a male, and it is therefore reasonable to assume that I have testicles, but there might still be circumstances (perhaps best left to the imagination) in which it was appropriate to make a specific enquiry as to the facts in my case. My gender also makes me more likely to be a violent criminal. If I were on trial for battering my fishmonger, this would be a relevant, but unjust, consideration at my trial, because the other facts (notably whether I committed the offence or not) would be overwhelmingly more relevant.

A general observation may be true, but once you have narrowed your field of observation to a specific individual, much better information becomes available upon which to base a conclusion -- at which point the stereotype is unlikely to assist in the objective assessment of that information, and in practice seems to have the opposite effect.
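
To put rough numbers on the trial example (a toy sketch; the base rates and likelihoods below are invented, not real statistics): the group correlation only sets a prior, and even modest case-specific evidence moves the conclusion far more than that prior does.

```python
# Toy Bayesian update: a group base rate (the "stereotype") versus
# case-specific evidence. All numbers here are invented for illustration.

def posterior(prior, p_evidence_if_guilty, p_evidence_if_innocent):
    """P(guilty | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_guilty
    denominator = numerator + (1 - prior) * p_evidence_if_innocent
    return numerator / denominator

prior_male = 0.02    # hypothetical base rate of guilt for men
prior_female = 0.01  # hypothetical base rate of guilt for women

# Case-specific evidence (an eyewitness, say) that is 20 times more
# likely if the defendant is guilty than if innocent.
p_g, p_i = 0.80, 0.04

print(posterior(prior_male, p_g, p_i))    # ~0.29
print(posterior(prior_female, p_g, p_i))  # ~0.17
# The case-specific evidence dominates the group prior, which is the point:
# at trial, the other facts are overwhelmingly more relevant.
```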

Well, here's an example:

http://www.psychologicalsci...

The author of this article recommends, "...that we test our interpretations for bias by peeling off the labels, as I’ve done here. If our interpretations make little sense, then our science is biased."

If "labels" are similar in this context to "stereotypes", then this article could be relevant to this discussion. When it comes to experimental science, stereotyping of groups of people (according to psychological diagnosis and gender in this case) can influence the interpretation of results in ways that can be detrimental to a clearer scientific understanding of a given phenomenon.

You might suggest that peeling off the labels, as the aforementioned author suggests, is actually a form of ignoring relevant information -- however, I would disagree that this is necessarily the case, since one cannot entirely discount the chance that leaving the labels applied could lead to a misinterpretation of the results (because at least some of the information associated with a given label could be obsolete). Perhaps ideally, scientists should be able to evaluate the data in full knowledge of what the applicable stereotyping labels suggest, but at the same time understand that the labels could be misleading them and interpret accordingly. But perhaps that's what the initial post was suggesting in the first place.
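
For what it's worth, here is a minimal sketch of what "peeling off the labels" might look like in an analysis workflow (the records, fields, and group codes are all hypothetical; only the workflow matters):

```python
# Sketch of a label-blinded analysis: interpret the data before knowing
# which group is which, then reveal the key afterwards.

records = [
    {"group": "A", "score": 4.1},
    {"group": "B", "score": 3.9},
    {"group": "A", "score": 4.4},
    {"group": "B", "score": 4.0},
]

# Step 1: replace real labels with opaque codes before interpretation.
codes = {}
blinded = []
for r in records:
    code = codes.setdefault(r["group"], f"group_{len(codes) + 1}")
    blinded.append({"group": code, "score": r["score"]})

# Step 2: analyze the blinded data.
def group_means(rows):
    totals, counts = {}, {}
    for r in rows:
        totals[r["group"]] = totals.get(r["group"], 0.0) + r["score"]
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

print(group_means(blinded))

# Step 3: reveal the key only after the interpretation is written down.
print(codes)
```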

Anne, almost all information is "just" about statistical correlations between characteristics. If you propose that humans are prone to ignore certain info when presented with stereotype info, and that this justifies discouraging the use of stereotype info, then until we have data showing this tendency, I'd have to classify this as a seen bias justified by an unseen one.

What about the potential for the application of a stereotype to obscure relevant information about a person? A stereotype isn't relevant information about *a person*; it's information about a statistical correlation between one or more apparent characteristics of a person and other characteristics that might not be immediately apparent.

Martin, how can it be just to ignore relevant information about a person?

In the statement "To make assumptions about an individual based on a stereotype is wrong", I meant wrong in the sense of "unjust" rather than "false".

Long established and valuable principles such as presumption of innocence and equality under the law are based on the idea that an individual, possessed of free will, should be given the benefit of the doubt.

This does not in any way imply that the formulation or observation of stereotypes is not useful.

Ray, if we are clever enough then for any X and Y we can make up a story whereby more X leads to more Y. But since there are many other processes going on, it is important to actually look at data to see what the net effect of X on Y seems to be.
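
As a toy illustration of what "look at data" means here (the numbers are simulated, not a real dataset, and the -0.3 slope is invented): fit the net relationship and check whether it even has the sign the story predicts.

```python
# Toy check of a claimed "more X leads to more Y" story against data.
# The data here are simulated; with real data you would load measurements.
import random

random.seed(0)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
# Suppose the true net effect is slightly negative, despite a plausible
# story predicting a positive one.
y = [-0.3 * xi + random.gauss(0, 1) for xi in x]

mean_x = sum(x) / n
mean_y = sum(y) / n
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)

print(f"estimated net effect of X on Y: {slope:+.2f}")
# A confident story predicting a positive net effect fails this check.
```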

Ray, I recall there being a study that showed that techniques used to increase students' confidence, while they did increase self-reported confidence, actually hurt performance a little bit. I don't think that the causal relationship between high confidence and success has been established.

In any case, Robin's point is that the overconfidence bias is only exacerbated by higher self-confidence. (This has been established by studies.) There are no biases that are decreased by higher self-confidence.
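
As a rough sketch of how such calibration studies score overconfidence (the responses below are invented for illustration): compare stated confidence with the fraction of answers that actually turn out correct.

```python
# Sketch of a calibration check: stated confidence vs. actual accuracy.

# (stated probability of being right, whether the answer was in fact right)
responses = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True), (0.7, True),
]

by_level = {}
for conf, correct in responses:
    hits, total = by_level.get(conf, (0, 0))
    by_level[conf] = (hits + int(correct), total + 1)

for conf, (hits, total) in sorted(by_level.items()):
    accuracy = hits / total
    flag = "  <- overconfident" if accuracy < conf else ""
    print(f"stated {conf:.0%} confident, actually right {accuracy:.0%}{flag}")
```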

"Vigorous thinking would lead people to consider other viewpoints, and even question themselves. This will eventually lead to a more accurate set of assumptions"

It's not clear that people can consistently increase the accuracy of their set of assumptions by considering other viewpoints.

"Taking your reply at face value, the more bias free I become, the less confident I would have to be."

It looks like you might be conflating self-confidence with credulity. "Less confident" might mean "more skeptical", or "more humble". I do think that, on the balance, biases tend to dispose people towards higher belief in propositions rather than lower, but I don't think it follows at all that less biased = more humble. If anything, it would be the opposite.

Robin,

More confidence necessarily equals more biases? Really?

Less confidence equals fewer biases?

That doesn't work.

Lazy thinking leads people to accept the "consensus" view without any serious consideration of the matter. Thus personal biases are built up over time that are largely incorrect, or at least based on faulty premises.

Vigorous thinking would lead people to consider other viewpoints, and even question themselves. This will eventually lead to a more accurate set of assumptions -- biases even, as I don't think it possible for the human computer to be devoid of bias.

Lazy thinking stems from a lack of confidence, as in "Well, I read in a magazine that X causes Y -- it must be true." They don't have the confidence to question what they see in print or hear in the media.

Vigorous thinking requires a certain amount of self-confidence in order to ask questions.

Taking your reply at face value, the more bias free I become, the less confident I would have to be. This would have the negative side effect, however, of leaving me unable to question anyone, and would thus produce a non sequitur in that I would eventually have to accept everyone else's biases as fact.

I think there's a hint of a bias in your thinking -- one that tacitly assumes confidence equals, to some degree, over-confidence. If one places confidence on a straight sliding scale, the supremely confident would of course be all the closer and more prone to over-confidence.

But what if that straight scale is wrong? Accepting that people have a tendency towards self-deception, their initial reactions may be those of a self-confident person who wasn't willing to consider other factors. But such over-confidence stems from a lack of vigorous thinking at a more basic level, which I believe has more to do with a lack of confidence. This makes the average person a rickety building of poorly planned walls on top of uncertain foundations.

And yes, the supremely confident person can be easily over-confident as well, but this doesn't make confidence in and of itself a negative.

Ray, the data suggest that overconfidence is more common than underconfidence. So while you might help your students in life by making them more confident, they are probably more biased nonetheless.

This is difficult to answer briefly. My comment yesterday seems to have been misunderstood, in that it appeared I was qualifying a bias as not a bias if the person holding it was of a “good” nature.

That was not my intention at all; I merely meant to point out the usefulness of stereotyping, and how it doesn’t always qualify as a pejorative if used appropriately.

“To make assumptions about an individual based on a stereotype is wrong, even if the stereotypical view is broadly accurate.”

This is just foolish, and is only defensible in the largely innocuous world of theory. The closer a decision hits to home, the more that we would find such a statement violated, even by the person who made it.

“I say honesty demands we stereotype people, instead of giving them 'the benefit of the doubt.'”

Well, if we have to lean to one extreme or the other, then of course, it is only prudent to engage in stereotyping, but there is a balance to a normal, productive life.

It has been my experience that the more intelligent and honest a person is, the more they tend to give the benefit of the doubt. It is a qualified benefit of doubt, but nonetheless it is often given.

I believe it is the confidence they derive from both their intelligence, and their self-image that enables them to give others a break.

Self-image, or self-confidence is probably the most important factor among all of the personal factors involved. While I taught high school, the one thing I drilled into my students more than anything else was the need for self-confidence. That they had to do their reasonable best in everything in order to reap as much confidence as possible. Only self-confidence keeps away the hobgoblins of self-doubt, and herd-mentality.

Of course, one can be over-confident, but this butts up against personality types, and many other factors that are difficult to address in brief.

Hello Robin,

> " To decide if you are less biased than average, you must consider the sorts of reasons that will occur to others, and ask if your reasons are better than those. "

This is about the most pointless exercise any person could attempt.

1. To begin with, a person cannot ever truly know his/her own biases. Our biases are our reality; therefore they are in large measure imperceptible.

2. Secondly, other people have biases -- of course they do, otherwise they would not be human -- but how is it ever possible to know whether another person has more or less bias than you do? In order to perform the comparison legitimately you must first know yourself perfectly. But all the time invested in gaining a perfect knowledge of your own self is time not available for gaining an accurate knowledge of someone else's biases.

3. Thirdly, gaining the advantage (regarding relative biases) over another individual doesn't amount to much. Even if you are less biased than someone else, that alone doesn't prevent you from being more of a fool than that person. Geniuses have a long history of making mistakes (including Aristotle, Newton and Einstein). If humankind's great intellects can make errors, it follows naturally that the average human is wrong about a great many things.

4. Fourthly, the question of Truth (or Accuracy) is separate from the question of Bias. Biased people can know the truth just as objective minds can make errors. So determining that you have less bias than someone else does not guarantee that your opinions are better than that other person's.

5. Finally, the whole bias exercise looks like a technique for people to compliment themselves. As such, it serves an emotional self-esteem purpose more than a truth-determining function.

****

As to the question of creationism vs. evolutionism, there are plenty of fools and lots of error on all sides of the conflict. I think that this conflict serves political goals rather than any real interest in truth on either side. As for myself, I don't perceive any dispute between creationism and evolution whatsoever. I accept that God exists, I accept that the Universe is ancient, and I accept that life evolves. So there is no longer any conflict between creationism and evolution. Other people may have other opinions, but their opinions don't appear particularly relevant to my views. I'd rather disagree with everyone than choose a side.

Perry, I am not suggesting that experimentalists adjust their error estimates based on how far away their estimate was from other estimates. My suggestion is for all experimentalists to respond to the data showing their error estimates were too small, and increase their estimates of systematic error, perhaps adjusting for any strong reasons to think they are different from average on this issue.

Robin, you say: "Your advice seems to be to just ignore the overconfidence bias in previous measurements as irrelevant."

No. My advice is to give the reader all the same information you have, and let them be aware of how you came to your conclusions. I think that you are assuming there is overconfidence bias here, but I contend there is no real way to determine that by objective means. When you are worried that you might be measuring inaccurately but you do not objectively know whether that is true or not, you should tell the reader of your paper of your concern instead of inventing a larger confidence interval from thin air without any solid basis for doing so.

You note, accurately, that "experimentalists usually have no sound math method for estimating systematic error, but they must estimate it nonetheless." That is not entirely true. What an experimentalist generally does is reason out an estimate for each of the components contributing to the measurement error. Now, if the actual error appears to be larger than those estimates suggest, should the experimentalist then cover for this by artificially inflating one of those components, or by spreading the extra error among multiple components, sweeping under the rug the fact that his educated guesses about the various factors were wrong? I think that's a form of lying. Instead, the honest thing to do is to say "Our team came up with the following estimates for error based on the following sorts of reasoning. In the end, our calculated number was further from the consensus figure than one would have expected. This might be because we are underestimating error, or it might be because of a rare fluke in our measurements, or we might in fact have the right number and everyone else might be wrong. Here is everything we did, and here are all our numbers and calculations, and posterity gets to judge what happened."
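
To make the component-by-component step concrete, here is a generic sketch (with made-up component names and values, not any particular experiment's error budget) of how independently estimated systematic components are commonly combined in quadrature -- and why quietly inflating one of them after the fact is exactly the kind of undocumented adjustment described above:

```python
# Sketch of combining independently estimated systematic error components.
# Component names and values are hypothetical.
import math

components = {
    "calibration": 0.010,
    "detector_alignment": 0.006,
    "background_model": 0.008,
}

def total_systematic(comps):
    """Combine independent components in quadrature."""
    return math.sqrt(sum(v ** 2 for v in comps.values()))

print(f"reported systematic error: {total_systematic(components):.4f}")

# Inflating one component after seeing the result changes the reported
# total with no documented basis for the new value.
components["background_model"] = 0.020
print(f"after ad hoc inflation:    {total_systematic(components):.4f}")
```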

What you are suggesting instead, Robin, is that the experimental team do precisely what we would not want, which is to look at their numbers after the fact and adjust them to get the "correct" answer. That's not the way I want scientists to behave. I'd prefer that they said "Our number is X plus or minus Y, which seems wrong, but that's what we got. Here is how we got it; maybe we did something wrong and maybe we didn't; let us know if you spot our mistake." That is the intellectually honest way to go about things.
