Benefit of Doubt = Bias

One dictionary defines "to give the benefit of the doubt" as

To believe something good about someone, rather than something bad, when you have the possibility of doing either.

That is, assume the best.  This may be better than assuming the worst, but honesty requires you to instead remain uncertain, assigning chances according to your evidence.   Does this mean we should stereotype people?  After all, M Lafferty commented

To make assumptions about an individual based on a stereotype is wrong, even if the stereotypical view is broadly accurate.

To the contrary, I say honesty demands we stereotype people, instead of giving them "the benefit of the doubt."  Bryan Caplan has emphasized to me that most stereotypes are on average accurate:

Obviously, every stereotype has exceptions; stereotypes are useful because they are better than nothing, not because they are infallible.

For more, see John Ray’s "Do We Stereotype Stereotyping?"   I suspect people justify the usual dictum against stereotypes as countering a human tendency to assume the worst about outsiders.  But until I see evidence of this, I’ll classify this as a seen bias justified by an unseen one.

Consider Perry Metzger’s recent comment:

What you are saying, essentially, is that after seeing that a number of estimates of some constant do not fall within each other’s error bars, physicists should then increase the size of the error bars. I don’t think that is reasonable.  Not all methods of measurement are identical, and different groups use different instruments, so the systematic errors made by different groups are different. That means that it is not necessarily the case that all groups are underestimating their errors — in fact, it is most likely that only some of them are underestimating error.

Yes, a set that is biased overall may include subsets which are less biased.  And by adjusting to correct for the overall bias we may increase the error in the less-biased subset.  Nevertheless, unless we can distinguish the subsets that are more vs. less biased, we must accept this outcome.

The general principle is:  you need a better than average reason to think something is better than average.   A physicist might say "I don’t need to adjust as much because I’m measuring voltage, where systematic bias is less a problem," or "I’m from Harvard, where we are more careful."  But he needs to actually have evidence that there is less bias with voltage or at Harvard; no fair just giving himself "the benefit of the doubt." 
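The adjust-everyone logic can be illustrated with a toy simulation (all numbers are mine, purely illustrative): if half of a set of measurement groups understate their error bars and we cannot tell which half, widening every bar moves overall coverage toward its nominal level, even though it over-widens the bars of the honest subset.

```python
import random

random.seed(0)

TRUE_VALUE = 0.0
N_GROUPS = 1000
Z98 = 2.33  # z-score for a ~98% confidence interval

# Toy model: every group reports a 98% interval, but half of them
# underestimate their total error by a factor of two.
groups = []
for i in range(N_GROUPS):
    honest = (i % 2 == 0)
    reported_sigma = 1.0 if honest else 0.5   # true sigma is 1.0 for everyone
    estimate = random.gauss(TRUE_VALUE, 1.0)
    groups.append((estimate, reported_sigma))

def coverage(scale):
    """Fraction of reported intervals, widened by `scale`, covering the truth."""
    hits = sum(abs(est - TRUE_VALUE) <= scale * Z98 * sig
               for est, sig in groups)
    return hits / N_GROUPS

print(f"nominal 98% intervals actually cover: {coverage(1.0):.0%}")
print(f"after widening every bar by 1.5x:     {coverage(1.5):.0%}")
```

Under these assumptions the unadjusted intervals cover the truth far less often than 98% of the time, and a uniform widening improves overall calibration at the cost of making the honest groups' bars too wide, which is exactly the trade-off described above.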

Furthermore, since our minds are good at selectively attending to factors favoring us, we must realize that others’ minds will attend to other factors, such as their years of experience or their IQ.   To decide if you are less biased than average, you must consider the sorts of reasons that will occur to others, and ask if your reasons are better than those.  Furthermore, if you are better in general at coming up with reasons for things, you must count that against yourself. 

Finally, consider Eliezer Yudkowsky’s complaint about modesty:

How can you know which of you is the honest truthseeker, and which the stubborn self-deceiver?  The creationist believes that he is the sane one and you are the fool.  Doesn’t this make the situation symmetric around the two of you?  … "But I know perfectly well who the fool is.  It’s the other guy.  It doesn’t matter that he says the same thing – he’s still the fool."  This reply sounds bald and unconvincing when you consider it abstractly.  But if you actually face a creationist, then it certainly feels like the correct answer. … Those who dream do not know they dream; but when you wake you know you are awake.   

The key question is: what concrete evidence can you cite that you are more sane than a creationist, or more awake than a dreamer?   Perhaps you know more biology than a creationist, and you are more articulate than a dreamer.  But the mere feeling that you are right does not justify giving yourself "the benefit of the doubt." 

  • ChrisA

    Wow Robin, are you in a contest with Hal to get the most responses? Peak oil and creationism….

    I am not sure that you can really have a truth-seeking dialogue with a creationist; Dawkins has given up, and he has created some real crackers of arguments. But vestigial organs (appendix, wisdom teeth etc) are pretty concrete examples that we evolved from herbivores.

    I am not sure you can have a dialogue with a dreamer either, but if you wanted to convince someone they were not in a dream, punch them on the nose.

  • conchis

    The link to the John Ray paper doesn’t appear to work. Here is a functioning url:

  • ericgeorge

    Well, this post touches a lot of issues, so I’m mainly addressing the ending part about symmetric accusations of bias.

    Truth is a civil trial, not a criminal one: a preponderance of the evidence vs beyond a reasonable doubt. Much knowledge is inarticulable (especially for those not having to make formal arguments as part of their day job) and ‘feelings’ are a result of this knowledge, it is what we call intuition. Nevertheless, a theory or belief should be preferred to another if it is more validated (implications, especially indirect ones, are shown true) and less invalidated (same shown false). Other criteria are internal consistency (truth is consistent, falsity usually not), parsimony, and even beauty (eg, e=mc^2).

    So while feelings aren’t sufficient, even necessary, for a belief preference, they are usually relevant, and often the only information one has.

  • Perry E. Metzger

    Two topics are showing up here at once. 🙂

    On topic one, I will point out that many groupings try hard to give people “the benefit of the doubt” for reasons of keeping social interactions functional. Wikipedia, for example, has a rule that one should assume other Wikipedians are working in good faith. Is this always true? Of course not. However, if you don’t assume good faith, you end up in debilitating arguments too often instead of resolving problems peacefully, so the assumption, even though it is known false, is productive. Life is full of situations where “benefit of the doubt”, though inaccurate, none the less serves everyone’s interests.

    Similarly, I must add again on topic two that if, say, 50 groups have measured the fine structure constant to 98% confidence, and six of the groups are lying way outside the bounds instead of just the expected one group, the solution is not for all fifty groups to assume greater systematic error. It is far, far more likely that most groups got it largely right and only a small number grossly underestimated their systematic error. If everyone assumed greater systematic error, you would just throw away useful information instead of improving accuracy. (This is just a compact way of re-expressing what I’ve expressed earlier in the thread you were quoting.) The false assumption you are making in your recommendation is that the “metaerror distribution” — the error in estimating systematic error — is uniformly distributed among the groups. I don’t think there is good evidence for that, and unless the metaerror distribution is uniform, there is no cause to assume that the outliers and those in the middle of the scatter chart are equally likely to have made mistakes in their systematic error estimates.
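As a toy check of this argument (the 50-groups, handful-of-outliers setup is Perry's; the model, in which the underestimating groups' true scatter is five times what they report, is my own assumption), one can simulate how often the flagged outliers really are the groups that underestimated their error:

```python
import random

def outlier_fraction(trials=500, n_groups=50, n_bad=6, seed=1):
    """Toy model: n_bad of n_groups report error bars 5x too narrow.
    Returns the fraction of flagged outliers that come from those
    underestimating groups."""
    rng = random.Random(seed)
    flagged = flagged_bad = 0
    for _ in range(trials):
        # Each group measures the same true value (0.0) and reports sigma = 1.
        estimates = [(rng.gauss(0.0, 5.0 if i < n_bad else 1.0), i < n_bad)
                     for i in range(n_groups)]
        median = sorted(e for e, _ in estimates)[n_groups // 2]
        for e, bad in estimates:
            if abs(e - median) > 2.33:   # outside the reported ~98% band
                flagged += 1
                flagged_bad += bad
    return flagged_bad / flagged

print(f"share of outliers that are truly bad groups: {outlier_fraction():.0%}")
```

Under these particular assumptions most flagged outliers are indeed the underestimating groups, which supports the comment's point; whether real metaerror distributions are this convenient is a separate empirical question.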

  • “the ‘metaerror distribution’ — the error in estimating systematic error — is uniformly distributed among the groups. I don’t think there is good evidence for that”

    I don’t recall any evidence of the bimodal distribution you’re talking about being brought up in the Academic Overconfidence thread, nor any discussion of distribution at all.

    “unless the metaerror distribution is uniform, there is no cause to assume that the outliers and those in the middle of the scatter chart are equally likely to have made mistakes in their systematic error estimates.”

    That’s not true. There are plenty of non-uniform distributions, even ones where there are obviously many too-narrow confidence intervals, that don’t give you enough information to identify outliers.

  • OT: It’d be nice to see a post about in-group bias. Also, I just wrote one:

  • Bruce G Charlton

    The debate between biology and creationism is not really about bias – it is about whether the question of ‘how humans got here’ is a part of biology or part of theology.

    Since my job is ‘Reader in Evolutionary Psychiatry’ you can guess my opinion; but the question cannot be settled by localized debate (as Dawkins tries to do). The question can only be settled by a kind of global consideration as to the overall benefit of including the question in biology versus theology.

    IMO biology will almost certainly win this, because natural selection is the core concept of biology, and the modern world cannot do without biology. Theology will just have to find (evolve) a way to do without their current explanation of how humans got here – but it won’t be easy for them.

  • Chris, do people in dreams never get punched in the nose?

    Conchis, thanks for the correction.

    Eric, I didn’t say feelings were invalid evidence. The point is that both sides in a disagreement have the feeling that they are right. So having such a feeling is not an indicator of being right in a disagreement.

    Perry, are you proposing that those who run experiments whose estimates are outliers should as a result increase their estimate of the size of their systematic error?

    Pdf, your post is thoughtful and relevant.

  • Robin, thanks.

    Also, the EconLog post on stereotype accuracy is a poor one to reference. Their argument based on Myers-Briggs result distributions is weak, and their link to John Ray’s article is broken. I tried to find the article by googling, and ran across Ray’s blog. The links there to his papers are also all broken (even though they’re much more recent), and from the evidence of his blogroll he’s a fairly extreme partisan.

  • Matthew

    Pdf’s post is a must-read, it bears very directly on the entire project of this blog.

  • Paul Gowder

    There’s a difference between giving oneself the benefit of the doubt and giving others the benefit of the doubt. Pointing out the reasons why one might be biased in favor of one’s self (the end of this post) is no reason to cast suspicion on the former.

    But perhaps more importantly — Robin, what sort of evidence would you like for the unseen bias of a tendency to believe the worst about outsiders before you classify it as a seen one? I mean, the world has a vast history of nationalism, racism, and xenophobia, and there’s huge scads of psychological evidence (give me a week to get back to my library and I’ll dig it up if need be) for the proposition that people generally have more averse attitudes toward out-groups than to in-groups. Isn’t that enough?

  • Matthew

    There was a great fMRI study on political partisans that got a lot of press last January. All about how different areas of the brain were used depending on whether you agreed or disagreed with a politician’s party.

    Of course, they probably photoshopped the brain scans. . . 😉

    I suggest that the results of the study are broadly applicable to all manner of ingroups and outgroups, not just political ones. Of course this research needs to be confirmed. . .

  • ChrisA

    I thought what you meant was how would you convince someone else they were not dreaming you. This is really a variation on solipsism. As you know, many clever people have suggested ideas against solipsism without, it seems, coming up with the killer logic to convince solipsists (can there be a plural of this word?) of their arguments. But I am willing to bet that the solipsist would become angry at me if I punched him on the nose, and would, at least for a while, forget about the theoretical possibility that I was a figment of his imagination. After all, a solipsist should not become angry at himself.

    If I was trying to convince myself that I was really awake or in a lucid dream (as opposed to debating with a solipsist), I would look for persistence of experience, the longer my experience lasted the more I would be convinced I was awake. I know from previous experience that dreams only last for a while, then I wake up.

  • The two issues here seem to be biases in how we perceive other people and biases in how we see ourselves. The overconfidence bias is well documented and causes us to evaluate ourselves too favorably in many areas. It should be possible to take standardized tests and calibrate our self-evaluations, at least in broad terms. A creationist who is able to produce accurately calibrated and well reasoned answers to a wide range of factual questions not specific to creationism would be more credible.

    Sounds like biases in terms of evaluating other people are more questionable and not as well supported by research. While we may evaluate others less favorably than ourselves, the bias is in terms of how we view ourselves rather than how we view others. That is where the correction should be applied.

  • John DePalma

    Underweighting base rate information as a bias:

    “In making probabilistic inferences perceivers ought to take account of general, broadly based information about population characteristics, and more specifically the prior probability of an event occurring. The tendency to under use, sometimes even ignore, such information is called the base rate fallacy.”

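The quoted definition can be made concrete with the standard worked example (the numbers below are the textbook illustration, not from the quoted passage): a diagnostic test that looks very accurate still yields mostly false positives when the base rate is low.

```python
# Illustrative numbers: a test with 99% sensitivity and a 5%
# false-positive rate, for a condition with a 1% base rate.
p_condition = 0.01
p_pos_given_condition = 0.99   # sensitivity
p_pos_given_healthy = 0.05     # false-positive rate

# Total probability of a positive result, then Bayes' rule.
p_pos = (p_pos_given_condition * p_condition
         + p_pos_given_healthy * (1 - p_condition))
p_condition_given_pos = p_pos_given_condition * p_condition / p_pos

# Ignoring the base rate suggests ~99%; Bayes gives roughly 17%.
print(f"P(condition | positive) = {p_condition_given_pos:.1%}")
```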

  • Matthew

    “While we may evaluate others less favorably than ourselves, the bias is in terms of how we view ourselves rather than how we view others. That is where the correction should be applied.”

    The relevance of this factor to truth-seeking occurs when we hear someone who says something in disagreement with our own present beliefs. In that instance we implicitly elevate our current understanding and denigrate theirs. So it doesn’t matter where the correction is applied, as we are comparing our (obviously correct) opinion to their (clearly mistaken) contrary viewpoint.

    In addition, there is an enormous tendency to discount and dismiss the opinions of others if we see them as belonging to an outgroup to us.

    For the large majority of people participating in this forum, this form of bias will be very marked when evaluating the experiences, opinions, and models offered by those who in some way are deemed to not be proper members of the “rationalist” or “scientific” tribes. Indeed I have read several blog entries here that basically amount to tribal rituals of ingroup cheerleading and outgroup excoriation, and why the people who are in the ingroup here are quite right to dismiss one particular set of outgroup concepts or another without due process or consideration, and how such dismissal is not truly a form of bias. If this project is going to succeed in its mission, this will have to be recognized. Bias, once actually seen as bias can then be seen through.

  • Jake Shannon

    Honesty requires a radical epistemological “agnosticism” in my opinion. Stereotyping, while a natural impulse, is not congruent with solid unbiased decision making.

    I look at it a lot like Value at Risk: sure, you have a time-series distribution of asset returns and you can make a decision based upon that information, but in a world characterised by volatility and uncertainty, the occasional and unpredictable deviations, while rare, cannot be dismissed as “outliers” or ancillaries, since their cumulative long-term impact is so large.

    Black swans are still swans…

  • Let’s simplify. The reason that, say, economists almost completely ignore the enormous amount of data available to them about the importance of IQ in economic life (data which, indeed, confirm stereotypes), is cowardly careerism. They don’t want to end up persecuted for their research, so economists (with a handful of honorable exceptions such as Garett Jones and Bryan Caplan) ignore IQ altogether.

  • Perry E. Metzger


    In answering your question, I would rather say that, if one discovers that one’s measurement appears to be far away from the consensus value, one should carefully and skeptically examine one’s apparatus and methods, and do additional trials, and one should be especially careful in documenting one’s method, data and calculations in any publication.

    However, I don’t think one should arbitrarily increase one’s estimate of systematic error. It is best to keep estimates of such errors as objective as possible, rather than simply guessing. The best thing to do for the prospective reader of your work is to be as careful as possible and to present as much of your information as you can.

    I think that simply arbitrarily picking larger error bars when you don’t have a mathematically sound methodology to dictate how much larger they should be is as much of a statistical sin as pretending that your measured standard deviation tells you all you need to know about your error band.

  • People tend to suspect others of what they are prone to be guilty of. Thieves suspect everyone of being a thief, etc.

    The flip side is that people who are conscious of doing the right thing, of trying to consider their neighbor first, also tend to give people the benefit of the doubt, though not necessarily to a fault.

    Consider the otherwise nice, and honest person who in traffic is an inconsiderate jerk. Traffic is merging for some construction or something, and the jerk speeds forward to cut in line, nearly causing an accident, and then seems surprised that others in traffic are angry with them, even to the point of wanting to fight.

    That person is more likely to assume the worst in others.

    Consider the other kind of driver. The guy that would rather turn right, and go around than to attempt a left hand turn that is going to cause something of a disturbance.

    That guy is going to give people the benefit of the doubt, again, not necessarily to a fault.

    The most successful businessmen that I personally know have a fantastic balance of being able to see how they ought to stereotype, but they know when to cut some slack regardless. Or just the opposite, they know when to stick to the stereotype without wavering.

  • Perry, experimentalists usually have no sound math method for estimating systematic error, but they must estimate it nonetheless. And they already think they are especially careful in examining their apparatus and calculations. Your advice seems to be to just ignore the overconfidence bias in previous measurements as irrelevant.

    Ray, assuming others are like yourself is a bias whether you are good or bad, at least if you know about how you deviate from the average.

  • Perry E. Metzger

    Robin, you say: “Your advice seems to be to just ignore the overconfidence bias in previous measurements as irrelevant.”

    No. My advice is to give the reader all the same information you have, and let them be aware of how you came to your conclusions. I think that you are assuming there is overconfidence bias here, but I contend there is no real way to determine that by objective means. When you are worried that you might be measuring inaccurately but you do not objectively know whether that is true or not, you should tell the reader of your paper of your concern instead of inventing a larger confidence interval from thin air without any solid basis for doing so.

    You note, accurately, that “experimentalists usually have no sound math method for estimating systematic error, but they must estimate it nonetheless.” That is not entirely true. What an experimentalist generally does is reason out an estimate for each of the components participating in the measurement of error. Now, if the error appears to be larger, should the experimentalist then pretend otherwise, artificially inflating one of those components or spreading the error among multiple components, in order to sweep under the rug the fact that his educated guesses about the various factors were wrong? I think that’s a form of lying. Instead, the honest thing to do is to say “our team came up with the following estimates for error based on the following sorts of reasoning. In the end, our calculated number was further from the consensus figure than one would have expected. This might be because we are underestimating error, or it might be because of a rare fluke in our measurements, or we might in fact have the right number and everyone else might be wrong. Here is everything we did, and here are all our numbers and calculations, and posterity gets to judge what happened.”

    What you are suggesting instead, Robin, is that the experimental team do precisely what we would not want, which is look at their numbers after the fact and adjust them to get the “correct” answer. That’s not the way I want scientists to behave. I’d prefer if they said “our number is X plus or minus Y, which seems wrong, but that’s what we got. here is how we got it, maybe we did something wrong and maybe we didn’t, let us know if you spot our mistake.” That is the intellectually honest way to go about things.

  • Perry, I am not suggesting that experimentalists adjust their error estimates based on how far away their estimate was from other estimates. My suggestion is for all experimentalists to respond to the data showing their error estimates were too small, and increase their estimates of systematic error, perhaps adjusting for any strong reasons to think they are different from average on this issue.

  • Hello Robin,

    > ” To decide if you are less biased than average, you must consider the sorts of reasons that will occur to others, and ask if your reasons are better than those. ”

    This is about the most pointless exercise any person could attempt.

    1. To begin with, a person cannot ever truly know his/her own biases. Our biases are our reality, therefore they are in large measure imperceptible.

    2. Secondarily, other people have biases — of course they do, otherwise they are not human — but how is it ever possible to know whether another person has more or less bias than you do? In order to perform the comparison legitimately you must first know yourself perfectly. But all the time invested in gaining a perfect knowledge of your own self is time not available for gaining an accurate knowledge of someone else’s biases.

    3. Thirdly, gaining the advantage (regarding relative biases) over another individual doesn’t amount to much. Even if you are less biased than someone else, that alone doesn’t prevent you from being more of a fool than that person. Geniuses have a long history of making mistakes (including Aristotle, Newton and Einstein). If humankind’s great intellects can make errors it follows naturally that the average human is wrong about a great many things.

    4. Fourthly, the question of Truth (or Accuracy) is separate from the question of Bias. Biased people can know the truth just as objective minds can make errors. So determining that you have less bias than someone else does not serve to guarantee that your opinions are better than that other person’s.

    5. Finally, the whole bias exercise appears to be a technique for people to compliment themselves. As such, it serves more of an emotional self-esteem purpose than a truth-determining function.


    As to the question of creationism vs. evolutionism, there are plenty of fools and lots of error in all the parties to the conflict. I think that this conflict serves political goals rather than any real truth interest in either side. As for myself, I don’t perceive any dispute between creationism and evolution whatsoever. I accept that God exists, I accept that the Universe is ancient, and I accept that life evolves. So there is no longer any conflict between creationism and evolution. Other people may have other opinions but their opinions don’t appear particularly relevant to my views. I’d prefer to disagree with everyone than choose a side.

  • This is difficult to answer briefly. My comment yesterday seemed to be misunderstood, in that it seemed that I was qualifying a bias as not a bias if the person making it was of a “good” nature.

    That was not my intention at all; I merely meant to point out the usefulness of stereotyping, and how it doesn’t always qualify as a pejorative if used appropriately.

    “To make assumptions about an individual based on a stereotype is wrong, even if the stereotypical view is broadly accurate.”

    This is just foolish, and is only defensible in the largely innocuous world of theory. The closer a decision hits to home, the more that we would find such a statement violated, even by the person who made it.

    “I say honesty demands we stereotype people, instead of giving them ‘the benefit of the doubt.’”

    Well, if we have to lean to one extreme or the other, then of course, it is only prudent to engage in stereotyping, but there is a balance to a normal, productive life.

    It has been my experience that the more intelligent and honest a person is, the more they tend to give the benefit of the doubt. It is a qualified benefit of doubt, but nonetheless it is often given.

    I believe it is the confidence they derive from both their intelligence, and their self-image that enables them to give others a break.

    Self-image, or self-confidence is probably the most important factor among all of the personal factors involved. While I taught high school, the one thing I drilled into my students more than anything else was the need for self-confidence. That they had to do their reasonable best in everything in order to reap as much confidence as possible. Only self-confidence keeps away the hobgoblins of self-doubt, and herd-mentality.

    Of course, one can be over-confident, but this butts up against personality types, and many other factors that are difficult to address in brief.

  • Ray, the data suggest that overconfidence is more common than underconfidence. So while you might help your students in life by making them more confident, they are probably more biased nonetheless.

  • Robin;
    More confidence necessarily equals more biases? Really?

    Less confidence equals fewer biases?

    That doesn’t work.

    Lazy thinking leads people to accept the “consensus” view without any serious consideration of the matter. Thus personal biases are built up over time that are largely incorrect, or at least based on faulty premises.

    Vigorous thinking would lead people to consider other viewpoints, and even question themselves. This will eventually lead to a more accurate set of assumptions, biases even, as I don’t think it possible for the human computer to be void of bias.

    Lazy thinking stems from a lack of confidence, as in “Well, I read in a magazine that X causes Y- – – it must be true.” They don’t have the confidence to question what they see in print or hear in the media.

    Vigorous thinking requires a certain amount of self-confidence in order to ask questions.

    Taking your reply at face value, the more bias free I become, the less confident I would have to be. This, however, would have the negative side effect of leaving me unable to question anyone, and would thus produce an absurdity: I would eventually have to accept everyone else’s biases as fact.

    I think there’s a hint of a bias in your thinking. One that tacitly assumes confidence equals, to some degree, over-confidence. If one places confidence on a straight sliding scale, the supremely confident would of course be all the closer and more prone to over-confidence.

    But, what if that straight scale is wrong? Accepting that people have a tendency towards self-deception, their initial reactions may be that of a self-confident person that wasn’t willing to consider other factors. But such over-confidence stems from a lack of vigorous thinking at a more basic level, which I believe has more to do with a lack of confidence. This makes the average person a rickety building of poorly planned walls on top of uncertain foundations.

    And yes, the supremely confident person can be easily over-confident as well, but this doesn’t make confidence in and of itself a negative.

  • Ray, I recall there being a study that showed that techniques used to increase students’ confidence, while they did increase self-reported confidence, actually hurt performance a little bit. I don’t think that the causal relationship between high confidence and success has been established.

    In any case, Robin’s point is that the overconfidence bias is only exacerbated by higher self-confidence. (This has been established by studies.) There are no biases that are decreased by higher self-confidence.

    “Vigorous thinking would lead people to consider other viewpoints, and even question themselves. This will eventually lead to a more accurate set of assumptions”

    It’s not clear that people can consistently increase the accuracy of their set of assumptions by considering other viewpoints.

    “Taking your reply at face value, the more bias free I become, the less confident I would have to be.”

    It looks like you might be conflating self-confidence with credulity. “Less confident” might mean “more skeptical”, or “more humble”. I do think that, on the balance, biases tend to dispose people towards higher belief in propositions rather than lower, but I don’t think it follows at all that less biased = more humble. If anything, it would be the opposite.

  • Ray, if we are clever enough then for any X and Y we can make up a story whereby more X leads to more Y. But since there are many other processes going on, it is important to actually look at data to see what the net effect of X on Y seems to be.

  • Martin Lafferty

    In the statement “To make assumptions about an individual based on a stereotype is wrong”, I meant wrong in the sense of “unjust” rather than “false”.

    Long established and valuable principles such as presumption of innocence and equality under the law are based on the idea that an individual, possessed of free will, should be given the benefit of the doubt.

    This does not in any way imply that the formulation or observation of stereotypes is not useful.

  • Martin, how can it be just to ignore relevant information about a person?

What about the potential for the application of a stereotype to obfuscate relevant information about a person? A stereotype isn’t relevant information about *a person*; it’s information about a statistical correlation between one or more apparent characteristics of a person and other characteristics which might not be immediately apparent.

  • Anne, almost all information is “just” about statistical correlations between characteristics. If you propose that humans are prone to ignore certain info when presented with stereotype info, and that this justifies discouraging the use of stereotype info, then until we have data showing this tendency, I’d have to classify this as a seen bias justified by an unseen one.

  • Well, here’s an example:

    The author of this article recommends, “…that we test our interpretations for bias by peeling off the labels, as I’ve done here. If our interpretations make little sense, then our science is biased.”

    If “labels” are similar in this context to “stereotypes”, then this article could be relevant to this discussion. When it comes to experimental science, stereotyping of groups of people (according to psychological diagnosis and gender in this case) can influence the interpretation of results in ways that can be detrimental to a clearer scientific understanding of a given phenomenon.

    You might suggest that peeling off the labels, as the aforementioned author suggests, is actually a form of ignoring relevant information — however, I would disagree that this is necessarily the case, since one cannot entirely discount the chance that leaving the labels applied could lead to a misinterpretation of the results (because at least some of the information associated with a given label could be obsolete). Perhaps ideally, scientists should be able to evaluate the data in full knowledge of what the applicable stereotyping labels suggest, but at the same time understand that the labels could be misleading them and interpret accordingly. But perhaps that’s what the initial post was suggesting in the first place.

  • Martin Lafferty


    “how can it be just to ignore relevant information about a person?”

    Well, I don’t suppose it is; but I don’t think my position is particularly controversial. A stereotype does not concern a person, it concerns a group. (I am assuming that we agree that by stereotype we mean a general observation applied to a definable group that is true given some statistical qualification.)

    Such an observation might be useful in some context (for example consideration of some general policy) but it might be entirely inappropriate in another context. It is almost impossible to discuss this in the abstract: as so often, it all depends on the numbers and the specific circumstances. There is a very strong correlation between being male and having testicles. I am a male, and it is therefore reasonable to assume that I have testicles, but there might still be circumstances (perhaps best left to the imagination) in which it was appropriate to make a specific enquiry as to the facts in my case. My gender also makes me more likely to be a violent criminal. If I was on trial for battering my fishmonger, this would be a relevant, but unjust, consideration at my trial because the other facts (notably whether I committed the offence or not) would be overwhelmingly more relevant.

    A general observation may be true, but once you have narrowed your field of observation to a specific individual there becomes available much better information upon which to base a conclusion – at which point the stereotype is unlikely to assist in the objective assessment of that information, and in practice, seems to have the opposite effect.
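Martin's point, that a group-level statistic supplies only a prior which specific information about the individual can largely swamp, can be sketched with Bayes' rule in odds form (the numbers below are purely illustrative):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Illustrative numbers: the stereotype supplies a 10% base rate for some
# trait; evidence specific to the individual has a likelihood ratio of 50.
base_rate = 0.10
lr_individual = 50.0

print(f"stereotype alone:          {posterior(base_rate, 1.0):.0%}")
print(f"plus individual evidence:  {posterior(base_rate, lr_individual):.0%}")
```

With strong individual-specific evidence the posterior moves far from the group base rate, though the base rate still shifts the answer; how much weight the stereotype deserves in any given case is exactly what the numbers and circumstances decide.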

  • Martin said: “A stereotype does not concern a person, it concerns a group.”

    Exactly. And I don’t think it’s somehow more honest to ignore relevant data about *an individual* in favor of applying a stereotype.

    Stereotyping might be an acceptable tool in some cases (for instance, the case in which a large man is running toward you wielding a knife — you really don’t want to be giving him the benefit of the doubt at that point!) but it is a mistake to think that stereotypes somehow have privileged access to truth. Expediency and utilitarian value in a survival situation is not the same thing as greater coherence with reality.

    In some cases, groups of people are stereotyped inaccurately because they’re in a position of lesser power than another group and because the group-in-power has a vested interest (though not necessarily one even known to all group members) in stereotyping an outgroup.

    Slavery would be an example of this; it was important, for instance, to maintain the lie that people belonging to the “slave race” were intellectually inferior or not really people at all. If we were having this discussion pre Civil War, would someone making a sincere effort to overcome bias really suggest that we should, by default, believe that members of an enslaved race really were inferior?

    Stereotyping as a habit most certainly compels people to ignore relevant information about individuals *and* groups when people insist on holding onto obsolete and/or tainted information — which is why we have things like the Civil Rights movement, feminism, and the men’s movement.

  • Anne, yes of course it is fine to ignore an *inaccurate* stereotype.