17 Comments

Mother of God. Just take up meditation and look deeply into the mind instead of trying to figure out how society can be fixed. You'll learn more.


Let's distinguish the unclear sense in which this form of objection might apply to the project of overcoming epistemic bias from the sense in which it is widely held to count against using consequentialism as a decision procedure. This view assumes we cannot be relied upon to overcome certain ingrained biases even if we wholeheartedly endorse consequentialism. The claim isn't merely that we should follow certain non-consequentialist moral rules of thumb unless we're confident enough that we've overcome these biases, in which case we SHOULD use consequentialism as our decision procedure; the assumption is that we cannot count on ever becoming confident enough.

As I said, this piece of philosophy involves some very large empirical claims that are usually made in an extremely sketchy way and without real evidence (not that there isn't much supporting evidence out there). One would have expected that, instead of feeling comfortable in the consensus that Hooker describes, consequentialists would take more seriously the question of whether these biases might be overcome.


OK, sure, we each prefer that other people follow a rule that says to treat people similarly, instead of giving people who are biased toward themselves the flexibility to treat themselves differently. This isn't the same as thinking that we shouldn't try to correct any bias we see in ourselves.


Here’s a typical statement with a long list of further references:

“When confronted with that criterion of moral wrongness, many people naturally assume that the way to decide what to do is to apply the criterion, i.e.,

Act-consequentialist moral decision procedure: On each occasion, an agent should decide what to do by calculating which act would produce the most good.

However, no serious philosopher nowadays defends this decision procedure (Mill 1861, ch 2; Sidgwick 1907, pp. 405-6, 413, 489-90; Moore 1903, 162-4; Smart 1956, p. 346; 1973, pp. 43, 71; Bales 1971, pp. 257-65; Hare 1981; Parfit 1984, pp. 24-9, 31-43; Railton 1984, pp. 140-6, 152-3; Brink 1989, pp. 216-7, 256-62, 274-6; Pettit and Brennan 1986; Pettit 1991; 1994; 1997, 99-102, 156-61). There are a number of compelling consequentialist reasons why the act-consequentialist decision procedure would be counter-productive.

First, very often the agent does not have detailed information about what the consequences would be of various acts.

Second, obtaining such information would often involve greater costs than are at stake in the decision to be made.

Third, even if the agent had the information needed to make calculations, the agent might make mistakes in the calculations. (This is especially likely when the agent's natural biases intrude, or when the calculations are complex, or when they have to be made in a hurry.)

Fourth, there are what we might call expectation effects. Imagine a society in which people know that others are naturally biased towards themselves and towards their loved ones but are trying to make their every moral decision by calculating overall good. In such a society, each person might well fear that others will go around breaking promises, stealing, lying, and even assaulting whenever they convinced themselves that such acts would produce the greatest overall good. In such a society, people would not feel they could trust one another.”

(Brad Hooker, Rule-Consequentialism, Stanford Encyclopedia of Philosophy) http://plato.stanford.edu/e...


Guy, could you give a cite of someone who argues that act consequentialism is self-defeating due to pervasive human bias? It would be interesting to see which biases in particular concern them.


When people object to consequentialism on consequentialist grounds, they usually mean something like: "Consequentialism with short-term horizons will produce poor long-term effects" or "When fallible human beings try to implement a consequentialist system as cognitive reasoning, this produces poor long-term effects." Such objections accept consequentialism as a criterion over cognitive ethical systems. (Contrast this with an objection that consequentialist reasoning is too cold-blooded and should therefore be forbidden to human beings as non-virtuous, even if it produces pleasant real-world results.) The notion that consequentialism is actually self-contradictory or self-defeating seems to me to rest on poor phrasing.

Similarly, regarding: "There are many cases where we would better reduce bias overall if we adopt policies that don't aim to directly minimise bias in each particular case."

I would say something like: "There are policies for reducing bias that are consequentially superior to the policy of (a) enumerating each individual cognitive bias and asking (b) fallible human beings with (c) strong emotional stakes in the debate to point out what (d) seems to be a match between some argument and a known bias with (e) a strong social convention that (f) any such apparent match is deontologically forbidden and (g) constitutes a crushing blow in the debate."

On the other hand, I'm not sure there's any case where, as a matter of minimizing your own, individual biases - trying to cast out the log in your own eye - it doesn't make sense to try to minimize, in each case, anything that you judge to be a bias; provided that you exercise some discretion in judging yourself and do some (but not too much) reasoning about what is or isn't a bias. In a heated public debate, I worry that arguments about what is or isn't a bias will cripple any benefit that might be derived. In the silence of our own minds we might do better, given a humanly realistic amount of self-honesty.


Just to add something further on this point. Those who DO hold that pervasive human bias makes direct act-consequentialism self-defeating should be particularly interested in the project of overcoming bias. Since this claim about consequentialism is a contingent, empirical claim, the success of such a project could make this objection obsolete. (There are of course other objections to direct act-consequentialism that are untouched by this -- e.g. the claim that having such a direct aim is incompatible with valuable human practices and relations.)


Guy, to pursue your analogy, I'm not very convinced that there are many good cases where the consequences are better if we do not pursue consequentialism in each case. So by analogy I'm not sure there are many good cases where we reduce bias overall by not reducing bias in each case.


If I may artificially revive the analogy to consequentialism, the general thought here seems to be that, given certain pervasive biases, there are many cases where we would better reduce bias overall if we adopt policies that don't aim to directly minimise bias in each particular case. But the second-order project of reducing bias in the project of reducing bias is obviously in danger of leading to a regress.

Systems of ideas that allow people to easily generate charges of bias against anyone who disagrees with them have always been popular -- both Marxism and psychoanalysis owe some of their enduring appeal to such a feature.


Nick and Eliezer, your suggestions that certain kinds of conversation, given human nature, may need to admit only certain kinds of consideration in order to make progress are not crazy, but neither are they obvious. This topic would make a great one for a separate post.


I hate to link to the same essay of mine twice on the same blog in the same week, but my post here[1] bears directly on Nick's comment above: a list of guidelines meant to apply to informal but serious debate (and probably, in large part, to many formal situations as well), intended to restrain some of the common mistakes people make in arguments out of psychological shortcomings. One of my prime targets was motivated skepticism, as in the paper Eliezer links to above.

http://pdf23ds.net/implicat...


PS: The point being that, in the scientific process especially, we may want to adopt the rule that one cannot accuse someone of a bias, per se. You can believe for yourself that the incongruous arguer has a bias, and be justified in doing so; but by social convention, you must use this personal belief to hunt down the location of possible flaws in that person's arguments, and then point these out as flaws, rather than directly challenging that person's psychology. In some cases, you may need to state the bias explicitly - for example, state the conjunction fallacy to help explain why you're concerned about a belief with a lot of extraneous detail, or state the overconfidence effect to explain why you're concerned about an expert's 98% confidence interval that was produced without numerical calculation. But even so, the convention may be that only ideas and assertions are to be attacked, not motives or states of mind.

With respect to getting the beam out of your own eye, of course, anything is fair game - you can accuse *yourself* of the conjunction fallacy to your own heart's content. And I would say that this is the primary use and importance of knowing about biases - to debug ourselves, not to win arguments.


"So the ideal system for public deliberation might differ from the ideal system for deliberation among honest truth-seekers... I think some of the divergences between actually used scientific methodology and the Bayesian ideal can be explained and partly justified on the basis of this observation."

I agree. The analogy I usually use is that the chief of police in a city may know perfectly well who is the local criminal boss, and yet be unable to prove it in court. From a purely Bayesian standpoint, the police chief has sufficient evidence and due justification for the belief. But, socially, society has made a decision to admit only certain kinds of evidence before the court. If we started arresting people based solely on the word of the police chief, then, while the initial sweep might indeed net a few crime bosses, it wouldn't take very long at all before the word of the police chief ceased to be good Bayesian evidence.
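
(In explicitly Bayesian terms, a minimal sketch of that point, with purely hypothetical numbers: the chief's accusation is strong evidence only so long as its likelihood ratio stays high, and the social rule about what triggers arrests is what keeps it high.)

```python
# Hypothetical numbers only. What matters is the likelihood ratio of the
# chief's accusation:
#   P(chief accuses X | X is a crime boss) / P(chief accuses X | X is not).

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior_odds = 1 / 1000  # assumed prior odds that a given suspect is a crime boss

# While the chief's word carries no legal weight, he accuses only when nearly
# certain, so accusations of the innocent are rare and the ratio is high.
careful_lr = 0.95 / 0.001
print(posterior_odds(prior_odds, careful_lr))   # 0.95 : 1 odds, roughly a 50% probability

# Once arrests follow from the chief's word alone, accusations become cheap
# and start landing on the merely inconvenient; the ratio collapses.
careless_lr = 0.95 / 0.5
print(posterior_odds(prior_odds, careless_lr))  # 0.0019 : 1 odds, scarcely above the prior
```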

Similarly, science, as a social process, should not be identified with rationality. Science chooses to admit only certain kinds of evidence. It is a fact that, as I write this, I am wearing white socks. I am justified in asserting this fact, and if you know me for an honest person, you are justified in believing it. But it is not a *scientific* fact because it is impossible for you to verify it for yourself - by the time you read this, the time at which I write this will be past. That I was wearing white socks, at that moment, is a historical fact, and a rationally knowable fact, but it is not a scientific fact. The fact that white socks fall at the same rate as black socks is a scientific fact because you can, in principle, purchase some socks of various colors and experiment for yourself.

Science, I would say, is the publicly accessible knowledge of humankind. It may be that, at some later point, scientific society will decide not to regard as "science" any papers which are not freely available online, or any result for which the raw experimental data is not available online. On the theory that, if you charge for access, it's not really the knowledge of humankind; and therefore while it may be a rational fact, it is not yet a scientific fact.


The following might be the same basic danger that Robin and Eliezer already point to above, but from a slightly different angle.

Consider the convention against ad hominem arguments in scientific and other serious intellectual discourse. From a purist point of view, this convention might seem unjustified, since information about who is making a claim, the personality of that person, her track record, her motives etc. could certainly be relevant to evaluating how likely the claim is to be true. However, experience shows that admitting ad hominem arguments generally tends to destroy the objective, truth-seeking character of a conversation, for reasons having to do with human psychology.

One danger is that admitting bias-talk into our conversations could have an effect similar to admitting ad hominem arguments. Yes, for sufficiently rational beings who are genuinely interested in getting at the truth, thinking and talking about biases is likely to help them better reach the truth. But bias-talk would open up many new possibilities for obfuscation and rationalization. If you can't easily refute your opponent's specific evidence, you can always make up some hard-to-check claim about an alleged bias that supposedly warrants discounting the whole case that has been made against your own opinion.

So the ideal system for public deliberation might differ from the ideal system for deliberation among honest truth-seekers. The ideal system for public deliberation should discount some types of argument, not because they are intrinsically weak or unreliable, but because they are comparatively easy to apply dishonestly. (I think some of the divergences between actually used scientific methodology and the Bayesian ideal can be explained and partly justified on the basis of this observation.)


Robin, the main thing I worry about is disconfirmation bias, aka motivated skepticism (applying more scrutiny to incongruent assertions and arguments), and the "sophisticated arguer" problem (where, if you already have a problem with motivated skepticism, it is exacerbated by having more ammunition with which to counter-argue incongruent assertions and arguments). This is one of the chief ways, I suspect, that smart people end up becoming stupid. See for example "Motivated Skepticism in the Evaluation of Political Beliefs" by Taber and Lodge: http://www.sunysb.edu/polsc...

Once upon a time I tried to tell my mother about the problem of expert calibration, saying: "So when an expert says they're 99% confident, it only happens about 70% of the time." Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: "Of course, you've got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with -" And my mother said: "Are you kidding? This is great! I'm going to use it all the time!"
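
(A minimal sketch of what that calibration claim amounts to, with made-up data: group an expert's claims by stated confidence and compare against the observed hit rate.)

```python
# Made-up data: ten claims asserted at 99% confidence, of which only seven
# turned out to be true (the "99% happens about 70% of the time" pattern).
from collections import defaultdict

predictions = [
    (0.99, True), (0.99, True), (0.99, False), (0.99, True), (0.99, True),
    (0.99, False), (0.99, True), (0.99, True), (0.99, False), (0.99, True),
]

outcomes_by_confidence = defaultdict(list)
for stated_confidence, came_true in predictions:
    outcomes_by_confidence[stated_confidence].append(came_true)

for stated_confidence, outcomes in sorted(outcomes_by_confidence.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {stated_confidence:.0%} -> observed {hit_rate:.0%} over {len(outcomes)} claims")
# A well-calibrated expert's 99% claims would come true about 99% of the time.
```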

Nowadays I never talk about calibration and overconfidence until I have first talked about disconfirmation bias, sophisticated argument, and dysrationalia in the mentally agile. In the book chapter I did for Nick Bostrom - http://singinst.org/Biases.pdf - the section on overconfidence comes long after the section on confirmation and disconfirmation bias, and contains an explicit warning against motivated skepticism. In fact, there are at least three major, emphasized warnings in the chapter - because I didn't want to leave people *worse* off for having been told of biases. "First, do no harm" and all that.


It may well be that sometimes trying to overcome bias actually makes it worse. But it is very hard to imagine that there is some fundamental law, like conservation of energy, which guarantees this outcome. So let us take this issue seriously and try to identify particular perverse processes which might tend to produce this outcome, in order to try to mitigate them.

One possibility is that we assume that we are less subject to bias than others because we spend more time than others talking and reading about overcoming bias. This might encourage overconfidence when disagreeing with others. Let us collect problems like this, so that we can watch for them and figure out how to avoid them.
