Norms of Reason and the Prospects for Technologies and Policies of Debiasing

I have so far found most discussions and debates about the correction of cognitive "biases" very confusing, including most of the posts on this blog. Why? Because I find the very idea of a cognitive bias confusing any time I really start to think about it. A bias is a bias only relative to some standard. The cognitive shortcuts and blind spots identified in the heuristics and biases literature may look like "failure" when laid against some idealized conception of rationality, but why should we care about such conceptions of rationality anyway? A hip hop dancer is making constant "mistakes" from the perspective of the formal norms of ballet, but why on Earth would you judge hip hop from the perspective of ballet? You wouldn’t. I’m making a "mistake," in some sense, by failing to have abs like a Spartan in 300. But so what? And in the absence of normatively binding reasons to conduct ourselves cognitively according to the principles of idealized Rationality, cognitive "biases" may not be biases at all. Indeed, they may well be optimal relative to some other standards we have reasons to care about.

I have become convinced, from reading contemporary cognitive science, neuroscience, and Hayek, that Reason is no part of our biological endowment, and that Rationality is an "unnatural," culturally-transmitted set of cognitive norms. As I’ve analogized before, Rationality is to the mind as ballet is to the body. Failure to adhere to the standards of the canons of rationality — decision theory, game theory, formal logic, Bayes’ Rule, etc. — is a failure to cognize balletically, but is not a mistake unless one was trying to cognize balletically, or was trying to accomplish something that requires that kind of highly polished cognition as an instrument. Reason, it turns out, is damn good for a huge number of things, has made our lives unimaginably better, and deserves nothing but hymns of praise.

Nevertheless, I think it is important to acknowledge that the project of this blog — the project of debiasing-as-making-Rational — is about adjusting our cognitive behavior to live up to a particular set of cultural ideals, not about living up to our "nature" as putatively rational beings. People guilty of cognitive biases are failures in the sense that people who don’t have the discipline to hold down a job are failures. Shame! But if we look at this kind of "failure" from an outside perspective, as detached but interested consumers and critics of our own cultural norms, then we’ve got to ask: so what? The cultural project of debiasing is about cultivating norms that prevent people from shifting to or feeling comfortable in the "so what?" perspective. What can we say in favor of this cultural project?

Libertarian debiasers favor technological and market debiasing techniques. Paternalist debiasers favor elite-managed policy debiasing techniques. Both, I think, need to face up more fully to the particular cultural construction of Rationality standing behind the desire for these techniques, and the lack of intrinsic normative oomph therein. It simply isn’t obvious that this cultural ideal about the refinement and deployment of our native capacities is one reasonable people cannot reasonably (or even Rationally) reject. So I’m "biased." So we’re all "biased." So what?

If there is some cost to debiasing, then maybe I don’t want to buy very much of it. Why should I? What’s in it for me? It seems that the answer might be: not much, unless enough other people coordinate on the cognitive norm. If the individual advantages of being less "biased" are contingent on many other people debiasing first (or at the same time), what kind of problem is that? I think it’s a rather diffuse and maddening *cultural* problem (if I’m convinced it counts as a problem at all). There is little demand in our democratic society for political representatives inclined to appoint competent debiasing bureaucrats, and so unless there is some kind of bloodless coup of behavioral economists, we’re not going to get any. Paternalist hopes are misplaced in the absence of a cultural shift, at least among elites. Libertarian debiasers hoping for new institutional technologies, like betting markets for ideas, face a similar kind of political problem. Legislators and/or bureaucrats have to act to make these kinds of markets legal. Why would they want to do that? Where does the demand come from? We probably need a bit of cultural ferment before we get there.

So, how do we catalyze this cultural ferment? Blogging? Actually, I think that’s partly the way. New technologies that do not require political approval, but which dramatically decrease the cost of communication among elites, can create openings for the transmission of new norms. So I think we’re actually doing something useful here. And if new technologies create increasing economic returns for certain kinds of debiased individuals (can anyone name such technologies?), we should expect to see more of them, and we should expect those individuals to demand cultural norms that help them rationalize, justify, and psychologically sustain their economic interest in debiasing. If there is reason to be bullish on the adoption of the cultural norms of Reason, then we have reason to be bullish on the eventual adoption of debiasing institutions, paternalist or libertarian.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Wilkinson, you assert that rationality in the Bayesian sense is a culturally contingent construction. It would seem that you must therefore assert one or more of the following:

    1) Truth is culturally contingent: there is no single reality; or beliefs and reality are incomparable.

    2) Desire for truth is culturally contingent: rationality is a culturally contingent procedure because it rests on a culturally contingent desire.

    3) Bayesian mathematics fails to attain the goal set for it by its own advocates: it is not a mathematically optimal albeit incomputable method for arriving as close to the truth as possible.

    4) Computations can only make “mistakes” compared to other realistically performable computations: There is no sense in which, for example, assigning a probability P(A) < P(A&B) is a "mistake" just because it violates the Kolmogorov/Cox axioms for probability, unless we can exhibit an actual computation that says differently and does better. Furthermore, humans cannot be mistaken compared to Bayes-method computers, only compared to other procedures that a human could realistically perform.
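    (For reference, the conjunction rule invoked in point 4 does follow in one line from the Kolmogorov axioms, with no appeal to any rival computation; a standard sketch:)

    ```latex
    % A is the disjoint union of (A \wedge B) and (A \wedge \neg B),
    % so by finite additivity and nonnegativity:
    P(A) = P(A \wedge B) + P(A \wedge \neg B) \ge P(A \wedge B).
    % Hence any assignment with P(A) < P(A \wedge B) contradicts the axioms.
    ```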

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    What is the point of overcoming bias to get beliefs closer to truth? How natural is that? What benefits could outweigh its costs?

    Some of us may want to, and that would be enough reason for us to gather in a place like this. Those who don’t can go elsewhere.

    What is completely natural and human is to claim to want to believe truth. Most every group likes to feel superior by believing that its beliefs are less biased than other groups’ beliefs. Consider today’s “reality-based politics” or frequent Christian references to TRUTH.

  • http://profile.typekey.com/willwilkinson/ Will Wilkinson

    Eliezer,

    2) Is the closest to my own position, although I don’t think that is a very helpful way of putting it. I don’t in fact think that the desire for truth is culturally contingent, since certain kinds of true beliefs are necessary for everyone at all times. It would be clearer to put it in the negative: the desire to weed out ALL or ALMOST ALL FALSEHOOD is culturally contingent. When you put it that way, I think it’s obviously true.

    I also meant “culturally contingent” in the plain sense that Bayes’ Rule was discovered in a particular culture at a particular time in history. Additionally, I meant that I do not believe that Bayes’ Rule, or the other principles of good thinking we learn in school, is implicit in the very structure of human cognition in the way that, say, Kant thought certain principles of theoretical and practical reason were. That is to say, according to Kant, if you are in the business of thinking or acting at all, you are already committed to certain normative principles that you MUST follow lest you somehow imperil your humanity. Bayes’ Rule isn’t like that. It is more human than not to violate it. To keep up the Kantian language, its normative bindingness comes from its part in certain hypothetical imperatives: if you want to identify the probability of the truth of a proposition as accurately as possible, use Bayes’ Rule. A large part of what I’m going on about is that accurately identifying the probability of the truth of propositions is an aim that individuals and cultures can prize more or less, and that if you want individuals and cultures to prize it more, you have to provide reasons they already care about as to why they ought to bear the cost of pursuing it. You may also need to dramatize and romanticize the enterprise.
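    (The hypothetical-imperative reading can be made concrete with a toy calculation. The sketch below is purely illustrative; the test's sensitivity, false-positive rate, and base rate are invented numbers, not drawn from the discussion.)

    ```python
    def bayes_posterior(prior, likelihood, likelihood_given_not):
        """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
        numerator = likelihood * prior
        marginal = numerator + likelihood_given_not * (1 - prior)
        return numerator / marginal

    # IF you want the most accurate probability given the evidence,
    # THIS is the update. E.g. a test with 90% sensitivity, a 20%
    # false-positive rate, and a 10% base rate:
    posterior = bayes_posterior(prior=0.10, likelihood=0.90,
                                likelihood_given_not=0.20)
    print(round(posterior, 3))  # 0.333
    ```

    The conditional form is the point: the rule tells you how to update *given* that accuracy is your aim; it is silent on whether to adopt that aim.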

  • http://profile.typekey.com/willwilkinson/ Will Wilkinson

    Robin,

    You say:

    “Some of us may want to, and that would be enough reason for us to gather in a place like this. Those who don’t can go elsewhere.”

    But you’re interested in making it so that more people want to, right?

    “Most every group likes to feel superior by believing that its beliefs are less biased than other groups’ beliefs. Consider today’s “reality-based politics” or frequent Christian references to TRUTH.”

    Absolutely. And part of the debiasing project is to create an increasingly large group that is able to feel superior, and that a great many people acknowledge is RIGHT to feel superior, for its commitment to cognitive norms that really are more reliably truth-tracking than the more biased alternatives. No?

    “What benefits could outweigh its costs?”

    I’m not sure what you intend as the reference of ‘its’ in this sentence. Could you clarify?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Will, “it” referred to overcoming bias. We would like more people to share our goals, but we have no assurance of how successful we can be. Probably our greatest lever is shame and hypocrisy, i.e., the fact that most people pretend to want what we (say we) want. If forced to choose between what they pretend to want and what they usually want, many may choose their pretensions.

  • http://profile.typekey.com/willwilkinson/ Will Wilkinson

    Robin, Ah! Got it.

    Would it be a kind of victory if people who now say they care about truth, but who really don’t, started admitting that they really don’t? Or, if more people stopped complimenting virtue through hypocrisy, would that actually damage the cultural prestige of truth? I’m not sure.

  • http://agoraphilia.blogspot.com Glen

    Will, I see your point, but allow me to play devil’s advocate. To justify Bayes’ Rule, we needn’t say you have to value truth for truth’s sake. We need only recognize that knowing the truth (or rather, having true estimates of probability) is instrumental to the best achievement of your other goals, whatever those goals might be. So in this sense, Bayes’ Rule is not culturally contingent at all, because it does not dictate your goals.

    Now, it might also be costly to apply Bayes’ Rule, so you might reasonably choose to conserve cognitive effort by applying Bayes’ Rule only some of the time. That would be consistent with a rational allocation of scarce cognitive resources. But the non-culturally-contingent value of Bayes’ Rule is demonstrated by the fact that if you could have its output provided to you for free, advancement of your own (possibly culturally contingent) goals would dictate using that output.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Wilkinson, I addressed many of these issues in Why Truth? and What’s a Bias? If you find fault with my reasoning there, feel free to post it in a comment here. Incidentally, I aspire to dramatize and romanticize rationality because I believe Rationality is a dramatic and romantic endeavor, one of the great high melodies sung in the unfolding epic poem of Humankind.

    I agree with Glen that truth is a subgoal of nearly any goal that requires cognition, up to and including walking across a room. When you successfully locate and sit down on a chair, you are committing an act of truthfinding no less than believing that humans evolved by natural selection. The politics and arguments and surrounding verbal bibblebabble are more complicated in the second case, but the math is the same.

    Robin, we’ve differed on this before, but I still object to your calling people who deliberatively endorse truth yet engage in self-deluding behavior “hypocrites”. Traditionally, a “hypocrite” is someone who verbally advocates a morality which they do not privately believe. Many people claim to believe in a morality, and internally believe they believe in the morality, yet commit some acts not in accordance with it; these people are traditionally called “sinners”. People who say they believe in truth (honestly, without knowing intent to deceive you) and then self-deceive are sinners, not hypocrites.

  • http://rafefurst.wordpress.com/ Rafe Furst

    For the first time on this forum, I find myself agreeing almost entirely with a blogger’s original post. Just thought I’d encourage Wilkinson and others who would follow his lead that he has struck a chord.