Bias, Well-Being, and the Placebo Effect

In their classic 1988 paper ‘Illusion and Well-Being’, Taylor and Brown surveyed a wide range of experiments in social psychology that suggest that self-deceptive illusions are not only the norm, but also seem to promote well-being—only depressive people were largely free of such self-deception (let’s set aside the question of causation vs. correlation).


A recent post, and several comments, have considered ways in which bias may sometimes actually promote true belief in the long run. But cognitive biases may be good for us in a more straightforward way, by making us happier. Unless possession of true beliefs is intrinsically valuable, why should we want to have truer beliefs if this will only make us less happy?

These findings raise big questions. In this post I want to consider only a specific example, that of the placebo effect. There is now plenty of evidence that the placebo effect is not only real but also powerful. If you can get yourself to believe that something will relieve your pain, your pain is indeed likely to lessen. For millennia the placebo effect was virtually the only way people could deal with suffering and many forms of illness. But there is something profoundly unfair about the placebo effect: it rewards credulity and superstition. To enjoy it, you must get yourself to believe, falsely and without justification, that various remedies or rituals will make you feel better. Those of us who are naturally more skeptical, more on the alert for error and bias, are not likely to do this very well. Being epistemically conscientious will make us suffer more. This seems unfair.


I’ll admit that I would prefer not to have to accept this conclusion, but the following ways of resisting it are the best I could come up with:


(1) We live in a world where we can deal with pain and suffering well enough without needing the placebo effect. This, I think, is only partly true.


(2) The credulous who benefit from the placebo effect also tend to suffer from the nocebo effect, so this benefit balances out. There is, however, no evidence for this.


(3) The epistemically conscientious are also pretty good at self-deception. They simply have more sophisticated ways of getting themselves to believe that things are going to be just fine. This, again, seems to me only partly true when it comes to the placebo effect.


(4) Credulity is a general epistemic vice. You can’t decide to be credulous only when this has beneficial effects. So the benefits of the placebo effect (and of parallel psychological mechanisms) are greatly outweighed by the many negative consequences of sloppy belief formation.


I would go for (4). But it raises an interesting question. To the extent that there are areas where bias is beneficial, is it possible to correct bias selectively? 

  • michael vassar

    Do we have data that tells us that the epistemically non-credulous have worse medical outcomes?

  • http://amethodnotaposition.blogspot.com Matthew

    You guys might be interested in this relative of the placebo effect.

    http://michaelprescott.typepad.com/michael_prescotts_blog/2006/12/hypnotized_by_s.html

    All I can say is “wow”!

  • Guy Kahane

    Michael, my remarks on the placebo effect (unlike the data listed in the linked article) are admittedly speculative. General *susceptibility* to the placebo effect doesn’t, apparently, correlate interestingly with any personality trait, possibly because there is no one such effect but a variety of psychological mechanisms. Furthermore, whereas the reality of the placebo effect on pain is well established, the evidence for such an effect on other conditions is, I think, weaker.

    If we focus only on pain levels and not on health in general, and ask whether, over time, the credulous tend to suffer less than sceptics, then we find ourselves with an empirical question that isn’t that easy to answer. It would be nice to have some hard data.

    This empirical question is important, but I was more interested in the structure of the problem: what to do when truth-seeking is also harmful. I phrased this problem from the first-person perspective, as a problem facing a self-conscious thinker, but it can also be put in the third-person. Think about the ethics of the scientific study of various alternative treatments and remedies. If one makes it public knowledge that, say, homeopathy is no better than placebo, then isn’t one harming many credulous believers in thus correcting their false beliefs?

  • tweedledee

    If 4 is true, it would seem to undermine some theories of rational irrationality–the idea that people are as irrational as they can afford to be; they believe whatever they want when there is no significant cost, but are otherwise rational when the cost of not being rational is high.

    It seems more plausible, and interesting, to me that 4 is not true. That is, that there are psychological filters inside our heads that know which evidence to bias and which not to bias. If the specific filters are hard-wired, perhaps there are evolutionary explanations for why some evidence gets filtered and some does not. However, perhaps we actually all have a true Bayesian reasoner inside that subconsciously selects which evidence to bias and which not to bias so as to maximize well-being.

  • Anders Sandberg

    The classic example of harmful truth seeking is Ibsen’s play Vildanden http://en.wikipedia.org/wiki/The_Wild_Duck , although this may be more a story about emotional rather than epistemic biases. The problem in the play seems to be that the uncovered untruths about the characters’ lives are not replaced with useful truths. Here it seems plausible that a selective or compassionate removal of bias would have led to a better situation.

    Admittedly, arguing from a fictional and rather extreme example may not hold much power (cf my post in the sf thread).

    Presumably the truth-seeker could use his current knowledge to estimate what biases could most likely be removed with the greatest expected happiness as result. Experience would help develop an increasingly better estimate – as long as the interindividual or intersituational variations are not too large. Maybe this practice is only possible when the “fitness landscape” of beliefs is smooth enough, and fails in rugged regions of life.

  • http://profile.typekey.com/halfinney/ Hal Finney

    I have sometimes found that I can get placebos to work pretty well for me merely by pretending to believe in them. I don’t think the degree of belief has to be particularly deep or honest, to get some benefit.

    Last weekend I went to the doctor because of a persistent cough. I’d put off going because I kept figuring I’d get better, but it had been two weeks and after a particularly bad night I went in. It was just bronchitis, but he prescribed an antibiotic, as they usually do. I’ve read many studies which say that this is medically useless, since the cough is caused by a virus. But the doctor said it would make me better after three days.

    Sure enough, three days later I woke up with the cough greatly diminished and since then I’m all better. I didn’t believe him intellectually but I didn’t go out of my way to deny that it would work. I took the medicine and just let myself assume I would get better on schedule, and I did.

    I’d suggest that a lesson in terms of overcoming biases is that we should not feel obligated to take this task too seriously. We should approach it in the spirit of curiosity and self-awareness, but not with a dogged (or hedgehogly!) determination to root out every source of error and strap our minds into a rigid straitjacket of truth-seeking. I think you can be of two minds: to be aware of your biases, even while accepting them; to see the truth, without being obligated to act on the truth. It’s something of a delicate mental balancing act but I believe this should allow the truth-seeker to avoid most of the debilitating problems that Guy mentions.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I feel horrified and violated in the most intimate way to discover that my mind cannot be trusted, that it lies to me constantly. But I seem to be an outlier, even relative to this group. My colleague Tyler Cowen and I organized a workshop on self-deception a few years ago, and we were both surprised that world experts on the subject didn’t seem much bothered by their own self-deception. Tyler and I had wanted to explore our angst with them. And even Tyler says he’s not as concerned as I am.

    Our minds do seem to adjust their self-deception to circumstances, and while this probably does bleed to some extent across subjects, our self-deception is probably roughly personally optimal, relative to our ancestors’ environment and preferences.

    However, our world has changed, making it likely that such deception pollution is a bigger problem than our evolved tendencies anticipated. And even if not, the world could use a minority of people who deceive themselves less, in order to explore the truth and report back to the rest on where danger lies. The natural candidates for such exploration would be the people most horrified by their biases, who would then be the most willing to suffer the personal costs of overcoming their biases. I feel like I am such a person. Does anyone feel similarly?

  • James W.

    Robin:

    Hmm, no, I think the fact that you are the most horrified person you know actually makes you a bad candidate to do the exploration. Might it not make the whole experience painful for you in a way that drives you to give up, or to be biased against perceiving your biases (against your will)?

    Someone who strongly doesn’t want to be biased but has no emotional response (other than perhaps mild displeasure) to each example of bias she or he uncovers in himself or herself would seem to me to be the ideal candidate for exploring personal bias. Such a person, driven by a strong desire to overcome bias, would be relentless, but wouldn’t be terribly troubled by whatever facts emerge, and therefore could cheerfully pursue the research.

  • http://areasonableman.com/ Gil

    I suggest that the lesson is not that we should be more tolerant of credulity, but rather of optimism.

    Optimism (having unrealistically positive estimates of our chances of success) does seem to have benefits, and not just for medical outcomes.

    Of course, this can be overdone and reach counter-productive levels. So the goal should be to be optimistic, but not too much.

    And, maybe, it isn’t really a bias at all. Maybe it’s an unrealistic estimate of one’s chances, absent the optimism. But, when the optimism is recursively taken into account, the rosy estimate is fairly accurate.

  • Carl Shulman

    Robin,

    I share your extreme repugnance for bias. In my case, there’s a strong likelihood that some specific negative personal experiences with extremely intelligent people holding delusional beliefs played causal roles. I am also quite low in agreeableness, which probably reduces the emotional cost of being an ‘odd man out’ for certain types of belief, and have been impressed by the personal benefits of certain efforts to counter bias, even when there are costs in doing so:

    1. Overconfident people may be more likely to earn high incomes, but overconfidence also leads to excessive active trading of financial assets and lower returns, particularly among men (women, who are less confident about their investing ability, stick more closely to the market portfolio rather than banging their heads against the wall of the EMH).
    2. Rational assessment of personal risks of death from statistical data rather than availability-based impressions, substituting activities to reduce the risk of death, and taking cost-effective precautions. This can draw negative social responses, e.g. when one is blasé in the face of risks of terror attacks and sharks.
    3. Avoiding various comforting beliefs about mortality (such as the idea of an afterlife on the one hand, and the idea that effective life-extension techniques have an ~0% probability of development within my lifespan on the other), leading one to take increased efforts to avoid death (recognizing the full costs thereof, and factoring in an increase in expected lifespan on account of the probability of life-extension of one form or another).
    4. Taking into account hedonic adaptation, and the larger body of happiness research, leads one to reallocate efforts away from goals (such as higher incomes per se) for which we typically overestimate the utility they will bring, and towards other activities such as survival and charity.
    5. Evaluating one’s charitable giving for its effectiveness, rather than its fashionability, and allocating all of one’s donations towards the target with the highest expected return amplifies one’s expected positive impact on the world enormously. This leads to a reduced total hedonic reward vis a vis the giver who feels warm after each of dozens of small donations to various causes-of-the-moment, but I prefer to actually have an impact rather than take more pleasure from falsely believing in such an impact.

    After identifying such significant perils of irrationality, I have something of a visceral fear of suffering negative practical consequences in realizing my goals (or figuring out what my goals should be): I am haunted by lemmings falling off cliffs, moths flying into flames, religious ‘martyrs’ sacrificing their lives, homeopaths refusing medical treatment, and the like.

    Awareness of these psychological facts about myself raises several personal concerns:

    1. My desire for consistency in beliefs might make me too much of a Hedgehog. http://www.overcomingbias.com/2006/11/foxes_vs_hedgho.html I try to compensate for this by explicitly recognizing (to myself) the existence of ‘black boxes’ and the sub-1.0 subjective probabilities of my beliefs.

    2. I may overvalue the practical value of general rationality and true belief, relative to compartmentalized, domain-specific rationality, in accomplishing practical objectives like creating Friendly AI. A physicist can do excellent work despite believing that Jesus rose from the dead, and I may overweight information about such an apparent lack of truth-orientation in one domain as evidence in assessing reliability in other domains. I wonder if I give Eliezer too many brownie points (certainly he deserves some!) for actively attempting to be rational and changing significant beliefs in a publicly visible fashion.

    3. I may be biased towards adopting beliefs when, if they are true, doing so would seem to indicate resistance to cognitive bias and offer personal benefits. For instance, I have expended several hundred dollars in cash and opportunity costs of time taking precautions for a potential H5N1 avian flu outbreak. This can be characterized as a rational reaction to a low-probability, high-impact risk, justifying precautions comparable to (the risk of a pandemic)*(the likelihood of otherwise-severely-harmful exposure for me personally)*(the probability that precautions would avert death or severe damage). I worry that I might overestimate the probability of a severe pandemic because such an event would seem to confirm both the rationality of my decision, and the superiority of my rationality to those who did not take such precautions.

    4. As a heuristic to work against confirmation bias, I hope to change beliefs at least occasionally, and I worry that I may overcompensate, i.e. I worry that self-serving bias may lead me to reject true beliefs in order to confirm a view of myself as relatively rational, and further worry that these rejections may follow along the lines of other biases. For instance, I have recently deviated from my (modally) libertarian views on the subject of immigration into developed countries by uneducated low-IQ populations that are ethnically distinct, on the grounds that democracy and tacit threats of violence let these groups push for destructive anti-market policies and ethnicity-based redistribution (see Amy Chua’s ‘World on Fire,’ among other works).

    In my view, maintaining separate states for populations with very different levels of human capital, but encouraging free trade (including outsourcing/trade in services) and skilled migration, as well as guest worker programs that do not lead to citizenship (if enforcement measures to ensure the guests leave are applied) can substitute for low-skill migration without the negative effects of empowering low-human capital populations to govern high-human capital ones.

    But this belief is in accordance with anti-outgroup bias, since as a high-IQ graduate student of Jewish ancestry I identify with market-dominant minorities more than low-IQ uneducated populations, and is widely rejected by economists I otherwise give high credence to. I can defend my disagreement with them to some extent by the fact that the argument relies on taboo subjects of IQ and group differences, so that reliability is likely to be lower, and Steve Sailer’s arguments that the professional biases of economists favor free international labor markets, while economists who have studied low-skill immigration most closely tend to be less enthusiastic about it:
    http://www.vdare.com/sailer/060702_economists.htm

    Nevertheless, Bryan Caplan, whom I respect enormously, is aware of all the important facts that I am aware of in the U.S. case. His own work shows the impact of low education and IQ on economic beliefs, and the negative effects of bad beliefs in the electorate. He is aware of the low IQ and education of illegal immigrants in the United States, and their increased hostility to markets and support for outright socialism relative to natives. He is aware of the current revitalisation of socialist ideas, linked with anti-Spaniard, pro-Amerind racial slogans, throughout Latin America. Nevertheless, he very strongly favors unrestricted migration and amnesty, granting the vote to tens of millions, perhaps hundreds of millions, of new voters, while I disagree (less intensely). Did I make this change out of a desire to emulate the flexibility of a reasoner unhindered by confirmation bias, channeled by another known bias, and have I sufficiently modified my estimate to take his disagreement into account? These meta-meta-issues are enough to make me wish for an Archimedean point…

    [P.S. I fear but am NOT certain that Caplan is ‘Lewis to my Inwagen,’ (http://www.overcomingbias.com/2006/11/beware_of_disag.html) because of my knowledge of his personal sympathies http://econlog.econlib.org/archives/2006/05/my_short_class.html and because I am not sure exactly how much effort he has put into analyzing the political effects of large-scale granting of citizenship and open migration, although I know he has at least been exposed to all the relevant ideas. So please don’t savage me too ferociously for committing the Inwagen error.]

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    James, consider a person with an unusually strong dislike of bugs, who learned that there are many hidden bugs in his house. He might prefer to seek out and exterminate those bugs, even if that meant a few unpleasant encounters with new nests of bugs, rather than try to pretend the bugs did not exist.

    Carl, you have many thoughtful and legitimate concerns, though to be clear to everyone we don’t want comments to be this long. The supposed bias you refer to, of economists for immigration, would be an interesting one to explore in more detail (though not in this comment section).

  • Guy Kahane

    Just to return to a remark Robin made earlier. He writes ‘I feel horrified and violated in the most intimate way to discover that my mind cannot be trusted, that it lies to me constantly.’ What claim are you making here?

    This might be an expression of a strong preference. Some people (not many) just strongly prefer not to be biased, so they can be expected to invest greater resources than others to overcome it — even at a cost to their overall well-being. Those who don’t have such a preference can’t be expected to.

    It sounds, however, like something stronger, since the vocabulary used is that of ‘deontological repugnance’: self-deception is taken to be intrinsically bad, and a significant evil. If so, it can perhaps outweigh (or even override) any expected loss in subjective well-being.

    Is it self-deception that is so horribly bad, or simply not being in control of the truth/justification of one’s beliefs? Many biases we have are not due to anything that could properly be called self-deception. And I suspect Robin wouldn’t be all that happy about an arrangement where doctors would routinely lie to him to exploit the placebo effect.

  • http://profile.typekey.com/nickbostrom/ Nick Bostrom

    How does Robin feel about compartmentalization? As Hal suggests, done well this might offer the best of both worlds.

    I don’t mean that one would allow oneself to be biased on some topics and not on others. Rather: that one might be able to create two modes, one earnest, ruthlessly truth-seeking mode, and another mode which one can sometimes enter in which one is more playful, naive, and trusting. That way, truth and life can coexist, and maybe both can flourish more if they are not put into direct confrontation. Is there any more sin in this than there is in sleeping and dreaming?

  • Yan Li

    Constant truth-seeking may be counterproductive. In a world of budget and time constraints, a person could either burden herself by rooting out illusions at every moment, or simply move on happily and correct some self-deceptions when she can afford to. After all, reality is subject to different interpretations. One might say, “I have got half a glass of wine, and I will drink it and have a ball.” Another may say, “I won’t even get a buzz from 2 glasses; how could I possibly have a good time with this little?” The first may be vastly overestimating what half a glass of wine can do for her. The other becomes the victim of her own predicament. Like any other path in the world, truth-seeking comes with its own costs and benefits.

  • James W.

    Robin:
    I see your point, but I think the word you used — “horrified” — implies a feeling different from dislike. It also implies fear.

    Rather than your bug example, I would perhaps analogize your task to winning a chess game against a formidable opponent — in this case your own biases. The opponent is seeking to create weaknesses in your position. To the extent you fear or experience strong emotions of revulsion towards those weaknesses, I would imagine that such feelings might serve both as a motivation to do well and as a distraction from the kind of calm, concentrated effort necessary to win. A strong desire to win does not seem to me to depend on, or even to be helped by, a horror at the thought of losing.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I confess to a feeling of moral indignation that people are even discussing compromising with bias, the kind of indignation that the economically naive feel when a medical economist tries to estimate the dollar value of a human life. Apparently to me the truth is a “sacred” value – and I’m not sure there’s anything wrong with that. (It would become a problem if I refused to put a dollar value on fighting a bias, in a case where I have limited resources.)

    I don’t see why “But it makes people happy!” should be a knockdown argument in favor of anything. Cocaine makes people happy too, and the analogy between cocaine and happiness-producing lies seems like a strong one to me. (Including that lies are bad, but it’s still unwise to illegalize them.) There’s more to life than being happy. Being happy is a legitimate part of a well-lived life; but there can be such a thing as happiness that comes at too high a cost.

    There are *numerous* other objections that can be leveled against the notion of ever deliberately selecting, for yourself, to be stupid. Such as: Are you going to make that life-changing choice based on a maximally veridical view of the consequences of being biased, or based on a biased and distorted view of the consequences of being biased?

    Yes, there are happy stupid people out there. I once said, to a friend who’d lived in both worlds, “I’ve often suspected that the happiness of stupidity is greatly overrated.” And she shook her head, and replied, seriously: “No, it’s not.” But a happy stupid life can be destroyed by a *single* mistake. One decision not to go to the doctor. One moment of weakness in the face of an addictive drug. A ten-second decision not to sign up for cryonics may end up outweighing the rest of your short life. Maybe the happiness of stupidity is real – but it’s fragile.

    “Bias X can sometimes produce some happiness” is not a knockdown argument, even if happiness is your sole criterion of value. That’s a greedy local search, and greedy local searches get stuck in local optima. You have to consider whether there are third alternatives that would produce even more happiness. It’s like the dilemma between being poor and healthy, or rich and sick; “rich and healthy” trumps both if it’s available as an option. Maybe the happiness of stupidity is real, but I seriously doubt that it’s the *global* optimum. (As for whether it’s *really* the global optimum, do you want to think about that rationally, or dysrationally, in making the decision?)

    The Malaysian monkey-trap is a sturdy jar with a narrow throat, containing a nut as bait; the monkey inserts his hand in the jar, then finds he cannot withdraw it so long as the hand clutches the nut. The monkey may howl in distress and try frantically to pull free, but he doesn’t let go of the nut. (Would it be better to have your hand stuck in the jar and *not* have a tasty nut?) Not *everything* that produces a bit of extra happiness, when incrementally added to your current state, is the correct path toward the global optimum.

    I do not know the maximum power derivable from rationality. Perhaps the Way has no end… but at any rate I have not hit my local optimum; I can see the gradient wends upward yet. The happiness of stupidity is the end of a path, not its beginning.

    It is written: “The more errors you correct in yourself, the more you notice. As your mind becomes more silent, you hear more noise. When you notice an error in yourself, this signals your readiness to seek advancement to the next level. If you tolerate the error rather than correcting it, you will not advance to the next level and you will not gain the skill to notice new errors.” Wherever you compromise, that’s where you stay. That’s where you stop moving forward. Does the happiness of stupidity satisfice you? Would it be enough for you, even if you were promised that it would last forever? Would you never wish for anything more or better?

    Not that it really matters. By the time you ask the question consciously, you know too much to ever go back to being happily stupid. You won’t be able to believe unquestioningly, even if you try. If you stop there at the *beginning* of the road to rationality, I doubt you’ll ever be as happy as someone who never considered the question. The start of a gradient is a poor place to stop climbing.

    In my life the question doesn’t even arise. Among the three positive reasons I listed for seeking truth (in “Why truth? And…”) was the instrumental value of truth. In analysis of global catastrophic risks, the information value of correct reasoning is derived from the value of Earth and everything in it. And Kahane’s (4) is quite correct, I think; I don’t see how someone could pick and choose. Once you learn to see a bias, you see it, as a matter of perception, not deliberation.

    This whole business of “Let’s be clever and decide when we ought to be biased” seems to me more reminiscent of Stockholm Syndrome than anything else, like asking if smallpox might have hidden benefits for society and ought to be encouraged in selected cases, or like trying to ally with a supremely treacherous enemy so as to get as much use out of them as possible. I realize that I am using the language of moral indignation, but, as a matter of fact, truth *is* a sacred value to me and I *am* morally indignant about this.

  • Guy Kahane

    I certainly didn’t mean ‘bias can sometimes make us happier’ (or even ‘bias WILL make us happier’) as a knockdown argument of any kind. But it certainly is AN argument. How much of an argument depends on the value we place on true belief, and the value we place on happiness. By ‘happiness’ here we mean something like what psychologists call subjective well-being. But there may well be much more to well-being than this. What I think is true is that if avoiding bias and self-deception ISN’T intrinsically or instrumentally valuable from the perspective of a particular person — if it doesn’t make his life better in some significant way (and, again, by ‘better’ I don’t mean ‘subjectively happy’) — then the case for avoiding biases gets weaker. There has to be something in it FOR US. It’s not plausible, I think, that we need to sacrifice our own good for the sake of true belief in the way we’re expected to do for the sake of morality. Now I do think there’s something in it for us, and I was hoping the discussion would help bring this out.

    The parallel with morality is actually interesting here. The relation between morality and happiness is an old problem, and many religions invent heaven and hell to deal with this problem. I was interested in exploring the empirical possibility — certainly no more than a possibility — that there’s a similar problem about devout truth-seeking. There’s certainly no heaven waiting for such truth-seekers to compensate them for their earthly troubles, and I suspect many wouldn’t go for a bias-free life if it would also be a severely depressed one, even if Eliezer would.

    Finally, I am more sceptical than others here about CHOOSING to be biased or self-deceptive. But if we could, then this would be a different kind of problem. It’s not as if the morally good could choose to become a bit more evil if this would get them a bit more reward. They can’t (not if they’re really morally good) and that’s exactly the predicament. The same, I’d say, with being stupid or self-deceptive.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Guy, I’m not sure how to distinguish strong personal preferences from beliefs about moral truths; I find myself thinking about it both ways. I’m not sure if anyone knows how to distinguish them in practice, though many claim to do so. But I am pretty sure that we are mostly talking about choosing to be more biased or self-deceptive. Not in a direct way, “I think I’ll choose to believe I’m better than I am,” but in the “second order” way you mentioned in a previous post, choosing habits of life that encourage or discourage them.

    Nick, I agree that I’ll have to accept some degree of compartmentalization just to get along in the world, and (as Yan indicates) I’ll have to accept some common biases and self-deceptions too, as it takes more resources than I have to overcome them all. To continue with my bug analogy, I can’t check every place in my house every few minutes to see if bugs are back. As a practical matter, I’ll be more vigilant in the places I care the most about, such as my bed and kitchen, and accept there being bugs more often in other places.

    Eliezer, my emotions are similar to yours, but I’m less comfortable with claiming that the goals you and I choose are good for everyone. You and I may just have odd preferences, or be in odd situations. Surely you see the evolutionary argument that most of us are probably programmed to have roughly the right amount of bias, at least for our ancestors’ environment.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I gave three reasons for pursuing truth – curiosity, instrumental value, and morality – and either (2) or (3) alone would be sufficient for me. (I don’t know whether I could make it on pure curiosity – I hope I would, but I don’t know.)

    The title of this blog is “Overcoming Bias”, not “Should we overcome bias?” I’m not saying that a blog title is an argument, but the title is a good summary of my own stance on the subject. I’m not saying that “Should we overcome bias?” is an illegitimate question or an illegitimate matter for discussion, but my day-end answer is “I want to overcome bias” and then I go on to consider specific technique.

    If you agree that (2) and (3) are sufficient to give me just cause for overcoming bias, then I’m posting to the right blog. And Robin Hanson is posting to the right blog, and Carl Shulman is commenting to the right blog. (Where “right” is here to be interpreted not in terms of our bald personal opinions, but genuine rationality given our preferences.) Given that the existence and charter of this blog is thereby justified, can we get on with the project?

    Robin, I agree that the right path for me, in my current situation, may not be the right path for every human being, in their current situations. (Infants would have trouble, for example.) In the short term, I’m just here to collect technique and allies, in the service of a certain instrumental goal to which I assign overriding priority. In terms of what humanity should do with itself, and the long-term future, I confess I do think we should strive to overcome bias in ourselves, and that happy stupidity is not a satisficing long-term outcome. But even if this were not so, my current rational strategy would be the same.

    The evolutionary argument applies only to biases that are actual adaptations, and moreover, only where maximizing your hedonic or other interest happens to coincide with evolution maximizing inclusive reproductive fitness. Should we trust the clever inventor of the hedonic treadmill effect to look out for our psychological well-being? Evolution has no interest in your being happy, just in your having lots of kids. Or consider an altruist who attains the position of tribal chief; his maximum reproductive fitness may lie in abusing his power, but his personal, psychologically professed interests run directly counter to this. Moreover, the modern world is a long way from the ancestral environment. We’re long past the point where we can trust blindly to evolution. If there’s a solution, we’re going to have to think to find it.

  • Carl Shulman

    A thought about the placebo effect: in double-blind clinical trials patients KNOW that they face a chance of receiving the placebo rather than the treatment. Nevertheless, the effect still occurs. But is it 50% less than if you told patients receiving a placebo that they were receiving a scientifically validated treatment? I suspect it is not, and that placebo-type effects can be garnered as long as subjects have substantial uncertainty about the objective evidence. If so, then rationalists should sign up for more double-blind clinical trials rather than trying to convince themselves of the efficacy of homeopathy.

    One way to study this within ethical research rules would be by examining clinical trials with different proportions of placebo: some trials include only the drug being tested and placebo, while others also include benchmark treatments. How much does the placebo effect for a particular disease increase when patients know that they have a 33% chance of receiving placebo rather than a 50% one?
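    The comparison could be sketched as a toy model. This is purely illustrative: it assumes (without any evidential basis) that placebo response scales linearly with the patient’s subjective probability of having received an active treatment, and the numbers are made up.

    ```python
    # Hypothetical model: placebo response is proportional to the patient's
    # believed probability of receiving an active treatment.
    # All values are illustrative assumptions, not data from any trial.

    def expected_placebo_response(p_active_believed, max_response=1.0):
        """Placebo response under the (assumed) linear-in-belief model."""
        return max_response * p_active_believed

    # Two-arm trial (drug vs. placebo): believed P(active) = 0.5
    two_arm = expected_placebo_response(0.5)

    # Three-arm trial (drug, benchmark, placebo): believed P(active) = 2/3,
    # i.e. only a 33% chance of receiving the placebo
    three_arm = expected_placebo_response(2 / 3)

    print(f"two-arm placebo response:   {two_arm:.2f}")
    print(f"three-arm placebo response: {three_arm:.2f}")
    print(f"relative increase:          {three_arm / two_arm - 1:.0%}")
    ```

    If the linear assumption held, dropping the placebo arm from 50% to 33% would raise the placebo response by about a third; comparing real trial data against this baseline would show how far the effect departs from simple belief-proportionality.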

  • http://videogameworkout.com Glen Raphael

    As I understand it, the evidence for the existence of a real Placebo Effect is weak. After discarding the usual suspects such as regression to the mean, much of the remaining apparent effect might best be ascribed to politeness. Pain relief, in particular, is highly subjective. If you ask patients who think they have received treatment “how do you feel”, they are likely to say “a little better” just to be nice. When you look at /measurable/ outcomes, placebos don’t work. When you look at unmeasurable self-reports, you can’t be sure the response is truthful.

    Several studies and meta-studies that compared placebo to a no-treatment protocol found no significant difference in results. A 2001 NEJM study called “Is the Placebo Powerless?— An Analysis of Clinical Trials Comparing Placebo with No Treatment” concluded: “We found little evidence in general that placebos had powerful clinical effects.”

    http://content.nejm.org/cgi/ijlink?linkType=ABST&journalCode=nejm&resid=344/21/1594

  • Guy Kahane

    Glen, I’m familiar with that article. As you noticed, I’ve restricted my discussion to the effect of placebos on PAIN. Although I think there is some good empirical evidence that placebo is effective in other conditions, there is very strong scientific consensus that it is effective with pain. In fact, this is one of the central areas of investigation in contemporary pain science. Your worry that such claims about pain involve untestable ‘subjective reports’ is a bit out of date. There are plenty of objective measures of pain level, and the effect of placebo on areas of the brain associated with pain, such as the ACC, has been imaged with fMRI in many studies. For a short accessible summary of some of this work, check
    http://www.ucl.ac.uk/Pharmacology/dc-bits/placebo.pdf

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    In biology, “evolution” is *defined* as the change in the heritable characteristics of a population over time.

    Corporations pass all manner of things on to other companies – including resources, employees, business methods, intellectual property, documents, premises, computer programs, etc. We are not talking about just a few bits of analog information here – often vast quantities of digital resources are involved.

    Corporations form a population. The frequencies of instances of the above listed items in that population vary over time.

    Therefore the population of corporations evolves – in the spirit of the classical biological sense of the term.