Again, here is a place to discuss Overcoming Bias topics that have not appeared in recent posts.
I’ve never seen any mention of Thomas Gilovich’s How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life on this blog. I suppose it’s an old book and quite possibly not sophisticated enough for many of the readers here… Nevertheless it was my first introduction to biases, and I found it to be an excellent read.
I’d be interested in reading people’s recommendations of similar or better books that fit the overcoming-bias theme.
Thanks for doing this for us. I’ll post as appropriate.
Why to Overcome Bias
Can any of the various altered states of consciousness, such as often occur during meditation or the use of certain psychoactive substances, teach us anything important about reality that ordinary waking consciousness cannot?
Is it important for someone who wants to “know reality” to experience these kinds of states, or can all of reality be understood strictly through objective conceptual thought? Is it a kind of bias for someone who has not explored his or her consciousness in these sorts of ways to pass judgment on their merits?
And even if altered states are uninformative about external reality, do they offer useful information about the contents of one’s own mind?
Psychedelic drugs? Sounds like a fun topic, maybe I’ll post about it. I certainly think they are valuable.
Is this a good place to remark that the position of the dashed lines gives the casual viewer a wrong impression of who posted what?
One idea, straight from Victor Niederhoffer’s Practical Speculation: the FDA is biased against Type 1 approval errors. The one flipper baby ends up on Oprah, but rejecting beneficial drugs can doom thousands, silently. What is seen and what is unseen.
Also, regarding Michael’s question: the closest things are sometimes the hardest to see, so if there is a way to gain some critical distance from one’s normal, ossified, habitual self, that would seem helpful for overcoming bias. But I don’t think it involves abandoning “objective conceptual thought”.
Anton, I don’t have the time to figure out how to reformat this blog; volunteers would be welcome.
The Listerine effect
I believe there is a bias, which I call the Listerine effect, where people assume that if something tastes bad it is good for you, and if it tastes good it is bad for you. Further, as far as I can tell, beyond getting enough vitamins and protein, people really do not know which foods are good for them. The studies are often contradictory. So I say take a one-a-day vitamin and eat what you like.
Robin, what I have in mind would take a one-line change in styles.css:
border-top: 1px dotted #999999;
change “border-top” to “border-bottom”.
Robin, Anton is correct. I looked at your stylesheet and was about to post the same thing.
What d’ya know …
I’m surprised that this never got a mention:
It is one of the most obvious examples of fudging the data to appear in the popular media in recent years, it was published in the Wall Street Journal (of all places), and it made the rounds through the blogosphere. It was topical, timely and salient.
This is something that I’d be interested in seeing discussed by either Robin or Eliezer, or someone else:
The quantum suicide experiment should allow someone reckless to establish strong anthropic evidence for the many-worlds interpretation. The subject, observing herself to be alive, can conclude that MWI is true with high probability (or the universe is infinite, or something else implying all possible events occur); but a person standing by shouldn’t change his belief in MWI one decibel, no matter what the result. Is this (the horror!) an irreconcilable disagreement between two rationalists? How can the first person have evidence that only applies to her?
Also, possibly relevant: is the Doomsday Argument non-Bayesian?
I’ve been browsing this site for a while, and I have a few questions. I apologize for the length, but this is a sort of response to everything I’ve read so far. Most of you seem to value reason, usually in terms of probabilities and the scientific method, while at the same time taking occasional politely vicious jabs at supernaturalism. The universe, according to you, is a closed system with little to no evidence of a supernatural creator. Even if such a being exists, the facts do not allow us to responsibly believe in that being’s existence. In this naturalist version of the universe, everything affects everything else in a sort of cosmological democracy, so that no one thing is above (or below) the seamless reality of the physical. We can discover “truths” or “facts” about the world through experimentally supported inference (for example, we see other human couples giving birth to human babies and infer that our own child will not have an elephant hide or a snout). However, reason can be muddied by any number of grubby biases, whether cultural, emotional, self-centered, or a horrendous, gloopy mixture of these.
The focus of this blog, overcoming bias, is to try to strain out of our thoughts the irrational causes that would rob them of validity. Marxists, for example, often explain away a thought as a product of class warfare, while Freudians are quick to point out an underlying psychological complex. Even if a man who cried wolf because his brains were addled happened to reflect the truth, and a wolf happened to be near, we wouldn’t say he was any closer to having uttered a true thought.
However, if nature is democratic, with everything interlocked with everything else, how can any thought claim to be true under this premise, since every thought is produced ultimately by the firings of irrational neurons and the irrational grouping of atoms which forms our brain? Some would say, well, we may not be able to say a thought is true in that sense, but as evolution has ground along through the eons, the brains that produced thoughts which better enabled them to survive were naturally selected. At this point we still would only be able to say that our thought-producing process sustains life, not that it can make a claim to rationality. But even this spawns the dreaded circle of reason, since we are using evolution, which we have inferred from observing the fossil record, etc., to justify inference. You could brazen this out and say, “We know we can’t justify reason and that all thought is ultimately a byproduct of irrational nature, and we’re fine with that.” But these are exactly the people who then turn around and start making all sorts of claims about the origin of species and what our duties to mankind are, as though they were talking about true things. My question is, how can naturalists hope to avoid hypocrisy while wagging their heads over other people’s biases? According to them, sentences such as “We should seek to preserve our race” merely arise from impulses that have evolved along with all our other impulses. I may follow such an impulse for a time, but if another impulse becomes stronger, there is no reason to favor the one over the other.
The only other option is to simply accept reason as rational, but this means looking for a source other than irrational nature’s processes. Although everyone here seems to consider even the mention of God as a greasy stain from a historical orgy, I don’t understand all this rational and moralistic posturing without at the very least a supernatural mind.
Tarleton, that one has been bothering me for a while.
Eric, I would recommend that you read:
If a man cries wolf because his brains are addled, and there is a wolf there by coincidence, then the statement is true but not rational. You can make occasional true statements by coincidence; rationality is required to make true statements systematically.
The Simple Truth will explain why no supernatural mind, or anything else particularly complicated, is required to make statements true.
An Intuitive Explanation of Bayesian Reasoning (the second link) will show how mere matter, with no supernatural properties, can be rational. If rationality is an intrinsically mentalistic property to you, and you don’t see how minds arise from mere matter, then of course it will seem to you that a non-supernatural account of the universe deprives all mere matter of rationality. But, if you understand rationality as a cognitive system, then there is nothing at all odd about rationality being embodied in a complex system made of simple parts.
Quantum Suicide obviously won’t work — if you expect 90% of worlds to be destroyed, you should anticipate with 90% probability that you will stop experiencing anything. If you don’t think it’s possible to anticipate not experiencing anything, what do you do if you expect 100% of worlds to be destroyed?
The Doomsday Argument has both a Bayesian version (Carter, Leslie) and a non-Bayesian version (Gott).
I’ll try to shut up and calculate. Say Alice is the victim, Bob is the observer, 4 rounds of the experiment are performed with a 1/2 chance of death on each round, and the prior probability of MWI is 1/2.
AL = a randomly chosen living Alice observes ‘Alice lives’
BL = a randomly chosen living Bob observes ‘Alice lives’
M = MWI is true, or universe is infinite, or generally all possible events occur
P(M) = 1/2
P(AL|M) = 1
P(AL|~M) = 1/16
P(M|AL) = P(AL|M)*P(M)/(P(AL|M)P(M) + P(AL|~M)P(~M)) = (1/2) / (1/2 + 1/16 * 1/2) = (1/2) / (17/32) = 16/17
P(BL|M) = 1/16
P(BL|~M) = 1/16
P(M|BL) = P(M) (BL and M are independent)
The math seems to work, when cast this way. The living Alices increase P(M), and the Bobs don’t change it. Initially I tried to use terms of the form ‘some Bob observes that Alice lives’, but that became confusing because P(someone observes X|M) = 1 for all X. Fortunately this seems unnecessary.
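The arithmetic above is easy to check mechanically. Here is a short Python sketch of the same Bayes updates (my own restatement of the numbers already given, not anyone’s endorsed model of anthropics):

```python
# Setup from the example: 4 rounds, 1/2 chance of death per round, prior P(M) = 1/2.
p_m = 0.5                  # prior probability of M (MWI, or any "all events occur" hypothesis)
p_al_given_m = 1.0         # under M, some living Alice always observes 'Alice lives'
p_al_given_not_m = 0.5**4  # single-world survival chance: (1/2)^4 = 1/16

# Bayes' rule for a randomly chosen living Alice
posterior_alice = (p_al_given_m * p_m) / (
    p_al_given_m * p_m + p_al_given_not_m * (1 - p_m)
)
print(posterior_alice)  # 16/17, about 0.941

# Bob sees 'Alice lives' with probability 1/16 whether or not M holds,
# so the likelihoods cancel and his posterior equals his prior.
p_bl_given_m = 0.5**4
p_bl_given_not_m = 0.5**4
posterior_bob = (p_bl_given_m * p_m) / (
    p_bl_given_m * p_m + p_bl_given_not_m * (1 - p_m)
)
print(posterior_bob)  # 0.5
```

So the living Alices update toward M while Bob stays put, exactly as in the hand calculation.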
The disagreement seems to come about because Bob can’t observe AL. Or can he? If Bob sees Alice alive, doesn’t he know this instance observes that Alice lives? Introduce another conditional probability: P(randomly chosen Bob observes AL|M) = 1/16. So, given M, AL is true but the probability of Bob knowing this is only 1/16, same as if ~M; so it can’t be evidence for M; so AL being true is evidence to Alice but not to Bob. Huh?
I have a feeling I’m missing something really simple.
Steven, even if you can’t anticipate not experiencing anything, P(AL|M) = 1 because all the Alices observe that Alice is alive.
I have a feeling that this paradox indicates a problem in the whole notion of ‘anticipated experience’, relating to deep misconceptions about personal identity.
“P(AL|~M) = 1/16”
Where is this coming from? Surely, if there are no living Alices, it is not the case that a randomly chosen living Alice observes not existing; rather, there is then no such thing as a “randomly chosen living Alice” and so your entire framework breaks down.
If ~M, there is a 1/16 chance that there is still a living Alice, and therefore that AL occurs. ~AL doesn’t mean “Alice observes not existing”; ~AL just means “AL doesn’t happen”. But you’re right, that is confusing.
I’ve been thinking about quantum suicide too. It intuitively feels plausible to anyone who has been around a little while: we can all think of possibly fatal accidents we avoided. Since quantum immortality is a default option, like the “we’re all doomed” default option, I think we should probably focus on the but-for option, namely: but for our best efforts to maximize our persistence odds, we’re all doomed. Of course, one could perhaps argue that an equally likely scenario is that but for our best efforts to maximize our persistence odds we’re all immortal, while if we engage in those exact efforts we’re all doomed; but I think that collapses back into the “we’re all doomed” default option. So I’d go with the positive but-for option in terms of guiding our actions.
Combine MWI with a theory of conservation of subjective conscious experience, and it becomes rational to always put desired outcomes over preservation of life. Also, it becomes irrational to devote any effort to minimizing existential risk or maximizing personal persistence odds. One could essentially free-ride off of an infinite number (or at least a large number) of other subjective conscious selves. If only we were 100% certain that this particular model was accurate. *sigh*
Also, it becomes irrational to devote any effort into minimizing existential risk or maximizing personal persistence odds.
Not really. What if existential risk is so likely that my most likely subjective continuity is in a world with no other humans? That’s no fun.
Nick, I’ll put my further replies to this on my anonymous blog, so as not to overrun overcomingbias with it.
Nick, what you’re doing feels strongly like cheating, but I can’t quite put my finger on how. If you’re allowing a 15/16 probability that self-sampling *isn’t even applied*, why not do the same thing in many-worlds?
… be a charity angel.