Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • http://byrneseyeview.com Helpfully Anonymous

    Is it ‘overcoming bias’ or ‘accepting bias’ if one times decisions to minimize biased thinking? Example: I’ve found that I have more self-control when I’m deciding what to buy in a grocery store than in deciding what to eat once I buy it. I reliably eat the least healthy food first. So is it a legitimate bias-overcoming tendency not to buy such food, or is that just admitting to a lack of self-control, and thus an acceptance of bias? Similar actions might include someone with a drinking problem never taking more than $10 with them when they go drinking, or a procrastinator unplugging the Ethernet cable.

  • Unknown Healer

    Robin wrote:

    “high status people usually ignore low status people, no matter how good or bad their points.”
    How good is this heuristic in general? When and how should high and low status people overcome or circumvent it?

  • Joshua Fox

    “Eloquence Bias”:
    1. If debater X speaks clearly and eloquently (no trickery, just good presentation of her thoughts), while debater Y gives an opposed position in a confused, badly phrased way, I am more likely to think X’s position is right. Perhaps I should always try to peer into obfuscated, fuzzy-sounding arguments to see if they might be better than straightforward, well-expressed arguments.
    2. I wonder if the scientists who are the best scientific popularizers are also the deepest thinkers. Perhaps there are obscure scientists, writing in technical jargon for an audience of their peers, without the ability or desire to write in a popular style, who are far better than the best-known representatives of their field. To take a few examples among many, Pinker, Dawkins, Hofstadter, and the late Feynman, Gould, and Sagan are among the best known at explaining their fields, but are they really the leading scientists in those fields?

  • http://sti.pooq.com Stirling Westrup

    There is much mention on this blog of Bayesian rationality, or the use of Bayesian methods in decision-making. Now, I studied Bayesian conditional probability in a statistics class at university many years ago, but my knowledge of the theory ends there. Can you recommend any good books on the subject?

    In fact, do you folks have a recommended reading list (other than this blog, of course!) for those trying to identify and overcome their own biases?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I second Westrup’s question. I have suggestions, but I would very much like to hear yours.

    My suggestions are:

    Robyn Dawes, “Rational Choice in an Uncertain World”, great intro for a popular educated audience.

    The edited volumes “Judgment Under Uncertainty”, “Heuristics and Biases”, and optionally “Choices, Values, and Frames”, in that order, for a survey of the research in heuristics and biases.

    Probability theory for complicated problems that can be solved by calculus: E.T. Jaynes, “Probability Theory: The Logic of Science”

    Probability theory and the structure of the real world exploited by tractable cognitive algorithms: Judea Pearl, “Probabilistic Reasoning in Intelligent Systems”

    Some other books I found important on my journey:

    “The Moral Animal” by Robert Wright, popular intro to ev-psych

    “The Adapted Mind”, especially “The Psychological Foundations of Culture”, by Tooby and Cosmides (less popular ev-psych)

    “Adaptation and Natural Selection” by George Williams (how to stop anthropomorphizing evolution)

    “The Tao is Silent” by Raymond Smullyan (correct action does not have to be effortful or rigidly controlled)

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Actually, on second thought, this deserves its own post. Hold on a second…

  • http://rolfnelson.blogspot.com Rolf Nelson

    Topics I would like to see:

    1. Is there empirical evidence that critical thinking, or other strategies for overcoming bias, can be learned? (Or, more to the point, is there a strategy that’s robustly effective?) The studies seem to be all over the place, and I don’t know which are correct. For example, see the chart at the bottom of <http://www.arts.monash.edu/phil/research/thinking/08peer.html>, showing that the Melbourne and Monash studies get dramatically different results for “Reason!able argument mapping.”

    2. I adopted Actively Open-Minded Thinking <http://www.upenn.edu/almanac/v42/n24/teach.html> after reading Baron’s “Thinking and Deciding”, and found it useful. For example, twice a day I think about an aspect of a current plan I have for the day, month, or lifetime, and then briefly search for alternative strategies, or for reasons why my current strategy might be wrong; on multiple occasions this has caused me to adopt new courses of action.

    However, Monash showed poor results for AOMT. In addition, since “AOMT Mania” hasn’t swept the nation in the past decade, my positive experiences are probably atypical. Is it that few people want to try to apply AOMT in their lives, or is it that the people who have tried to use AOMT didn’t find it useful?

    And, some unsolicited advice:

    1. I love the posts that are about a robust and general bias, and include practical, generalizable advice for how to partially overcome the bias.

    2. For hooking new readers: add a ‘top 10’ list of the best past posts on the sidebar. Each of these posts should be, on its own, useful, interesting, and compelling for a potential new reader.

  • http://www.leebeck.com Lee

    Related to the “eloquence bias,” might we esteem silence too little because its practitioners don’t defend it? The smart people you know are not silent, but that could just be selection bias: you know they’re smart because they aren’t silent.

  • http://www.aleph.se/ Anders Sandberg

    When we are suggesting additions, maybe a search function in the bar would also be useful?

  • Tom Breton

    Robin, some years ago we corresponded about an idea and paper of yours, futarchy.

    One of the points we seemed to disagree on was this: in the course of addressing possible problems, you predicted (1) that unclear proposals would be thinly traded, and you also predicted (2) that side interests could not prevail because interested parties could not budge the price. I felt this was a contradiction: an opaque proposal could be used to suppress general trading (by 1) and thereby allow side interests to escape the consequences of (2).

    Having given the matter some further thought in the meantime, I believe the problem might be tamed by requiring that proposals be expressed in a controlled language and by making the threshold of enactment a function of a proposal’s complexity.

    The fact that a proposal is expressed in a controlled language makes measuring its complexity more reasonable. That said, I don’t think it will be simple, because vocabulary is an issue. I have some further thoughts on that, but they can wait.

    For reference, ACE (Attempto Controlled English) is an example of a controlled language. I’m not necessarily proposing using ACE.
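
    To make the threshold idea concrete, here is a minimal sketch (the vocabulary, the complexity measure, and the threshold formula are all invented for illustration):

        # Hypothetical sketch: the enactment threshold rises with proposal complexity.
        CONTROLLED_VOCAB = {
            "if", "then", "the", "shall", "be", "tax", "rate",
            "raised", "lowered", "by", "percent", "and", "or", "not",
        }

        def complexity(proposal: str) -> int:
            # Crude proxy: word count, defined only for the controlled language.
            words = proposal.lower().split()
            if any(w not in CONTROLLED_VOCAB for w in words):
                raise ValueError("proposal is not in the controlled language")
            return len(words)

        def enactment_threshold(proposal: str, base: float = 0.50) -> float:
            # Market price a proposal must reach before it is enacted.
            return min(0.99, base + 0.01 * complexity(proposal))

        print(enactment_threshold("the tax rate shall be lowered"))  # 0.56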

  • Pete

    I dislike cats. I think a lot of them are nefarious and mean-spirited. Dogs are much, much more benign and approachable.

  • Laura

    Is it ever rational or ethical to knowingly promote false or oversimplified arguments as a means to an end (e.g., in politics or public health)? For example, exaggerating the health risks of smoking to keep children from starting. If so, in what kinds of situations is this form of false argumentation acceptable, and what are its limitations? If it’s unethical, elaborate on why.

  • Psy-Kosh

    Not sure if this is the right place to ask, but I’ve seen the overall topic come up, so may as well ask in this thread: Could someone help clear up a bit of confusion I have about the agreement theorem?

    Specifically, I’ve come up with a scenario which _seems_ to violate it… But if it’s a theorem, proven, then I’m pretty sure I’m doing something wrong. I don’t know where though, so, here it is:

    Assume a few Mad Scientists (Mad, for our purposes here, means having sufficiently bizarre utility functions that they’d do what I’m about to describe), who are nonetheless skilled Bayesians.

    They set up a “quantum suicide” experiment as a way for at least one of them to know whether or not MWI is true with reasonable confidence.

    One person (A) is the experimenter, the one who “pushes the button”, and the other (B) is the experimentee, the one who potentially dies.

    Their priors for believing in MWI or whatever are the same (but irrelevant for our purposes here; I just want to show why, post-experiment, their likelihood ratios would be different, or at least appear different to me).

    Now, let’s say that the system is set up so that a quantum event kills person B instantly with probability 1-R; that is, B survives with probability R.

    The button is pushed by person A, yada yada yada… now, let’s say person B perceives he’s still alive, fine, conscious, etc…

    P(perceiving a continued existence | MWI) = ~1, and P(perceiving a continued existence | single reality) = R

    So the likelihood ratio when considering those two hypotheses will not be 1.

    From the perspective of the experimenter, the likelihoods would both be R though, and thus the ratio = 1, right?

    And both A and B fully understand the other’s reasoning and perspective here.

    What did I do wrong here?
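
    (A minimal numeric sketch of the two perspectives, with R = 0.5 assumed and the labels as above, A experimenting on B:)

        # Sketch of the likelihoods described above; R = 0.5 is an assumed value.
        R = 0.5  # probability that B survives one trial

        # B's likelihoods, conditional on perceiving continued existence:
        p_b_given_mwi = 1.0   # ~1: some branch of B always perceives survival
        p_b_given_single = R  # in a single world, survival was a coin flip
        b_ratio = p_b_given_mwi / p_b_given_single  # 2.0, favoring MWI

        # A's likelihood of observing a living B is R either way:
        a_ratio = R / R  # 1.0, favoring neither

        print(b_ratio, a_ratio)  # the apparent disagreement asked about above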

  • Psy-Kosh

    To clarify, from A’s perspective, the probability of seeing B survive = R both given single universe and given MWI…

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Psy-Kosh, I also think that anthropic situations may end up violating Aumann.

    Here’s an even stranger question. Suppose MWI is true. Suppose you are B, the experimentee, who will discover the results of the experiment, and suppose that the likelihood ratio from your perspective reaches a googol to one. This shouldn’t take more than one afternoon of experimenting.

    So you get up out of the chair, crack your knuckles, and turn to the experimenter to say, “Guess that’s settled, then -”

    But the experimenter, as a good Bayesian, is in the corner going “Weeble-weeble-weeble” because what the hell is he supposed to think?
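
    (For the arithmetic behind “one afternoon”: each surviving round multiplies the experimentee’s likelihood ratio by 1/R. A sketch, assuming R = 1/2 per round:)

        import math

        R = 0.5  # assumed per-round survival probability
        rounds = math.log(1e100) / math.log(1 / R)
        print(math.ceil(rounds))  # 333 rounds reach a googol-to-one ratio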

  • Michael Rooney

    The term “rational” and its derivatives get tossed around a lot on this blog, but I don’t recall a clear definition of what exactly is intended by the term. More often than not, it seems to be used as a value judgment rather than as any sort of objective descriptor.

  • http://www.acceleratingfuture.com/steven steven

    I’m confident that MWI is true. I’m also confident that quantum suicide doesn’t work. If you know you’ll be vaporized in 50% of worlds, you should anticipate a lack of conscious experience with probability 50%. If you don’t think anticipating a lack of conscious experience makes sense, what do you do if you know a vacuum transition wave is coming at c (perhaps iff the googolth digit of pi is even)? Go “weeble weeble weeble”?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Rooney, by convention in the field of heuristics and biases, “rationality” in the way of belief is Bayesian probability theory, “rationality” in the way of decision is expected utility maximization. Anyone who means something other than this by “rational” should specify what it is.
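
    A toy sketch of the two conventions (all numbers below are illustrative assumptions): belief updated by Bayes’ rule, then a decision by expected-utility maximization.

        # Belief: Bayes' rule, with illustrative numbers.
        prior = 0.3                  # P(H)
        p_e_h, p_e_not_h = 0.8, 0.2  # P(E|H), P(E|~H)
        posterior = p_e_h * prior / (p_e_h * prior + p_e_not_h * (1 - prior))

        # Decision: pick the action with the highest expected utility.
        utility = {"act": {"H": 10.0, "~H": -5.0}, "wait": {"H": 0.0, "~H": 0.0}}
        def expected_utility(action: str) -> float:
            return posterior * utility[action]["H"] + (1 - posterior) * utility[action]["~H"]

        print(round(posterior, 2), max(utility, key=expected_utility))  # 0.63 act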

    If you don’t think anticipating a lack of conscious experience makes sense, what do you do if you know a vacuum transition wave is coming at c (perhaps iff the googolth digit of pi is even)? Go “weeble weeble weeble”?

    According to Greg Egan: waking up in a hospital and being told that you were just cured of the schizophrenia causing you to have such extraordinarily realistic hallucinations.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Bleh, brain fell out. For “Suppose MWI is true”, substitute “suppose quantum immortality is true”. We already know many-worlds is true.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Anders: When we are suggesting additions, maybe a search function in the bar would also be useful?

    Quick fix: Bookmark the Overcoming Bias Search Engine (powered by Google).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Psy and Eliezer, I’m confident Aumann works just fine under indexical uncertainty. If your model suggests otherwise, I doubt your model.

  • http://bobvis.blogspot.com Bob V

    I have always been under the impression that quantifiable criteria get overweighted when choosing among options in a selection problem. (If I remember right, even Warren Buffett makes this point.) Non-quantifiable criteria get ignored, relatively speaking.

    When I tried looking for research on this issue though, I didn’t find anything. Does anyone know whether this bias actually exists? Thank you!

  • http://omniorthongal.blogspot.com mtraven

    What does “violating Aumann” mean?

    Scott Aaronson has argued that since MWI plus quantum suicide would allow you to solve NP-hard problems in linear time, there must be something wrong with MWI.
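
    For concreteness, the structure of that argument looks something like this (a hypothetical sketch: quantum_suicide is a stand-in for the lethal device, and this procedure is the kind of anthropic computing the argument targets):

        import random

        # A clause is a list of (variable_index, wanted_value) literals;
        # it is satisfied if any literal matches the assignment.
        def satisfies(clauses, assignment):
            return all(any(assignment[i] == want for i, want in clause)
                       for clause in clauses)

        def quantum_suicide():
            raise SystemExit("branch terminated")  # stand-in for the device

        def anthropic_sat(clauses, n_vars):
            guess = [random.random() < 0.5 for _ in range(n_vars)]  # "quantum" coin flips
            if not satisfies(clauses, guess):
                quantum_suicide()  # observers persist only in satisfying branches
            return guess  # linear time, conditional on survival

        # Tiny instance: (x0 or x1) and (~x0 or x1); x1 = True satisfies it.
        print(anthropic_sat([[(0, True), (1, True)], [(0, False), (1, True)]], 2))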

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Bob V, try googling evaluability+preference.

    Robin, the problem isn’t indexical uncertainty, but the kind of reweighting that occurs in quantum immortality problems.

    For example, I hook myself up to a quantum suicide device and find that, rather than dying, I have won the lottery. From my perspective, the quantum immortality theory is pretty much confirmed. From the standpoint of the tiny fraction of the experimenter who lives in the same branch I do, a completely inexplicable and confusing coincidence has taken place. It doesn’t confirm quantum immortality, because the likelihood of observing this coincidence, for the experimenter, is exactly the same whether quantum immortality is true or not. It seems to me that I have gained an unshareable belief. If not, then how does Aumann resolve this?

    “I don’t believe in quantum immortality” is a reasonable response, but then what happens to your mangled-worlds hypothesis? Should we all be constantly expecting to die, every second?

  • Nick Tarleton

    Eliezer: Is there some new evidence I need to hear about? Or are you saying MWI is the only consistent and (capital-T) Technical interpretation of QM? Copenhagen sounds like nonsense, and MWI seems much more plausible than any alternative I’ve been able to understand, but are all the alternatives really that bad? (Is this just me not wanting to admit that the scientific establishment could fail as badly as it would be failing if you’re correct?)

    Bob V: What’s an example of a “non-quantifiable criterion”?

  • anonymous

    Eliezer, the results in this paper [1] would seem to suggest that “quantum reweighting” is not well defined. (HT: slashdot)

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, I don’t see why the subject and the experimenter in your scenario have different evidence. But if there is any different evidence, it is indexical, and Aumann should apply.

  • Psy-Kosh

    Eliezer: I haven’t really studied the Agreement Theorem, but if anthropic type situations can get around it, then what specific assumption(s) does the theorem rely on such that anthropic situations violate those assumptions?

    And as for the wibbleness, that was more or less my point. Though I was assuming A would fully understand the reasoning B used to get to “guess that’s settled”, and B would fully understand the reasoning A was using to reach no conclusion at all (other than that the equipment is faulty, or “wibble” 🙂), with no shift in the relative confidence of MWI vs. a single universe.

    Robin: what does indexical uncertainty mean again? IIRC it does more or less mean this sort of thing, but I just want to verify that… Anyway, if so, where am I going wrong? That was more or less my initial assumption, that I was going wrong somewhere; I’m just absolutely stumped as to where.

  • Nick Tarleton

    I tried to do the math on the Aumann-MWI paradox in another comment, in case it clears anything up. I agree that it’s confusing as hell.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Tarleton, MWI is the only credible Technical interpretation of QM. Trying to make alternate branches disappear as soon as the local version of you can no longer see them violates Occam’s Razor, unitarity, CPT symmetry, and formality. It’s entirely unnecessary, it contradicts the observed character of physical law (unitarity and CPT symmetry), and it adds an informal and mentalistic component to the theory by having other branches vanish at the exact point where “you can no longer see them”. The scientific establishment is falling down because many physicists have no quantitative grasp of how to measure the simplicity of a hypothesis, whatever their facility with physics.

    Robin, the subject and the experimenter have different evidence because, conditioning on the truth of quantum immortality, the subject expects with probability 1 to win the lottery and the experimenter expects the subject to win with p=1/100,000,000. Conditioning on the falsehood of quantum immortality, the subject expects to win with p=1e-8 and the experimenter expects to win with p=1e-8. Hence, if the subject wins, he has seen something that has a large likelihood ratio for QI over ~QI, but the experimenter (in that branch) has seen an absurdly improbable event that does not favor QI over ~QI. This violates the assumption of shared priors, but does not obviously require an origin dispute.
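
    (The likelihood ratios above, made explicit in a short sketch using the 1e-8 figure:)

        p_win = 1e-8  # the lottery-win probability from the scenario above

        # Subject, having won: P(win | QI) = 1 versus P(win | ~QI) = 1e-8.
        subject_ratio = 1.0 / p_win         # 1e8 in favor of QI

        # Experimenter, observing the win: the same 1e-8 under both hypotheses.
        experimenter_ratio = p_win / p_win  # 1.0: the coincidence favors neither

        print(subject_ratio, experimenter_ratio)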

  • http://amnap.blogspot.com

    We already know many-worlds is true.

    It is knee-jerk statements like this on questions of great controversy which cause me to have a great deal of skepticism about whether this blog can possibly live up to its name.

  • http://www.acceleratingfuture.com/steven steven

    Conditional on 1 being equal to 1000, I should expect the sun to come up with probability about 100000%. Conditional on 1 being unequal to 1000, I should expect the sun to come up with probability only about 100%. So every time the sun comes up, that’s strong evidence that 1 = 1000. 😛

    QI seems to me to violate the principle that your evidence depends on your physical state only, and not on the way you got there. Maybe that amounts to the same thing as violating shared priors, I don’t know.

    I second anonymous’s link to the Wallace paper. There’s more here and here.

  • http://www.acceleratingfuture.com/steven steven

    It is knee-jerk statements like this on questions of great controversy which cause me to have a great deal of skepticism about whether this blog can possibly live up to its name.

    Either that, or we know something you don’t. 🙂

  • Floccina

    Why not a follow-up to “cut medical spending in half”: cut school spending in half?

  • Psy-Kosh

    Eliezer: I’m a tad confused about how this situation violates the assumption of shared priors… heck, I assumed they shared priors and merely analyzed the likelihoods they computed.

    The Experimenter computes the same likelihood of observing a living Subject given both MWI and a single universe (R, specifically).

    The Subject computes a likelihood of R for continued subjective existence given a single reality, and ~1 given MWI.

    But the Experimenter computes that the Subject has ~1 likelihood for continued subjective existence given MWI, and R given a single universe.

    And the Subject computes for the Experimenter a likelihood of R for observing the Subject survive, given both MWI and a single universe.

    And each knows that the other knows that the other knows, etc… i.e., I’m pretty sure the likelihoods are official Common Knowledge.

    So shared priors, and the relevant likelihoods are Common Knowledge… so something is screwy here, it looks like…

    (And yeah, MWI and a single universe aren’t the only two options… there’s also the “super MWI” that goes beyond QM, i.e., Moravec/Tegmark/Egan-type cosmology… but I’m not throwing those into the mix at the moment, though one is free to replace MWI with “MWI or ‘super MWI’ or or or…”)

  • http://www.acceleratingfuture.com/steven steven

    Philosopher David Papineau has argued against quantum immortality here and here.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Matthew C, is that you? I thought you weren’t going to visit this blog anymore.

  • Gray Area

    “MWI is the only credible Technical interpretation of QM.”

    These sorts of statements are dangerous if you are interested in what’s actually the case, since you leave yourself open to a failure of imagination.

  • http://michaelkenny.blogspot.com Mike Kenny

    One thing I just thought of (I’m not sure whether it exists, has been handled here, or is worth handling): might it be useful to have a table of statistics reflecting how confident an average person should be about certain common experiences? For example, “On average, a person telling you something is true is likely to be right 60 percent of the time,” etc. Something along these lines might be a great practical tool for overcoming bias. I can imagine carrying a little card with some of the particularly useful stats in my pocket.

  • Nick Tarleton

    How about “The Proper Use of Arrogance”, to follow up on “The Proper Use of Humility”?

  • Recovering irrationalist

    We already know many-worlds is true

    Even if no credible Technical interpretation of QM currently exists other than MWI, that doesn’t prove MWI.

  • Tom Breton

    Follow-up to the futarchy post: Robin and I exchanged a few more emails. It was a frustrating conversation, and I had thought better of Robin. For the curious, a summary and transcript of the emails may be found here.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Tom,

    Unless other market traders believe Alice possesses a magic wand, they will simply bid down her encrypted proposals, confident that, if whatever-it-is is implemented, it will not increase national welfare.

    If lots of people submit encrypted proposals that actually increase national welfare, other market traders will stop betting against them, and the encrypted proposals will be implemented, and national welfare will go up.

    Bear in mind that not everyone has time to explain everything. Such is life.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Tom, it is rude, if not more, to publish private emails. You say that your concerns cannot be expressed in game-theoretic terms, and I say this suggests you don’t understand what information means. As Eliezer notes, I have devoted as much time to you as seems affordable. Note that you offer no other signals of competence in related areas which might tempt me to devote more time to you. Your resume, for example, lists no education or employment whatsoever.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Anders: When we are suggesting additions, maybe a search function in the bar would also be useful?

    Done.

  • bw

    It is extraordinary that people who even slightly disagree with you have their posts deleted. I have never seen anything like this.

  • http://www.mccaughan.org.uk/g/ g

    I’ve seen plenty of people disagree with plenty of other people (including, e.g., Robin and Eliezer) and not have their posts deleted. If yours have been, perhaps there’s some other reason besides disagreement?

    (For what it’s worth, I don’t think anything should be deleted other than spam and potentially illegal material; if someone’s comments are useless enough that deleting them might be worth while, better to warn them and then ban them, rather than actually falsifying the record, as it were. Also for what it’s worth, I don’t recall your comments being such as to merit either treatment.)

  • http://www.mccaughan.org.uk/g/ g

    Er, that’s “then ban them if they don’t improve or stop”, of course.

  • Recovering irrationalist

    My impression from outside this blog is Eliezer actually feels more comfortable with reasonably polite disagreement than agreement.

  • bw

    One post in a 71-post thread (Pascal’s Mugging) is over-posting? As you wish, not that big a deal.

  • Doug S.

    We should have a “greatest hits” collection on the main page for new readers to look at, so they don’t have to filter through a year or so of archived posts to find the best stuff. Any suggestions for such a list?

  • Frank Hirsch

    Randomize the list using a quantum source. A few worlds are bound to come up with an optimal selection & reading order… =)
