Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • Recovering irrationalist

    How about having ‘live posts’ (recently commented) in the sidebar instead of ‘recent posts’, which dominate the front page anyway. (This is not the same as ‘recent comments’, which often has most of the comments on the same 2 posts.)

    That way, slightly older posts that are still being discussed remain visible rather than disappearing into the archive, which tends to be a death sentence.

    Also, if an older post gets an update, that will be noticeable for longer than 30 minutes!

  • Tiiba

    My suggestion is to open a forum.

    I understand the advantages of blogs. No matter how you format them to focus on the owner, they still feel like forums. But lo, here we have a blog, and if I have a subject that doesn’t fit anywhere, I can only post it once a month…

    Also, I would like someone to explain to me the advantages of mailing lists. Eliezer once told me to browse through sl4. I followed the link, opened the post at the very top, and saw that it quotes another post. Where is that post? Broken threads suck, and so do mailing lists.

    Not as applicable to the people here, but I also want to mention PDF bias. This is a bias that is highly prevalent among academics. I was surprised and pleased that Eliezer published his three books as HTML, which greatly added to my estimate of his IQ. But then I noticed that he does, in fact, have some things published as PDF. 🙁 PDFs have their uses, but I believe that anything published as a PDF should be accompanied by an HTML version.

    This bias also affects many corporate sites, along with the Flash Intro bias, the Unrequested Noise bias (the worst thing in the world), and the Stupid Cliche Smiling People bias.

  • http://www.memespace.net Sebastian Hagen

    In How to Seem (and Be) Deep, Eliezer Yudkowsky wrote:
    If I recall correctly an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.

    If true, this suggests that there might be a lot of useful knowledge to be had by learning about the basic and strongly empirically supported claims of economics.
    I would like to acquire this knowledge without intensely studying the literature of the field for several months.

    So, to those steeped in the field of economics:
    Do you agree with the claim that the public is dreadfully unfamiliar with standard economics? If so, does this cause them to make important mistakes in everyday life? If so, what’s a good way to quickly learn about the absolute basics of the field? For the case of self-study, what’s the recommended reading list for picking up Econ 101?

  • josh

    “Do you agree with the claim that the public is dreadfully unfamiliar with standard economics?”
    Yes.

    “If so, does this cause them to make important mistakes in everyday life?”
    It makes people make a lot of claims that are demonstrably wrong, but that’s more of a problem for those of us who know that they are wrong and have to decide whether to correct them.

  • http://michaelkenny.blogspot.com Mike Kenny

    Posts in which each poster describes the bias they believe is their biggest problem.

    Anonymous lists in which each Overcoming Bias writer names one big bias they perceive in each of the other writers, with the lists then displayed. The lists could be updated, and readers could vote on which perceived bias they think is most valid.

  • Tom

    I want an Overcoming Bias theme tune. I think ‘Bayesian Rhapsody’ to the tune of Queen’s Bohemian Rhapsody would be good, with Hanson on drums and Yudkowsky wailing on electric guitar. Bostrom could do the vocals.

  • Floccina

    Is a drive to escape poverty ever motivational?
    Is great inherited wealth ever de-motivational (see Paris Hilton and Denise Richie)?
    Very good athletes are given tremendous opportunities to learn; what are the results?

    What would this say about schooling for the poor?

    It seems that our biases in America say that poverty is motivational for athletics and de-motivational for school.

    Could it be that once one has lived in poverty in America, one sees that it is quite livable and so not so scary, and so the middle class fears poverty more than they should and so works harder and studies more?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Tom, there’s always this song from 1:50-2:20.

  • Recovering irrationalist

    @Tom

    @Tom (from 2:10 on)

  • http://www.saunalahti.fi/~tspro1/ Kaj Sotala

    Every now and then, I toy with constructing a personal, coherent utilitarian ethical system for myself. I’ve made some progress, but it’s still, well, a work-in-progress.

    Are there any good, comprehensive books devoted to the subject of utilitarian ethical systems, their different types, possible pitfalls and solutions?

  • http://byrneseyeview.com Byrne

    “Do you agree with the claim that the public is dreadfully unfamiliar with standard economics? If so, does this cause them to make important mistakes in everyday life?”

    The big ones I can think of are 1) thinking about total benefit rather than marginal benefit, and 2) paying attention to sunk costs.

    #1 usually shows up when people are reluctant to support the better of two bad options. #2 is why people complain about wasting two hours on a movie they should have walked out of after ten minutes.
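
    To make #2 concrete, here is a minimal sketch with invented numbers (the function name is hypothetical): a decision rule that correctly ignores sunk costs takes no argument at all for money or time already spent.

    ```python
    # Hypothetical example: deciding whether to finish a bad movie.
    # The ticket price and the ten minutes already watched are sunk,
    # so they are deliberately not parameters of the decision.

    def should_keep_watching(enjoyment_per_hour: float,
                             hours_remaining: float,
                             value_of_best_alternative: float) -> bool:
        """Compare only the marginal (future) benefit against the alternative."""
        marginal_benefit = enjoyment_per_hour * hours_remaining
        return marginal_benefit > value_of_best_alternative

    # Ten minutes in, the movie promises little; leaving is better
    # no matter what the ticket cost.
    print(should_keep_watching(enjoyment_per_hour=1.0,
                               hours_remaining=1.8,
                               value_of_best_alternative=5.0))  # False
    ```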

  • http://www.nancybuttons.com Nancy Lebovitz

    Are there circumstances or mental states which make it easier to be less biased? Is becoming less biased in a particular way a matter of a sudden realization that one has been thinking nonsense, or is retraining needed?

  • Clay Woolam

    Has there been a major discussion on futures markets for academic research? There’s the Intrade market for Adult Talkativeness, how about a market for results of semantic web research?

  • Douglas Knight

    The two errors Byrne mentions seem very similar to me. Can someone suggest a unification? If so, does it suggest other errors in economics? in other areas?

    If I had to suggest a unification: both reflect a refusal to do accounting, a refusal to list pros and cons. But I’m not sure this is psychologically accurate.

  • Doug S.

    We should have a “greatest hits” section: somewhere a newcomer to the blog can look to see the best and/or most important things that have been discussed here.

  • Daniel Yokomizo

    This may be of interest to the Bayesian crowd here: Can a spam filter play chess?
    A Bayesian spam filter was used to play chess and learn good sequences of moves from a database of games. The resulting chess engine is quite simple, but interesting nonetheless.
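
    For the curious, a minimal sketch of the general idea (not the linked project’s actual code): a spam-filter-style naive Bayes classifier treats the last few moves as “words” and each candidate next move as a “class”. All names and the context-window size here are assumptions.

    ```python
    from collections import Counter, defaultdict
    import math

    class NaiveBayesMovePicker:
        """Toy naive Bayes over move sequences, in the spirit of a spam filter."""

        def __init__(self, window: int = 4):
            self.window = window
            self.move_counts = Counter()                # class frequencies
            self.context_counts = defaultdict(Counter)  # feature counts per class

        def train(self, games):
            """games: iterable of move lists, e.g. [['e4', 'e5', 'Nf3'], ...]"""
            for moves in games:
                for i, move in enumerate(moves):
                    self.move_counts[move] += 1
                    for prior_move in moves[max(0, i - self.window):i]:
                        self.context_counts[move][prior_move] += 1

        def score(self, context, move):
            """Laplace-smoothed log P(move) + sum of log P(feature | move)."""
            vocab = len(self.move_counts) + 1
            total = sum(self.move_counts.values())
            s = math.log((self.move_counts[move] + 1) / (total + vocab))
            seen = sum(self.context_counts[move].values())
            for prior_move in context[-self.window:]:
                s += math.log((self.context_counts[move][prior_move] + 1) /
                              (seen + vocab))
            return s

        def pick(self, context, legal_moves):
            """Return the legal move the filter 'likes' most after this context."""
            return max(legal_moves, key=lambda m: self.score(context, m))

    # picker = NaiveBayesMovePicker()
    # picker.train([['e4', 'e5', 'Nf3', 'Nc6'], ['e4', 'c5', 'Nf3', 'd6']])
    # print(picker.pick(['e4', 'e5'], ['Nf3', 'h4']))  # -> 'Nf3'
    ```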

  • http://dl4.jottit.com/ Richard Hollerith

    Nancy, psychologically depressed people more accurately estimate the probabilities of success of certain ventures — non-depressed people are too optimistic. Of course, there are also significant handicaps to being depressed e.g. an unwillingness to expend your physical or mental energy.

    One of the worst cognitive biases is not having enough doubt about one’s moral rightness, and certain experiences seem to counteract that bias. For example, conquered populations, e.g. Germany and Japan after WWII, seem to contain more individuals who are contrite and spend more effort searching themselves for ethical flaws. Being part of the power elite tends to have the opposite effect.

    It seems to me that some chronic infections drastically reduce the bias, though I warn that I have yet to come across anyone else who sees that connection. Darwin, for example, claims to have had a chronic infection for almost all of his adult life, and Darwin seems to be significantly better than most scientists at not holding onto flawed concepts and hypotheses just because he originated them (though of course there is group selection). That is, he seems to spend more time subjecting his ideas to doubt than other scientists do.

    Conservative Christianity seems to produce both individuals with a delightful lack of certainty in their own moral rightness and individuals with an increased certainty in their own moral rightness who derive pleasure from feeling morally superior to their neighbors.

    It is said that you cannot convince a man of the correctness of some fact if his livelihood depends on his not understanding it. I sometimes call that the bias towards self-interest. Some careers have much less of that hazard than others. At first glance, one would think that academia has little of the hazard because of tenure and other mechanisms to ensure academic freedom, but I doubt it is possible to be e.g. an unbiased climate scientist because of the immense interest in the debate over global warming from political ideologues on both sides. Climate science is probably best left to researchers who because of e.g. personal savings need not draw a salary from the institutions that fund climate science. The philosopher Spinoza refused to accept a stipend from a prince or monarch because he believed a stipend even from an admirer would have biased his philosophical research: he made his living grinding lenses and lived frugally.

  • http://zbooks.blogspot.com Zubon

    I’m not sure how off-topic this is, but I thought readers here might be amused in a way that other populations would not:
    Eliezer at LOLsingularity

    They do not have lolSPECKS yet.

  • Nick Tarleton

    If SPECKS is worse than TORTURE (which I agree with), is the Repugnant Conclusion true? I’d love to see that discussed on OB.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Is it just me or is Overcoming Bias attracting a lot of trolls recently? Do we need an official “don’t feed the trolls” policy?

  • http://www.mnuez.blogspot.com mnuez

    I’m not sufficiently familiar with the taxonomy to know who or what positively qualifies as a troll, but having been called one myself on blogs of every sort I think I’d have to vote against it.

    My experience has generally been that anyone who doesn’t subscribe to the general opinion of most of the commenters gets branded a troll and booted off the show. Now personally, I actually only show up to comment in situations where I DISAGREE with the pervading opinion. Dog-piling on with an “I agree and you’re all SOOOO right!” comment is something I see little value in, which is why I hardly ever contribute such a comment – anywhere.

    So, to recap, I only end up commenting on blogs when I DIS-agree, and am therefore often branded a heretic – oops, I mean “troll” – and unworthy of consideration.

    From that standpoint at least, I’d have to say that trolls are good. When I dislike someone’s free speech on my own blog, I generally counter with the proverbial MORE free speech, rather than opting for the simpler ban/censor policy.

    And in a blog actually NAMED “Overcoming Bias” I’d hope that you would agree with such a policy.

    Cheers,

    mnuez
    http://www.mnuez.blogspot.com

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Mnuez, you can’t simply write a long string of insults with no supporting arguments and expect our desire to hear differing opinions will protect you.

  • http://dl4.jottit.com/ Richard Hollerith

    FWIW I agree with Robin’s call here. Intemperate and insulting.

  • Stuart Armstrong

    A few thoughts from reading about the Repugnant Conclusion (which basically states that most systems of population ethics lead to the conclusion that a huge population of people with lives barely worth living is better than a small population of very happy people; a toy calculation after this comment makes the arithmetic concrete). Key quote:

    “The suggestions in the literature on how to deal with the Repugnant Conclusion…”

    Stop right there. How can we “deal” with the Repugnant Conclusion? Either we have accepted the moral system that led to it (in which case we accept the R.C.) or we haven’t, and there is no problem. But what seems to be happening is that people have an intuitive collection of moral values that seem compatible with an intuitive understanding of population ethics, but not with the R.C. Their moral values are these intuitions, not their systematic formulation. In this context, wanting to keep population ethics but “deal” with the R.C. makes sense.

    People never start with a whole full-blown moral system; those who want such a system start with their intuitive values, and either build up, or go looking for, a system that fits. In no particular order, here are a few questions on the subject:

    1) If a formal moral system will contain situations incompatible with our moral intuitions (which is nearly certain), should we bother to build such a system? After all, our intuitions are our only initial basis for accepting and rejecting moral values, but a formal system would imply that (some of) our intuitions are wrong. So we are accepting a formal system, on the basis of values we know would be wrong if we accepted the system.

    2) If we have embraced a formal moral system, should we let our other moral intuitions wither and die? If we do so, we gain consistency; but the cost is that we have nailed ourselves to a particular formal system, while there might be other systems out there (maybe still undiscovered, maybe too complicated for the human mind) that fit better with our initial moral intuitions. Maybe our current system is inconsistent – if we have nothing but that system left, how would we decide what to do if that inconsistency was pointed out to us?

    3) If we have embraced a formal moral system, and it leads us to a morally repugnant conclusion, what should we do? Should we conclude our moral intuition is wrong, or that our formal system is incomplete? Since our choice of formal system was based on our moral intuitions, I don’t see any easy conclusion here.

    4) If we have a formal moral system, and extend it to a new framework it was not intended for – is this a good thing, or a bad thing? Should we be trying to extend our intuitions instead? There is a time dependency issue here: take a boy from a primitive New Guinea tribe (or a fundamentalist family in rural USA), who ends up living in New York for the rest of his life. If he formalises his moral intuitions before the move, he will end up with a very different system than if he formalises them after he has adjusted morally to New York life.

    5) Finally, in practice, how do people deal with these dilemmas?

    The spark that set off all these questions was reading Eliezer’s post on 3^^^3 specks of dust versus torturing a man for 50 years. I hadn’t (haven’t) fully formalised my moral system, so I was easily able to argue myself to my preferred intuitive conclusion (choose the dust specks, rather than the torture). My moral system is now clearer than it was; but this process felt very arbitrary to me – had I not read Eliezer’s post, I would probably have ended up at a different conclusion.

    Does anyone have thoughts on this issue, or links to book/websites where it is already discussed?
    Cheers.
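
    The toy calculation referred to above, under simple total utilitarianism (all numbers invented): total welfare is population times average welfare, so enough barely-positive lives out-total a few excellent ones.

    ```python
    # Simple total utilitarianism: value of a world = population * average welfare.
    happy_world   = 1_000 * 100.0       # 1,000 people, each very well off
    crowded_world = 20_000_000 * 0.01   # 20 million lives barely worth living

    print(happy_world)                  # 100000.0
    print(crowded_world)                # 200000.0
    print(crowded_world > happy_world)  # True: the "repugnant" ranking
    ```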

  • Nick Tarleton

    Zubon, that gave me an idea.

  • http://groups.yahoo.com/group/futarchy_discuss Tom Breton

    Interesting observations and questions, Stuart. I’ll make a stab at answering some of them.

    “1) If a formal moral system will contain situations incompatible with our moral intuitions (which is nearly certain), should we bother to build such a system?”

    That’s not unique to morality. Formal math systems give some results incompatible with our intuitions. ISTM most people (of those familiar with them in the first place) would say it’s worth building them.

    One could also argue that the alternative to building them necessarily means refusing to be consistent or logical. That could be considered embracing the absurdity or repugnance we are supposedly avoiding.

    “3) If we have embraced a formal moral system, and it leads us to a morally repugnant conclusion, what should we do?”

    Of course, for both moral and other formal systems, we’d start by examining the logic that derived that conclusion from the formal system. If we find that the logic is flawed, there may not be a problem at all.

    We should be a little careful not to use that as what Eliezer calls “motivated continuation”, though. As we’ve seen, when they want to avoid a conclusion, even many bright people have a way of wading into unfamiliar logic and “finding” problems that aren’t really there.

    If, after we examine the logic, the problem is still there, what then?
    Drawing again on the parallel with non-moral systems, I’d say we should give weight both to the intuitions that led us to accept its axioms in the first place and to the intuitions that make us question the conclusion.
    How much relative weight depends on how satisfactory the system is in general. If the system has produced useful results and withstood criticism well, it should be given more weight.

    We should then re-examine it from both ends.

    • Should we in fact accept the conclusion we don’t like? A seemingly repugnant or absurd conclusion may be the only alternative to even worse axioms. Also, if it followed rigorously from axioms we like, maybe it’s not as bad as our intuition first thought it was.
    • Is there a reformulation that avoids the conclusion without causing a worse problem? Careful not to short-change that condition. Comparing a new-born reformulation to a mature formal system is hazardous. All sorts of problems could be lurking and just not known yet.

    “Finally, in practice, how do people deal with these dilemmas?”

    Mostly by pretending they are being more consistent than they are.

  • http://dl4.jottit.com/ Richard Hollerith

    Stuart, fifteen years ago I came to believe that the moral environment was vastly “simpler” than I had previously believed — “simple” in the way that the laws of physics are simple. A formal system took shape, and since then I have been refining it in my spare time. When I say “formal system,” I do not mean I have reduced it to mathematics, but rather that it resembles mathematics more closely than most moral systems I know of. I rely on the formal system almost exclusively when making my most morally important decisions, which for me have so far consisted mainly of deciding what new knowledge to acquire and where I should try to contribute to technical or scientific developments. My system is far from mainstream, though.

  • http://dl4.jottit.com/ Richard Hollerith

    Tom slipped in (with a very nice comment).

  • Michael Rooney

    Tom Breton’s post describes a process that sounds an awful lot like Rawls’ notion of reflective equilibrium.

  • Stuart Armstrong

    Tom, thanks for the comment.

    Richard, was there any key insight that started your process 15 years ago?

  • Tiiba

    I have a question. Your homepage says:

    “Most of my old writing is horrifically obsolete. Essentially you should assume that anything from 2001 or earlier was written by a different person who also happens to be named “Eliezer Yudkowsky”. 2002-2003 is an iffy call.”

    Well, as far as I can tell, most of your important writing on AI is “old”. So what does this mean? What ideas have been invalidated? What replaced them? Are you secretly building a robot?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Tiiba,

    It wasn’t precise enough. And when I tried to write things to replace it, I bogged down in the slow-writer problem. “AI as a positive and negative factor in global risk” is still current, as is “KnowabilityOfFAI”.

    If I were secretly building a robot, and I told you, it wouldn’t be a secret, now would it? So I think I’ll answer “No” for this one occasion, then refuse to answer this question on all future occasions on a general policy of maintaining plausible deniability with respect to questions on which I would have a legitimate (ethical) reason for secrecy given at least one of the possible answers.

  • Unknown

    It would be nice to see a disagreement case study on the differences between Robin and Eliezer. This could involve their differences regarding agreeing to disagree, or their differing probability assignments for various possibilities, such as the success of cryonics, the event of a world-changing singularity within the next 30 or 40 years, or even the existence of God. Eliezer seems to believe the first two are fairly probable, while Robin seems to think them possible but quite improbable. Both think the last improbable, but Eliezer seems much more extreme in this regard, seemingly assigning it a probability more or less equivalent to the probability of the Teapot hypothesis or the Flying Spaghetti Monster hypothesis.

    The differing probability assignments in fact seem to be a result of their differences regarding agreeing to disagree; Robin takes into account expert opinion on cryonics and the singularity, while Eliezer does not consider this necessary. Likewise Robin takes into account the common opinion about the existence of God, by which the hypothesis differs greatly from the Flying Spaghetti Monster hypothesis, while Eliezer considers the common opinion irrelevant.

    According to this, either Robin is biased towards the opinions of others, or Eliezer is highly overconfident in many respects. It would be good for the readers of the blog to know which of these is the case, so that they could put more confidence in the one who turns out to be more trustworthy.

  • Constant

    “It would be good for the readers of the blog to know which of these is the case, so that they could put more confidence in the one who turns out to be more trustworthy.”

    That is useful only if something is being accepted on their authority, as opposed to on the strength of their arguments. That they present their arguments for examination suggests they would prefer the arguments be accepted on the merits rather than on the authority of the speaker; so even if a reader starts out depending on their authority, he is directed by that authorial wish back to the arguments themselves.

  • Unknown

    Constant, we have a great ability to persuade ourselves that we are accepting something on the strength of the arguments, when in reality we are accepting it on authority.

    For this reason, and also because the argument from authority does have somewhat more than zero force for a Bayesian, it still would seem useful for the readers of the blog to know who is more biased and in what way.
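
    A toy Bayes calculation (all numbers invented) of the point about authority: if an expert asserts H more readily when H is true than when it is false, the assertion is evidence, and its force scales with that gap.

    ```python
    def posterior(prior: float,
                  p_assert_if_true: float,
                  p_assert_if_false: float) -> float:
        """P(H | expert asserts H), by Bayes' theorem."""
        numerator = p_assert_if_true * prior
        return numerator / (numerator + p_assert_if_false * (1 - prior))

    print(posterior(0.5, 0.8, 0.2))  # 0.8 - a reliable authority shifts belief
    print(posterior(0.5, 0.5, 0.5))  # 0.5 - an uninformative one moves nothing
    ```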

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin and I both know we are destined to do this, at some point; but there are more blog posts I wish to write first.

  • Constant

    “we have a great ability to persuade ourselves that we are accepting something on the strength of the arguments, when in reality we are accepting it on authority.”

    Some of us. It is possible to learn the difference. For example, if you are in a class or career in which you must demonstrate your conclusions or fail, then you quickly learn to distinguish what you can demonstrate from what you can’t. The latter will include things that you are taking on trust, on someone’s authority. Or, if reliance on authority is part of the demonstration, then you’re likely aware of this as well. You will furthermore get a sense of the weakest points of your demonstration, assuming it is not mathematically rigorous.

    “the argument from authority does have somewhat more than zero force for a Bayesian”

    Depends on what you’re trying to glean. If your purpose is to understand a mathematical proof (say), then even if the person presenting is a known infallible and truthful being, his assertion of the conclusion does not teach you the proof of it. This blog isn’t all that different.

  • Unknown

    It’s not too difficult to try to demonstrate a point, fail, and fail to notice any failure. And if someone is very intelligent, this is sometimes even easier, because it’s easier to dismiss the opposition as stupid.

    There also may be some blog readers who don’t post their ideas. If so, no one is forcing them to demonstrate anything.

  • Constant

    “It’s not too difficult to try to demonstrate a point, fail, and fail to notice any failure.”

    Oh, you’ll notice, if you’re doing it in front of an audience of smart and critical people other than your sycophants. And if nobody at all notices, that’s a whole different problem. That’s not really your personal failure any more. It might even be a problem of the times.

    “And if someone is very intelligent, this is sometimes even easier, because it’s easier to dismiss the opposition as stupid.”

    But one does see their replies. However one may personally assess those replies, one does receive them and file them away. One therefore develops a map of the logical territory of the claim. That map is there whatever one may feel about it.

    “There also may be some blog readers who don’t post their ideas. If so, no one is forcing them to demonstrate anything.”

    Like I said, “Some of us.”

  • Hernan Bruno

    I would like to know the evolutionary explanation of why we feel envy. In economics, concepts such as fairness, reciprocity, and inequity aversion are sometimes called social preferences and have been shown to lead to seemingly non-rational behavior. Fairness and reciprocity can be understood in terms of their evolutionary value: societies in which individuals share rewards are more likely to prosper, as the risk of dying from a bad outcome is lowered. Jealousy can also be explained in terms of keeping our mating partner from leaving and having someone else’s child.

    But envy? What is the advantage of being envious? Yet people are envious (to different degrees). If you get $100, your happiness depends on whether your neighbor gets $50 or $1,000. In extreme cases, this feeling can lead to trying to destroy our neighbor’s stack of dollars. And please notice that this example is not zero-sum, so competition for resources is not a complete explanation.

    The same goes for Schadenfreude, which is a form of reverse-envy, but equally strange.

    I would like the contributors of OB to enlighten me on this topic. Thanks!