Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • anonymous

    I’d like a feature that allows me to search archived posts by author’s name.

  • http://www.mccaughan.org.uk/g/ g

    I have a question for Eliezer, purely out of curiosity: Could you give us an idea of the mix of things you do at SIAI? Traditionally, AI research was all about the question “how can we make machines intelligent?”; you’ve been much concerned with the formerly neglected question “if we’re going to make machines intelligent, how can we make it not be a disaster if we succeed”; are you in fact entirely focused on the latter question or do you also spend any time on design or implementation aiming at the former?

    More on-topic: It seems to me that the fact that we have limited cognitive resources gets neglected somewhat here. Let us agree (if only for the sake of argument, though I think it’s more or less true) that an ideal agent with unlimited time and powers of concentration would arrive at all its beliefs by something very like Bayesian updating starting with something like a minimum-description-length prior, and at all its decisions by something very like maximization of expected utility. We, of course, are very far from being ideal agents. We simply *couldn’t* proceed as those ideal agents would, and there’s no reason to think that approaching what they’d do as closely as possible is our best strategy. Our evolutionary (and to some extent cultural) history has provided us with a set of useful heuristics, which lead to a lot of the cognitive biases discussed here. Evolution is slow and imperfectly reliable and doesn’t have our best interests at heart, so there’s no reason to think that those heuristics are close to optimal either. So how should we choose how to think?

    An ideal agent with unlimited time and powers of concentration, choosing how a limited agent like me should think, would presumably do it by maximizing expected utility on the basis of beliefs obtained by Bayesian updating of a minimum-description-length prior, or something like that. But we’ve been here before; we aren’t such ideal agents, so we have to use heuristics to select our heuristics. And so on, recursively.

    It seems like the best we can do is a kind of iterative process: start with whatever way of thinking we have, try to work out what way of thinking is best (on the basis of what we have, which is the best we can do), try to get ourselves thinking that way, and repeat. We might hope, at least, that this process converges (in the long run, longer than we really have, but never mind) to a fixed point, a consistent way of thinking. But:

    Suppose I have a consistent set of thinking patterns, in the sense that if I use them to design an “optimal” set of thinking patterns for myself I end up with the same ones I actually have. Does that guarantee anything like practical optimality? Nope. (Someone told a lovely joke here a little while back: on a distant planet, we encounter aliens who adopt reverse induction, expecting the future to be unlike the past, and accordingly have wretched lives. Asked why they continue to do this, they say “well, it’s never worked for us so far …”.)

    I adopt, or at least try to, an extremely simple heuristic when deciding how to think: I ask “do I have evidence that thinking this way works well?”. A thinking process that on the face of it doesn’t have much to do with rationality — trusting one’s hunches in some field, say — might pass this test; something very rational might fail, if in practice it runs up against my cognitive limitations.

    But I’m at the mercy of my prior criteria of evidence. Someone who has decided that “true” means “consistent with the Bible, as interpreted by my church” may find that fundamentalist thinking produces answers that consistently check out, and that observation and reason don’t. Vicious circle. (Even without a Happy Death Spiral, though those are always a danger too.) I’d like to believe that every way of thinking that’s as wrong as fundamentalism is unstable under the iteration I’ve described, that a serious attempt to arrive at the truth will always break out in the end and land up with something that works better, but it’s far from obvious that that’s true.

    Maybe there’s nothing to say about this beyond “yup, you can never know you’re doing things right, so the best you can do is to do the best you can do, and the assumptions you need to justify rationality and empiricism seem pretty modest”. (Which is also pretty much what I’d say to the venerable problem of induction.) But maybe someone has a more satisfying answer?

  • http://www.mccaughan.org.uk/g/ g

    Oh, and I agree with anonymous above. There’s already a list of contributors; a link next to each name that provides a filtered set of posts would be a win. (One way to do this using only the existing machinery, though I suppose it wouldn’t be guaranteed to work reliably: stick something like “ob_by_Robin_Hanson” on each post by Robin Hanson, etc., and make those links use the search facility.)

  • burger flipper

    It also appears that archives only go back one year; what gives? I am with anon and g. I’m pretty new to this joint and I’d like to start with Eli’s first post and move forward. I’d also like to hit some of the early standard biases posts.

    Any help would be appreciated.
    This blog may be winding down for you, but even if it does run its course as a dialog, it should remain as a document.

  • AO

    Robin,

    If prediction markets are accurate, then why do different futures markets often move in completely different directions in response to information in the short run? Perfect example: today Intrade has Hillary down a sharp 2.1 points to win the Democratic nomination, but NewsFutures has her up 4. That’s a sharp divergence; what gives? See the link for a summary of today’s action:
    http://specials.slate.com/futures/2008/democratic-presidential-nominee/

  • http://www.rossparker.com Ross Parker

    It would aid the accessibility of this blog to have a glossary for some terms: ‘superhero bias’, ‘halo effect’ etc. Yes, you have posts that explain them fully. But a glossary would be handy too.

  • http://www.mccaughan.org.uk/g/ g

    burger flipper, the archives go back to the start of OB. I don’t see any reason to think OB is “winding down”; Robin H has cut his posting back a bit but is still active, and Eliezer continues to post an enormous amount. (Not all of which is exactly about “overcoming bias”, but it’s interesting anyway.)

    AO, I think your question can be rephrased as a statement of fact: if two prediction markets make different predictions, then obviously that puts an upper bound on how accurate they can both be. (Unfortunately it’s difficult to tell how the inaccuracy is distributed between the two.) It might be interesting to find as many propositions as possible covered by multiple prediction markets and study the extent of divergence and whether it correlates with anything interesting about the propositions.

  • http://www.mccaughan.org.uk/g/ g

    Ross, the search facility does a pretty decent job of providing a glossary. Stick “superhero bias” or “halo effect” (with the quotes) into it, and cast your eye down the results looking for either the original posting or one that links to it.

    Perhaps the search box should be more prominently placed.

  • milieu

    Hi,
    I am a newbie to ‘overcoming bias’ and find it a great and extremely thought-provoking blog; I’ve liked quite a few of the posts. But I have a question (which might look a bit dumb) that has been bothering me: isn’t the constant attempt at ‘overcoming bias’ itself a bias? A bias can creep in if we are always trying to remove bias and finding bias becomes a bigger priority, so we might give heavier weight to small biases and reject a solution because we found some bias in it. I just wanted to say that this might be a criticism of too much bias-finding behavior.

    Thanks

  • Silas

    Recently two peer-reviewed studies came out:

    -One found that women dress more attractively when more fertile, and explained this through evolutionary psychology.

    -One found that women walk *less* attractively when more fertile, and explained this through evolutionary psychology.

    (I can post the links later today.)

    I was interested in Eliezer_Yudkowsky et al’s reaction to this in light of “The strength of a theory is in what it can’t explain” and its implications for the current state of research.

    And I’d like to echo g’s request to hear about Eliezer_Yudkowsky’s AI progress.

  • Mason

    G & AO, the movement in opposite direcetions doesn’t mean the bets are becoming more inaccurate. If one market had Hillary to high, and the other too low one would expect to see moves like this. We would also expect the moves toward reality over time, so yes, movement does indicate current inaccuracy, but post movement positions should be more accurate.

    I don’t think anyone ever said they were perfect, only better than all others.

    What I’d like to see: Robin telling us why he hasn’t started a decision market of his own. There was talk somewhere else (MR, I think) about economists not being good businessmen; is this the case? Why haven’t you bet on DMs being a valuable, viable product? It seems to me that providing businesses with good decisions would be very lucrative.

  • Mason

    “I don’t think anyone ever said they were perfect, only better than all others.”

    they = Decision Markets

  • Brandon Reinhart

    John Bolton mentions the bias in government toward the most recent information:

    http://www.washingtonpost.com/wp-dyn/content/article/2007/12/05/AR2007120502234.html?hpid=opinionsbox1

    “Fourth, the NIE suffers from a common problem in government: the overvaluation of the most recent piece of data. In the bureaucracy, where access to information is a source of rank and prestige, ramming home policy changes with the latest hot tidbit is commonplace, and very deleterious. It is a rare piece of intelligence that is so important it can conclusively or even significantly alter the body of already known information. Yet the bias toward the new appears to have exerted a disproportionate effect on intelligence analysis.”

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    All: Yes, I should write surveys of previous work… just remember, I’m not the only one who can do that! All these recent long posts have been consuming time to the point where I don’t have the energy left to write surveys or much of anything else.

    g, regarding my AI work: Virtually all really useful Friendly AI insights are Artificial General Intelligence insights. It’s not so much that I’m developing a bolt-on Friendly AI module, but rather going in search of an AGI theory that can obey the strict requirements of FAI. This has turned out to be a difficult but healthy experience, and I would say this even if I were only interested in understanding the mind rather than saving the world.

    Silas, that’s an interesting problem in ev-psych. Men usually give women more attention than the women want, but women also compete to attract the long-term resources of high-status males and the short-term sexual favors of males with good genes. Women also try to appear trustworthy – there’s a difference between dressing/acting sexy and dressing/acting beautiful. The Madonna/whore dichotomy, I believe it’s called; unfair but very widespread. In a case like this, I’d expect women, during their fertile periods, to exhibit increased competition for men with good genes, but not necessarily for men with long-term resources. So they should tilt toward more sexy rather than more beautiful / high-status. I don’t know how the studies measured “attractiveness” in walks and dress, but intuitively I’d expect walking to be attractive as in sexy, and dressing to be attractive as in high-status. So the studies you just described have pretty much the opposite result of what I think I’d expect, unless they defined “attractiveness” differently from above. But this kind of reasoning, in ev-psych, is always tricky.

  • http://www.mccaughan.org.uk/g/ g

    Mason, I didn’t say that diverging changes mean a reduction in accuracy. I said that divergent predictions mean a lower bound on inaccuracy.

    Silas, I wasn’t asking about Eliezer’s progress, I was asking what things he works on. (In particular, I’m not suggesting that he’s failing or slacking if he isn’t actively implementing AI.)

    Eliezer, sure, FAI insights are useful AI insights, and I certainly wasn’t suggesting that FAI would be a bolt-on module. (That would be like trying to make a piece of software not have security holes by bolting on a module; that kind of thing scarcely ever works.) But solving FAI-specific problems is (at least so it seems to me) quite a different business from solving traditional AI problems — more like philosophy and less like software development — and I was curious about how your work is spread across that spectrum. Sounds like it’s mostly at the philosophical end.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    AO, my main accuracy claim would be “no less and often more accurate than other co-existing institutions with comparable resources.”

    Mason, I face substantial and obvious legal and economic barriers.

  • AO

    Robin,

    Bryan Caplan wants you and Tyler Cowen to debate:

    “Who wouldn’t want to see Tyler Cowen publicly debate Robin Hanson? Well, aside from the masses? I think they’d both be willing, if they could only pinpoint a good topic”

    I think a lot of people would love to see this. Three questions:

    1. Would you accept the challenge?

    2. What topic would you like to debate him on?

    3. Would you win?

  • http://websites.cybersoup.com/eblincow/ Eric Blincow

    I think it wouldn’t be a bad idea to start marketing some OB paraphernalia, e.g. a T-shirt with Eliezer’s face on it. It could have some clever slogan taken from one of his posts on the back. This would also be a great way to increase OB’s public visibility: if everyone is wearing a “Kiss me, I’m Bayesian” hoody as the hottest autumn fashion, we would all have a lot more opportunities to explain these concepts at dinner parties. “Who is that on your shirt?”, etc.

  • Nick Tarleton

    some clever slogan taken from one of his posts

    Like this one? 🙂

    But, I would definitely buy such a thing.

  • Tiiba

    “3. Would you win?”

    Win what?

    I think that treating debate as a contest is the mind-killer. If you treat it as a search for truth, everyone wins.

  • anon

    1. Both Eliezer and Robin have mentioned that they are bothered by an apparent bias toward skepticism in Wikipedia entries. I also see a tendency towards skepticism, but the tendency contributes to my respect for Wikipedia. I would be bothered more by consensus, especially in controversial topics.

    2. Extending Ross Parker’s comment: how about starting a moderated but Wikipedia-like project to create a taxonomy of bias, and an associated encyclopedia of bias that provides ongoing examples and analysis of bias in the world?

  • steven

    Not sure this was useful, but anyway:

    http://del.icio.us/fhtagn/YudkowskyOnBias?setcount=100

  • Pete

    Might be worth having Eliezer consider his own blog. While his posts are interesting and thought-provoking, let’s just say “there’s a lot of wind in those sails”… (;-))

    On a blog with upwards of 40-50 members, it would be great to see more diversity. This might be surprising, but I am mostly interested in the topic of… “overcoming bias” (but will keep reading Eliezer when time permits… though it is pretty tough when I’ve lost the thread and even the post meant to describe the thread (fake, fake) gets too obscure to regain it).

  • AO

    Tiiba,

    Someone will make the better overall argument, or they will tie, or it will be unclear; that much is guaranteed. Which is why I ask ‘Would you win?’.

    There will also be smaller point-by-point winners along the way:

    http://www.overcomingbias.com/2007/10/precious-silenc.html

    There is no guarantee either debater will learn anything, and so either may end up no nearer the truth, having “won” no edification.

  • Silas

    Eliezer_Yudkowsky: As promised, here are the links:

    Women dress more attractively when fertile:
    Slashdot discussion with links to New Scientist and mainstream press

    (“Attractive” := men shown photos believed women were trying to look attractive)

    Women walk less attractively when fertile:
    Slate post with links to Crooked Timber and mainstream press account

    (Attractive := “wide hip movements”. Note: scientists explain by saying women want to avoid sexual assault when fertile — the “bad boy” genes they supposedly want.)

    Oh, and by “tricky” you of course mean “unscientific”, right?

    You’re not holding back to avoid giving ammunition to creationists, are you? :-/

    (Btw, I was going to post six links, but I got flagged as spam, so I just posted a link that contains links to the others.)

  • Z. M. Davis

    I have a question for Eliezer (sort of touched on already in g’s comments on rationality with limited cognitive powers, but I’ll go on anyway): is a good rationalist supposed to be able to apply probability theory in real-life situations not involving things like cancer tests or baskets of toy eggs? Suppose I think there’s going to be a bus at ten o’clock. I arrive at the bus stop exactly on time according to my watch and wait for ten minutes, but the bus does not come. I want to know why. Should I hypothetically be able to give values for P(no bus from 10 to 10:10), P(bus is late), P(bus was early), P(watch set incorrectly), P(misremembered schedule), P(no bus from 10 to 10:10 | misremembered schedule), and the like, so I can apply Bayes’s theorem? Or am I totally missing the point?

  • Anonymous

    Davis, I’d say you’re missing the point. The idea is that when you update your mind in a perfectly normal way, you’re approximating Bayes’s Theorem whether you know it or not. You should only start making up “probabilities” for things if you think that the numbers you make up are going to be smarter than the unspoken feelings-of-likeliness that your mind uses as the native representation of probability. If you can get a probability of 0.77 in some calibrated way, great, but otherwise pulling “seventy-seven percent” out of thin air won’t necessarily help you.

  • Z. M. Davis

    Commenter at 7 December 2:42 AM: Of course, that makes sense. My concern, though, is that when I update my mind in a perfectly normal way, I might be doing it wrong. When I say, “Eh, the bus is probably late,” how can I tell that I’m not making the same sort of mistake as all those doctors who supposed that a positive mammography means there’s a ~.75 probability that the patient has cancer, when the right answer was .078? I can try to keep in mind principles like conservation of expected evidence, frequent updating, &c., and that will probably help me somewhat. But I’m wondering: if I can’t actually use the equation, then what’s this Bayescraft I keep hearing about?
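
    A minimal sketch of that calculation (in Python), using the textbook numbers the mammography example is usually stated with (a 1% base rate, 80% sensitivity, and a 9.6% false-positive rate); those figures are assumptions here, not something taken from this thread:

        # Bayes' theorem: P(cancer | positive) =
        #   P(positive | cancer) * P(cancer) / P(positive)
        p_cancer = 0.01              # prior base rate (assumed)
        p_pos_given_cancer = 0.80    # test sensitivity (assumed)
        p_pos_given_healthy = 0.096  # false-positive rate (assumed)

        p_pos = (p_pos_given_cancer * p_cancer
                 + p_pos_given_healthy * (1 - p_cancer))
        p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
        print(round(p_cancer_given_pos, 3))  # 0.078, not ~0.75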

  • http://www.cmp.uea.ac.uk/~jrk Richard Kennaway

    Two basic claims about Bayesian reasoning are the foundation of OB:
        1. It is the best way to reason in the presence of probabilistic data.
        2. We are bad at it, in many different ways.
    But I see a third one from time to time, and not just in comments such as the anonymous reply to Davis above:
        3. It is how our neural machinery actually works.
    I have not been able to craft a search string to unearth an OB article claiming (3), so I might be mistaken. But if someone does assert (3), what is their evidence, and how do they reconcile it with (2)?

  • J. Hill

    Davis, as for the bus problem (and those like it): Bayes would help you gauge the true probability of the bus showing up at a certain time given probabilistic information. However, your brain doesn’t care, so long as it can roughly guess that the probability of the bus showing up is above .4 (I pulled that number out of nothing because it isn’t important and I don’t know the real one). That’s because if the bus has come and gone there is nothing you can do, but if it is late, waiting a few minutes will help you.

    The brain does that even to unreasonable odds in some situations because the benefit of making sure is greater than the risk of being wrong. With regard to the bus, if you wait 10 more minutes to be sure you lose 10 minutes if the bus doesn’t show. However, if it does show you make it to work.

    Also, to Richard: coincidentally, this is how I would reconcile 2 & 3. The brain on some level may know it is a long shot, but the risk is worth it. Personally I like to call this phenomenon “hope”. We will go to extreme measures and attempt to surmount incredibly poor odds in order to attain something better. I would say that’s a large part of why we are as advanced as we are.

    Numerous studies (sorry, I don’t have links, but I can try to find some if asked) have shown animals to be better than humans at certain prediction problems. In one such study I remember, chimps and humans sat in front of a screen with a line in the middle. 80% of the time a dot flashed on the left, 20% on the right, and they were rewarded for predicting which side the dot would be on. The chimps hit left every time and were guaranteed 80%. Humans tried to spread it 80/20, which resulted in lower scores (around the 60s). But the reason this is helpful is that the human group had a probability greater than zero of getting 100%, while the chimps’ was zero. In the long run the possibility of perfection outweighs the likelihood of predicting incorrectly.

  • Z. M. Davis

    “[…] the benefit of making sure is greater than the risk of being wrong. […] In the long run the possibility of perfection outweighs the likelihood of predicting incorrectly.”

    J., if the benefit of making sure outweighs the risk of being wrong, I’ll take that into account when deciding what to do. (That includes deciding whether or not to gather more evidence.) But when deciding what to believe, wouldn’t you rather maximize accuracy?

  • http://www.mccaughan.org.uk/g/ g

    J Hill, if the dot-flashing was random, with each dot chosen independently, then the chimps were more likely to get 100% than the humans. (Easy way to see this: if the dots are all independently random then P(agree) for a single dot is 0.8 for the chimps and 0.8^2 + 0.2^2 = 0.68 for the humans, and *those* are all independent too.)
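
    A quick sketch of that comparison (my own illustration, assuming independent random dots and a matcher who guesses left 80% of the time, independently of the dot):

        # Chance of a perfect run over n trials:
        # "maximize" = always guess the 80% side, per-trial agreement 0.8;
        # "match"    = guess 80/20 at random, per-trial agreement
        #              0.8*0.8 + 0.2*0.2 = 0.68.
        for n in (1, 5, 10, 20):
            p_maximize = 0.8 ** n
            p_match = 0.68 ** n
            print(n, round(p_maximize, 4), round(p_match, 4))
        # Maximizing beats matching for every n, so the chimps were the
        # group more likely to score 100%.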

    I wouldn’t assert Richard Kennaway’s #3, but I would assert something like “in ordinary situations it somewhat resembles what the brain actually does”. I’m not sure why it should matter much whether that’s true, though.

    Z M Davis, since your introspection (like everyone else’s) is imperfect, you won’t necessarily get better results by quantifying your expectations and doing an explicit Bayesian update. (If you have a big enough pile of data concerning a small enough number of propositions, doing that probably would be a win.) But, whatever your neural machinery is doing, at any given time there’s some (perhaps vague and fuzzy) answer to questions like “how likely do you think it is that you misremembered the bus schedule?”, and ideally you’d update your beliefs in a manner similar to doing Bayesian updates on those (implicit) answers, and in cases where people demonstrably do something far from that (as revealed by work like Tversky and Kahneman’s) it’s interesting to ask whether there are things we can do to make our mental processes do something nearer to that.

    (But, note, “nearer” is a vague and fuzzy term too, and some things we might try to do to get “nearer” to our cognitive ideal might be counterproductive. For instance, suppose you systematically overestimate the probabilities of certain kinds of disaster, and suppose you discover that in a particular family of situations you can compute those probabilities exactly. If you “patch” your estimation by doing the computations when they’re applicable and otherwise doing exactly what you did before, then you’ll steer yourself towards situations in which you can do the computations (because you get lower disaster probabilities there) even when that actually makes things worse. I don’t know to what extent this sort of thing happens to real people, but it can certainly happen to computer chess programs :-), and it seems somewhat reminiscent of the overoptimizing-for-measurable-criteria that happens in some bureaucracies.)

  • J. Hill

    Davis, I can agree on that. The problem is the human brain has issues deciding which is which so it generalizes, often too much. You have to consciously force it a lot of the time.

    Also, sometimes the two cannot be separated. Suppose all the evidence shows a .99 probability that there is nothing you can do to prevent the sun from going supernova in 10 years. You shouldn’t believe that, because it’s harder to work towards the .01 where you live if you believe you can’t. “I think I can”, or better “I know I can”, will make you work harder. Hopelessness is yet another mind-killer. The brain is slanted towards believing in the best possible outcome regardless of the odds. It causes us to work harder towards that outcome, and our intervention is often what makes the probability shift. That isn’t to say that this is true of all beliefs; in a lot of cases the most accurate belief is the best.

    Maximizing accuracy is what Bayesian thought is all about: everything is evidence. Back to the bus problem: if you notice that no one else is at the bus stop, that would push toward the conclusion that you missed it. How much it pushes depends on whether there is usually someone catching the bus at that time.

  • J. Hill

    g,

    If I remember correctly it was set so that the end result would be an 80/20 split and random within that constraint. I may have misinterpreted the study, and if it was just an 80% chance to be on the left and randomness was allowed to run its course then the chimps would win hands down.

  • George Weinberg

    Interesting historical article in this month’s Physics Today, “The Copernican Myths”. Among other things, it points out that Copernicus’ system at the time wasn’t much less cumbersome or more accurate than Ptolemy’s. The problem wasn’t just that Copernicus didn’t conceive of elliptical orbits; there was also a lot of bad data before Brahe.

    Money quote for OB: “At the start, the new theory rarely gives convincingly better results than its predecessor. What usually happens is that it has some appeal, often aesthetic, that attracts others to work within the new model.”

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Robin, could you e-mail me my unpublished comment so that discussion could shift to my blog?

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Sorry, I should have posted that in the baby selling thread.

  • Psy-Kosh

    Something I’ve been kind of thinking about:

    (This post is meant as a combination of “is this possible? what’s the ethics of this? and if we should, how can we go about doing this?”)

    Biases, in the sense of these bugaboos in our thinking process, could be viewed as roughly analogous to security holes in a system, right?

    Well, looking at it that way has gotten me thinking. Could we somehow engineer some sort of memetic virus, carefully designed to sneak past the defenses and bugs by using the biases and various other bugaboos of human thought; not exactly a full patch/hack, but something that can reach in and help weaken the biases? At least something that helps someone actually look at their own thoughts and pause, so they have a chance to see the biases operating in their own mind before the automatic “no, my mind is fine” or “sure, but whatever conclusion I’ve come to is still correct” kicks in? Somehow sneak in and delay that just long enough to give someone a chance to actually see some of the bugs, to go “wait, I’ve been rationalizing instead of being rational” or some such?

    Maybe this idea is silly, but if there is all this info on these bugs and holes, maybe we can use them to at least partly patch ourselves and others, at least temporarily? Enough to give people a chance?

  • Paul Gowder

    Whatever happened to the Bay Area Rationalists meetup E.Y. tossed out a few weeks ago?

  • Nastunya

    This isn’t a suggestion of a topic but a comment about the posting policy on this board:

    Robin, you’ve recently unpublished one of TGGP’s comments (I can’t remember in which thread), citing its excessive length and, I presume, taking issue with a digression on which it embarked. I didn’t manage to catch that comment while it was up, but I would like to have. Judging from TGGP’s comments (the recent one in the “When None Dare Urge Restraint” thread), he appears to bundle many of his thoughts in one long comment, as I presume he might have done in the deleted comment which I never got to see.

    While I agree with you that we should try not to digress from the main aims of this blog too much and attempt to be as brief as possible, I’ve noticed another posting policy that conflicts with this one: commenters are also discouraged from posting so often that their name appears more than one or two times in the Recent Comments section. I realized this after an emailed discouragement from one of the admins when recently my name happened to briefly be up there as four of the ten recent commenters.

    The reason my name was up there so much was because I did the opposite of what TGGP seems to do: I posted my initial thoughts in one comment and then posted some more thoughts in a few subsequent comments, for which I apologized in the email to the admin and promised to try to bundle my thoughts in the future to avoid such happenings.

    I think you can see my point: a posting policy can either discourage comment-bundling or multiple successive after-thought-type comments, but certainly not both.

    I don’t mean to be difficult but I would appreciate an updated re-articulation of the posting policy so that it avoids this conflict.

  • Eliezer Yudkowsky

    Hey, Paul – I located what looks like a good restaurant, in Millbrae next to the BART/Caltrain station, and my current thought is to meet up in mid-January after the holiday crush. More on this tomorrow.

    Nastunya, if you have that much to say, start your own blog and link there.

  • Nastunya

    Appreciate the dubiously sincere encouragement though I certainly don’t have that much to say. Part of my complaint was the missing out on others’ comments through the admins’ somewhat creepy (though very, very rare) editorial action.

  • http://entitledtoanopinion.wordpress.com/ TGGP

    Nastunya, I have reposted my comment from the “Baby Selling” thread here. You are right that I prefer responding to many different posts with one single post of my own. Sometimes typepad acts up when posts are attempted and minimizing the number of times a post is attempted minimizes the probability of that happening.

    Since I’ve got a blog of my own and Robin was willing to e-mail me my post back, I don’t have a problem with the editorial policy. Setting up a blog of your own is easy and I encourage others to do it.

  • Nastunya

    Tiiba, “treating debate as a contest is the mind-killer”: yes, exactly!

    AO, the search for “truth” and the zeal to “win” debates really are in great tension. A terrific way to determine the legitimacy of a position is to throw as many forceful challenges at it as possible and see how it holds up. One way to do this is to get two bright, informed people to agree to defend opposite sides in a debate and fight it out until one “wins.” The problem is, when two people are talking in that kind of setup and a good, useful challenge occurs to the defender of an idea, he or she will have an incentive to withhold it in order to “win” the debate and — I’m sure you see where I’m going with this — lose the overall battle, the battle for truth or whathaveyou.

    That’s why it’s harmful to demand that in debates points of view and positions be encapsulated wholly and separately within each of the interlocutors. That kind of constraint cripples the whole enterprise.

    My favorite guideline for improving yourself as an interlocutor is to develop a willingness, an eagerness even, to help your opponent (“opponent”?) “beat” you if you have an insight into how he or she can best do it.

  • http://rolfnelson.blogspot.com/ Rolf Nelson

    Any chance of a post of (or a link to) a practical ‘Newbie Guide to the Prediction Market Scene’? The reason for asking is that I’d like to start participating in a ‘prediction market’ later this month, but don’t yet know where to start.

  • Doug S.

    I have a question regarding causality, statistical inference, and confounding factors:

    Is it reasonable to say that cigarette lighters cause cancer?

    (If you know of a formal mathematical model/definition of causation, what answer does that formalization give?)

  • steven

    Here’s a riddle that’s been bugging me. If I understand correctly, economists have some different methods they use to calculate how much people value a human life, and in the western world that ends up being several million dollars. If you did the same analysis in the third world, you would probably get a much lower number. So do we 1) value both western and third-world lives at the western amount, 2) value both western and third-world lives at the third-world amount (or something in between), or 3) value western lives at a much greater amount than third-world lives, so that the life of one westerner is worth the lives of N third-worlders? 2 seems absurd and 3 seems morally wrong, so we’re left with 1, valuing both western and third-world lives at the western amount. But in a wealthier future, we will probably value human lives at a much greater amount of money still. Does that mean we should value today’s lives at future amounts of money (billions, say)? That doesn’t seem feasible either.

    I’ve probably mixed up “is” and “ought” a bit, and I suppose I could have added 4) stop attempting to think rationally about money/lives tradeoffs… but I hope you can see the riddle here.

  • Nick Tarleton

    Steven: who’s “we”? Empirically, I think those studies mean third-worlders value their own lives less than first-worlders value their own, at least monetarily (presumably the third-worlders value a given amount of money more than the first-worlders). You can bet that both groups value foreign lives considerably less (and are scope-insensitive about them). Normatively, it would seem we should value a person’s life as much as that person does, which supports #3. This isn’t as repugnant as it sounds, both because the difference in real value is less (possibly much less) when you consider the differing utility of money, and because a third-worlder genuinely can expect fewer QALYs than a first-worlder. However, in practice #1 may be better, at least because advocating #3 (a) sounds evil to most people and (b) could genuinely lead to evil behavior in people just looking for an excuse to assign third-world lives zero or near-zero value.

    (See also this comment by Michael Vassar, saying “we should value a particular human life at the lower of total preference for the continuation of that life and replacement cost for that life” and “because economical thinking is confusing or corrupting to people below a very high IQ threshold, we maintain a convenient fiction of infinite value.”)

  • http://www.mccaughan.org.uk/g/ g

    1. The ratio (value of one life) / (value of one dollar) may have very different values in (say) the US and Somalia, but I don’t see why you should assume that it’s only the numerator that varies.

    2. Typical human lives in very poor countries are arguably much worse than typical human lives in rich ones: they’re liable to be shorter, less enjoyable, less productive of things that other people value, and so on. It’s somewhat taboo to say that this means those lives are “less valuable”, but I think the taboo is mostly the result of sloppy thinking. (Note that “this person’s life is less valuable than that person’s” and “this person’s interests count for less than that person’s” are entirely different propositions.)

    3. In very poor countries, quality and length of life are often very badly affected by things that could be fixed cheaply (measuring cost in dollars). You could save, or extend, or improve, many many lives in Somalia for $10000. Not so many in the USA.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Unit, try this old writing of mine. Roughly, self-organizing systems have to generate a little waste heat, but that’s all.

  • http://omniorthogonal.blogspot.com mtraven

    There is a very accessible presentation of Pearl’s theory of causality available here.

  • steven

    “self-organizing systems have to generate a little waste heat”

    If I run the game of Life on my computer, does it really generate waste heat that it wouldn’t have if I ran some cellular automaton with no self-organization?

  • Nick Tarleton

    If I run the game of Life on my computer, does it really generate waste heat that it wouldn’t have if I ran some cellular automaton with no self-organization?

    Does it matter? Organization->heat doesn’t mean no_organization->less_heat. No matter what you use it for, your computer will generate vastly more waste heat than is thermodynamically necessary for what it’s computing.

  • http://rolfnelson.blogspot.com Rolf Nelson

    Are there any useful interactive worksheets or online training programs for improving thinking, or for calibrating your probability assessments? If not, maybe a Call for Volunteers is in order, to see if someone is willing to create an online training application for probability calibration. (Assuming we believe the claims that such training is useful; I have no particular insight into whether those claims are true.)
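
    In the meantime, a minimal sketch of how such an application might score you (my own illustration, using the Brier rule; the numbers in the example session are made up): record a probability for each yes/no prediction, record what happened, and track the average squared error, where lower is better and always answering 0.5 scores 0.25.

        def brier_score(forecasts):
            """forecasts: list of (probability, outcome) pairs,
            with outcome 1 if the event happened and 0 if not."""
            return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

        # Hypothetical practice session: three predictions and their outcomes.
        session = [(0.9, 1), (0.7, 0), (0.6, 1)]
        print(brier_score(session))  # 0.22, vs. 0.25 for always guessing 0.5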

  • Mason

    Robin, at the risk of sounding very ill-informed, I do not see all of the legal and economic barriers to creating decision markets of which you speak.

    Maybe it would help if I tell you what I do see: A successful Hollywood stock exchange, heavy trading in a variety of stock derivatives, and now betting on which CEO will be fired next at Paddy Power. To the extremely untrained eye it looks like combining these is possible and would give the result you’re looking for.

    I certainly have not put in the time you have (I’m not sure anyone else has), and I think a post highlighting the barriers you’ve faced and still face would be interesting.

  • burger flipper

    Relatively new to the forum; I just watched the 2 1/2 hour Yudkowsky video on Google. Excellent talk that really helped frame some of the posts here for me, though the audience questions were generally a distraction. My biggest disappointment was that the one question which popped up in my mind while watching, and was actually posed, wasn’t answered because it would have taken about 5 minutes. The man who asked was told to pose it again at the end of the talk, but did not.

    This was the question about the friendly AI: “Why are you assuming it knows the outcome of its modifications?”

    Any pointer to the answer would be much appreciated.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Hopefully some of your question is answered by Knowability of Friendly AI, a (temporarily?) abandoned work-in-progress – my getting bogged down in this sort of document is why I now blog.

  • burger flipper

    Thank you.
    The topic seems like ideal blog fodder. It’d be pretty dense reading as a book; I do well to keep up with a few pages’ worth a day.

    Came aboard with the evolution topics a month or two back and had no idea what I’d missed or where it was all heading, so the Future Salon talk, the Singularity Summit audios, and forthcoming book chapters have helped bring me up to speed enough to put what I’ve read here in context.

    Still think I need to go back and read your posts here from the start to catch up though.

  • steven

    Is there any research on how well you can read off a person’s IQ and personality traits from his or her appearance?

  • http://profile.typepad.com/aroneus Aron

    I’m curious if anyone has any pre-formed riff they could give on the relationship of instinct to general intelligence. That is, I would assume by Occam that the brain is primarily a repetition of a basic simple design, yet there is clearly some method of encoding design patterns that translates into fairly specific, predictable skills in the organism.

    This would seem somewhat analogous to a kernel of friendliness controlling behavior for an AI as it passed from newborn to well-trained.