Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.

  • One of the most interesting comment threads I have read recently was of non-lawyer readers of the Volokh Conspiracy saying who they were (without names, in most cases). Who are the readers of Overcoming Bias? Are you an academic, and if so, in what field? If not, what’s your story?

    I am an accounting professor who studies financial reporting regulation, primarily through laboratory financial markets and games (a subfield of experimental economics), with a fair dose of behavioral econ/finance. I have recently gotten interested in enterprise uses of virtual worlds. I read the blog for my daily dose of contrarianism, futurism and signaling — usually a bit strong for my taste, but it gets my blood moving in the morning.

    How about you?

    • sociology phd student

    • Peter Twieg

      I’m an econ grad student at GMU, so I know Robin through those channels. I actually work for Kevin McCabe and do experimental work in virtual worlds; I believe you’re familiar with that already, Robert. (In fact, I think I emailed you about it at one point but failed to keep up the correspondence – sorry!) I have some interests (shared by Kevin) in developing virtual worlds as an experimental platform (right now Second Life is very bad for running well-controlled experiments, but maybe that can change), but for now I’m focusing my research on personality and economic behavior. GMU is a very nice place to be for picking up and running with the ideas that our many smart professors come up with. 🙂

      I guess I fit most of the trappings of what Eliezer calls the “atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc.” crowd. I’m interested in prediction markets, seasteading, futarchy, singularity-related stuff… even if those don’t become major research interests of mine, they’re certainly fun topics to converse about and raise awareness of.

  • Abdullah Khalid

    Can you please add a separate ‘books’ page to your website, where you list by topic – relating to overcoming bias – the best books you have read on those topics? Especially books for an amateur at overcoming bias?

    • jedermaann

      Agreed, this would be very useful. Up until now I have trawled some of Robin’s papers, like his health altruism essay, for citations for general and background reading.

  • I’m a third-year undergraduate student studying business information systems (something of a hybrid between business management and computer science) with an interest in entrepreneurship.

    Why do I read this blog? It consistently gives strong opinions on controversial topics from a rational perspective, and I’ve always loved observing the thought processes behind out-of-the-box thinking.

  • Jamie_NYC

    Robin, I’d like you to elaborate on a couple of related topics where I find your (briefly) expressed positions strange:

    – AGI (“we’ll have a long period where AGI is at about the human level of intelligence,” if I understood you correctly. This to me sounds like saying: “we’ll have horseless chariots just as fast as ordinary ones, but not faster”);

    – future of technology / singularity: how best to establish beliefs about the future. I understand you are a skeptic when it comes to singularity. Do you see ‘technology’ forever remaining separate from ‘humanity’ – just one set of tools we use in life?


    • I understand you are a skeptic when it comes to singularity.

      My understanding of Robin’s position is (roughly) that he thinks that uploads may (p=50%, say) be developed this century, and that after that there will not be many calendar years (5-15) before what would reasonably be called an AI singularity, with all the usual implications of that.

      He’s hardly a “singularity skeptic”!

    • AGI (“we’ll have a long period where AGI is at about the human level of intelligence,” if I understood you correctly. This to me sounds like saying: “we’ll have horseless chariots just as fast as ordinary ones, but not faster”)

      If Robin said this, I also challenge him on it.

  • Jamie: Robin believes that the singularity will come first via “ems”, i.e. full-brain emulations of some human being on a computer, rather than, for example, an AI designed from first principles out of more primitive components.

    So, it’s like your horseless chariots, only instead of inventing an internal combustion engine, you invent a xerox machine which takes any regular chariot and magically produces a horseless copy.

    Such a xerox machine could change economics a lot … but it wouldn’t get you very far on the path to better/faster chariots (or AGI).

    • Such a xerox machine could change economics a lot … but it wouldn’t get you very far on the path to better/faster chariots (or AGI).

      I disagree; remember that uploads get faster over time, and as we enter the upload era, the bulk of the economy (including AI research) shifts to upload time. Calendar years start to stretch out, and we might see a century of AI research done in a decade.

  • Bryan

    I am a 24 year old former economics grad student. I work in industrial supply/sales. In the fall I plan to go back to school for a dual J.D./M.B.A. I live in a very square state in the middle of the country, and I read this blog and others for some intellectual stimulation. I’ve been reading this blog for about 3 years now, and obviously enjoy it.

  • I’m interested in folks’ take on President Obama’s least technocratic move so far: using the spoils system to award ambassadorships to surprisingly unqualified people. I’m surprised by the lack of opposition by Republicans, and the lack of criticism by non-Democrat journalists and potential presidential candidates.

  • @ Hopefully,

    Names and links?

    • Raise Jimmy Wales’s and Eric Schmidt’s reputations yet again, because you need go no further than Wikipedia for an overview, and Google News for the latest articles.

  • I am interested in being added to your blogroll 🙂

  • Abdullah, I don’t read many books.

    Jamie, Roko is right that the period of near-human AI I envision is via near-human ems, which may only last a few years if growth rates are much higher then.

    Roko, you and I may disagree about the “usual implications” of non-em-AI.

    • Abdullah Khalid

      Online resources, then?

  • Lee

    Robin, I’d love to hear more skeptical thoughts about war, its institutions, and the near-vs-far theme.

    I know you’ve discussed Randall Collins’s (and Dave Grossman’s) descriptions of the psychological costs of killing in war (near), and the way we have romanticized it (far).

    You might also have mentioned the bizarre moral reasoning that surrounds war and soldiers. Will Wilkinson has tweeted about it recently, and he spurred me to read *Killing in War*, in which the philosopher Jeff McMahan emphasizes how different our notions of liability, culpability, and justice are in familiar domains (defending ourselves from muggers, traffic accidents, etc.) and in the far domain of war (respect for soldiers who knowingly fight unjust wars, the favorable obituary that ran in the NYT for Paul Tibbets, who flew the Hiroshima mission and committed what may be the most evil act in history, and so on).

    Also, along the lines of your prediction markets for terrorism, I wonder if there isn’t a tendency to dismiss inventive non-war solutions to our international political problems on the grounds that they are not morally satisfying. For example, say we’ll have spent $12k per Afghan by the time we wrap up, and their per capita income in the years before the war was $500/yr. Maybe this is laughably naive, but it seems like there might have been a more economical way to incentivize the Afghan people to flush out Al-Qaeda: monthly cash to everyone in every village conditional on certain things being done there, etc.
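    Those figures pencil out roughly as follows (a back-of-envelope sketch; the ~30 million population figure is my assumption, not from the comment above):

```python
# Back-of-envelope check of the figures above. The population number is an
# assumption for illustration; the per-person spend and pre-war income are
# the figures stated in the comment.
population = 30_000_000          # assumed Afghan population
spend_per_person = 12_000        # $12k per Afghan, as stated
prewar_income = 500              # $/yr per capita, as stated

total_spend = population * spend_per_person
years_of_income = spend_per_person / prewar_income

print(f"Total spend: ${total_spend / 1e9:.0f} billion")  # $360 billion
print(f"= {years_of_income:.0f} years of pre-war per-capita income each")  # 24 years
```

    So the stated per-person spend equals roughly two dozen years of pre-war income, which is the gap the cash-transfer idea is pointing at.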

    It is just difficult to believe that war is the best social technology we have for these sorts of international problems. And isn’t it all the more suspect in light of our false beliefs about the psychology of killing in war and our strange moral reasoning regarding battle?

  • Eric Johnson

    > monthly cash to everyone in every village conditional on certain things being done there

    This would be because the USA was bombed from that nation – consider the moral hazard. It’s like Danegeld.

  • Eric Johnson, you’re using one theoretical lens (moral hazard) as an epistemological stop sign.

    It reminds me of something I meant to post: where’s the place where Afghanistan/Iraq is being looked at as training exercises to keep the USA’s military forces competent? It still probably doesn’t hold up, because there are no rival nations whose fitness standard we need to match.

    • I have heard the argument that one reason for the initial advantage of the German army over Russia in WW2 is that they had practiced against France and Poland beforehand, but that the later gain of experience by the Russians enabled them to make a comeback.

      Robert Kaplan’s “Imperial Grunts” discusses how we’ve already got troops all over the world, involved with our allies in their troubles. We could just lease some of our troops to allies if we wanted experience. It matters, though, what kind of experience is gained. Martin van Creveld argues that fighting the weak makes strong forces weak.

  • Lee

    @Eric Johnson, I see your point. But an invasion is pretty unpleasant, and the moral hazard might be swamped by collective action problems if families are paid $100/mo directly (rather than the gov’t or military, like tribute) to help root out terrorists. Anyway, my idea’s probably silly for a lot of reasons. Maybe DARPA can put up some prize money for anyone who can think of a better way to use trillions of dollars to get terrorists out of Afghanistan.

  • Prakash

    From a slightly science-fictional perspective, I have a question:

    – Presently most third-world elites are trying to grab their little bit of money through corruption of all sorts. Tomorrow, with improving life spans due to life-extension technology, will there come a point when it makes more sense for them to become Mancur Olson’s stationary bandit and turn into tax-maximising despots? Or will increased life bring with it increasing uncertainty, so that there would never be a point where third-world elites look ‘long-term’ instead of ‘short-term’?

    My basic point is that in present circumstances it is not personally profitable for third-world elites to advance their countries through standard application of first-world rules. Will life extension make a difference here?

  • mjgeddes

    Some examples of near-far categories:

    Stories are far, project plans are near
    Domain models are far, program code is near
    Designs are far, blue-prints are near

    Analogies are far, probability distributions are near
    Narration is far, goals are near
    Signals are far, actions are near

    Sets are far, algebraic equations are near
    Art is far, morality is near
    Space-time geometry is far, Newtonian forces are near

    Spot the grand pattern yet? The items on the left (far mode) are more general and abstract than the items on the right (near mode). In fact a serious case could be made that the items on the left all subsume the items on the right. One example in particular stands out:

    Analogies are far, probability distributions are near


    • Constant

      Why lists of near and far things, rather than explanations? It is as if the concept of near/far is not yet clear in one’s own mind, and to clarify it requires a long list of examples. But if not clear, then perhaps not real. How do we know that the near/far distinction won’t go the way of the left brain/right brain distinction (which is now widely considered a pseudo-scientific mythology)? A maybe sometimes useful way to group disparate phenomena but ultimately more trouble than it’s worth.

      I’m not saying that’s the case, I’m just pointing out that the way it gets discussed, with lists of supposed examples as if attempting to hammer the concept into existence with repeated blows, is not terribly encouraging.

      • mjgeddes

        Well if something is not clear, the process of working through examples is surely the way to try to clarify further?

        My opinion is that the near/far distinction is very real and the principle is much more powerful and general than first thought. But it’s much more fun (and better!) for folks to see if they work things out for themselves (a blog is not the place for lectures!). Either folks will see a grand pattern, or they won’t. I’m waiting to see if any one else can trace the full implications of the near/far distinction.

    • I don’t think this fits with the many papers on the subject, so I think you are just making this stuff up.

      • Constant

        How does one know that “near/far” is something real, rather than something merely seductive like, say, the “yin/yang” distinction of eastern philosophy – which persists to this day, after thousands of years, as an item of serious belief, but which is not, as far as I know, anything more than a monster family-resemblance concept?

      • mjgeddes

        Well, if ‘far’ mode is thinking in terms of abstract categories, and ‘near’ mode is thinking in terms of detailed recipes, then I think the concepts on my list are correctly classified under the modes they invoke.

        If it doesn’t fit the known literature, it’s because these are transhuman near/far modes rather than human ones 😉

    • mjgeddes

      Addendum (slight elaboration of my generalized near/far conception)

      If you pan out to a high enough level of abstraction then

      ‘far’ = signalling
      ‘near’ = action/potential

      I’m convinced this is a deep (fundamental) joint, cleaving reality, since these abstractions seem to run through multiple domains in a fairly striking way. To make this intuitively clearer, I whimsically choose these nicknames

      ‘far’= show
      ‘near’ = go

      Because far mode deals with signals (‘Show’) and near mode with action/potential (‘Go’). I borrowed the ‘Go’ term from a non-fiction book on the drug war, which explained that ‘Go’ is the slang word drug dealers commonly use to refer to money (because it gets things done – it’s a ‘social action-potential’).

      so, in cognitive science,

      Emotions are Show (far), Desires are Go (near)
      Love is Show (far), Sex is Go (near)

      Now I’m suggesting that the same principle can be generalized to other domains. For instance in physics,

      Fields are Show (far), Forces are Go (near)

      Because fields mediate signals, whereas forces actually lead to detailed observable results (exerting action/potential on matter).

      In logic I think that,

      Categorization is Show (far), Bayes is Go (near),

      Because categorization is concerned with knowledge representation and cross-domain sharing of knowledge between sub-agents (signalling, or Show), whereas Bayes is for precise decision making (actions, or Go).

      But the big insight is that the Show (far concept) seems to include the Go (near concept). For instance sex (near/go) doesn’t include love (far/show), but love can include sex. Speaking whimsically again, I say that ‘Show wraps Go’. If I am right, Categorization must trump Bayes.

      “For every show there is a go, and that’s what makes the world go around” – MJG

  • Aron

    I believe the phrase ‘when AI achieves human intelligence’ is ridiculous, and sufficient to dismiss the speaker. Intelligence as a one-dimensional measurement should be retired if we care to maintain the illusion of anticipating the future.

  • botogol

    Robin, it’s the start of a new decade; how about your 10 predictions for the coming 10 years?
    Yes, I know it’s a tired old trope… but nevertheless it’s a good one, and I’d like to hear yours.

    BUT – 10 predictions *you’d put money on* (if an appropriate prediction market existed)

  • Hal Finney

    Peak Oil is a conspiracy-ish theory that says that we cannot practically increase the rate of extraction of fossil fuels much higher than it is today. It also claims that this will essentially eliminate economic growth for the foreseeable future, and possibly lead to war and further economic disasters.

    If true, it means that most conventional expectations about the next few decades are likely to be wrong. The theory has significant implications for both public policy and our personal planning.

    So is it right? I have tended to think not, but that is based on rather superficial outside-view considerations. When I look at the arguments and evidence in detail, they appear reasonably credible. But perhaps I am easily impressed…

    • Newerspeak

      From April 2009 WSJ on natural gas:

      Just three years ago, the conventional wisdom was that U.S. natural-gas production was facing permanent decline. U.S. policy makers were resigned to the idea that the country would have to rely more on foreign imports to supply the fuel…

      But new technologies and a drilling boom have helped production rise 11% in the past two years. Now there’s a glut…

      But who knows if oil and gas are comparable in this instance.

      I would bet that long-term tech trends present the maximally difficult prediction problem. Engineering education’s purpose, arguably, is to expose details that are otherwise too Near to see comfortably. Tech forecasting requires us to extrapolate all those details successfully over a Far timescale.

      There are reasons to underestimate the chance of discovering game-changing new technologies, though. Such predictions look like wishful thinking, and it’s better to prepare for the worst anyway. Issues like oil consumption, having already been enshrined as Official Objects Of Controversy, are probably especially hard to reason about. Making a public prediction that new tech will appear and neutralize one of the powerful constituencies maintaining such controversy sends many bad signals and few good.

      And then there’s the reporting problem. Reporters are probably the people responsible for the belief that we’d all be living in dome-enclosed cities by now. A construction like “highly likely eventual future technological development, given the stakes involved” is hard to parse, and even harder to explain to a lay audience.

    • Prakash

      As you get nearer and nearer to the reasons why peak-oilers believe in peak oil, it becomes clearer. I don’t think it is about getting easily impressed. Oil is a fossil fuel. Someday, there will be peak oil.

      About “peak oil” being upon us in our own times, what is the evidence?
      In general, the easier areas of extraction are all already mapped out. Most extraction in the future is going to be from pain-in-the-butt offshore installations or other such extreme areas.

      When Matt Simmons visited Saudi Arabia, he saw them using advanced tech to extract oil: expensive installations that would not have been there had the going been easy. (Bayes’ theorem in action.) He deduced that if the Saudis are facing difficulties, everyone is going to have a tough time.
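      The update Simmons is credited with can be sketched with Bayes’ theorem. All the numbers below are made up purely for illustration: the point is only that seeing expensive extraction tech should raise the probability that fields are near their peak.

```python
# Hypothetical numbers, purely to illustrate the Simmons-style inference:
# observing expensive extraction tech is evidence the fields are near peak.
p_peak = 0.3                     # prior: fields near their peak
p_tech_given_peak = 0.9          # expensive tech very likely if near peak
p_tech_given_not_peak = 0.2      # much less likely if the going is easy

# Bayes' theorem: P(peak | tech) = P(tech | peak) * P(peak) / P(tech)
p_tech = p_tech_given_peak * p_peak + p_tech_given_not_peak * (1 - p_peak)
p_peak_given_tech = p_tech_given_peak * p_peak / p_tech

print(round(p_peak_given_tech, 3))  # 0.659: the observation roughly doubles the prior
```

      With these made-up likelihoods, a 30% prior jumps to about 66% after the observation; the direction of the update, not the exact numbers, is the argument.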

  • Newerspeak

    What’s the canonical way to get technically proficient with signaling theory as it’s applied on this blog? Is a good understanding of signaling theory isomorphic to an understanding of game theory? Of microeconomics?

  • Robin, I’d be interested in your take on the Google Threatening to Exit China story. There seems to be a lot of connections with topics you cover on this blog, like signaling, compromise, China, policy/politics, etc.

  • bcg

    In a recent post I thought you made some good points about old thinkers. My question is: are there more recent, worthwhile books to read about conflict and strategy than “The Prince” or “The Art of War”? Specifically, I’m looking for a book that would apply those concepts to social life.

  • mjgeddes

    I caught a good Derren Brown special, ‘The System’. UK magician Brown had come up with a way to ‘beat probability theory’.

    From the Wikipedia article on Derren Brown:

    The System, a Channel 4 special in which Brown shared his “100 per cent guaranteed” method for winning on the horses, was first shown on 1 February 2008.

    The show was based around the idea that a system could be developed to “guarantee a winner” of horse races. Cameras followed an ordinary member of the public, Khadisha, as Brown anonymously sent her correct predictions of five races in a row, before encouraging her to place as much money as she could on the sixth race.

    In one scene a group of bookmakers was placed in a room and asked to use probability theory to calculate the odds of what would follow. Each person was asked to select a photo at random from a set of photos of people’s faces, and then to stand at a random location. But Brown correctly predicted the exact photo and location which each person would choose. The probability of this happening was calculated at less than 1 in a billion!

    Brown had found the big flaw in probability theory 😀
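    For what it’s worth, Brown’s show reportedly revealed the method at the end: start with thousands of participants and send each a different sequence of picks, so that someone is guaranteed a perfect streak. That is survivorship bias, not a flaw in probability theory. A minimal sketch of the mechanism, assuming five races of six horses each:

```python
import random

# Sketch of "The System" trick: hand every possible sequence of picks to a
# different participant. Whatever the race outcomes turn out to be, exactly
# one participant will have seen five correct predictions in a row; no
# probability was violated, the other 7775 participants just never appear
# on camera.
RACES, HORSES = 5, 6

# One participant per base-6 sequence of picks: 6^5 = 7776 in total.
participants = [
    [(i // HORSES**r) % HORSES for r in range(RACES)]
    for i in range(HORSES**RACES)
]

outcomes = [random.randrange(HORSES) for _ in range(RACES)]  # actual winners
survivors = [p for p in participants if p == outcomes]

print(len(participants))  # 7776 participants needed at the start
print(len(survivors))     # exactly 1 ends up holding a "perfect system"
```

    To the lone survivor the streak looks like a 1-in-7776 miracle; over the whole pool it was a certainty.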