Test Near, Apply Far

Companies often ask me whether prediction markets can forecast topics in the distant future. I tell them yes, but that is not where to test any doubts about prediction markets. To vet or validate prediction markets, you want topics where there will be many similar forecasts over a short time, with other mechanisms making comparable forecasts.

If you came up with an account of the cognitive processes that allowed Newton or Einstein to make their great leaps of insight, you would want to look for where that or related accounts applied to more common insight situations.  An account that only applied to a few extreme "geniuses" would be much harder to explore, since we know so little about those few extreme cases.

If you wanted to explain the vast voids we seem to see in the distant universe, and you came up with a theory of a new kind of matter that could fill that void, you would want to ask where nearby one might find or be able to create that new kind of matter.  Only after confronting this matter theory with local data would you have much confidence in applying it to distant voids.

It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things.

There are a bazillion possible abstractions we could apply to the world.  For each abstraction, the question is not whether one can divide up the world that way, but whether it "carves nature at its joints", giving useful insight not easily gained via other abstractions.  We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby. 

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Considering the historical case of the advent of human intelligence, how would you have wanted to handle it using only abstractions that could have been tested before human intelligence showed up?

    (This being one way of testing your abstraction about abstractions…)

    We recently had a cute little “black swan” in our financial markets. It wasn’t really very black. But some people predicted it well enough to make money off it, and some people didn’t. Do you think that someone could have triumphed using your advice here, with regard to that particular event which is now near to us? If so, how?

  • http://hanson.gmu.edu Robin Hanson

    Eliezer, it is very hard to say what sort of other experience and evidence there would have been “near” hypothetical creatures who knew of Earth history before humans, and so to guess whether that evidence would have been enough to guide them to good abstractions for anticipating and describing the arrival of humans. Some possible creatures may well not have had enough to do a decent job.

  • michael vassar

    I would say that this advice is almost perfectly wrong on almost every point. Popperianism is, strictly speaking, wrong, but it is a useful abstraction for many scientists that came basically from thinking about what Einstein in particular did. Historically, it was easier for Newton and Einstein to test their theories far and apply them near once measurements improved. If we had a math that specifically predicted, on all observed distant scales, the consequences that cause us to invoke Dark Matter, and that math was fairly elegant, we would be done, and we wouldn’t be that upset if it implied nearby experiments that we couldn’t realistically do. We are bothered by string theory because it doesn’t specifically predict our old observations, not just because it doesn’t make new nearby predictions. One reason I have serious concerns about the FAI endeavor is that generating genuinely deep new abstractions seems to be not just hard, but damn near impossible. I don’t know anyone with an obvious track record of doing it even once in the sense that Leibniz, Aristotle, or Hume did it many times.
    OTOH, WRT testing prediction markets, so much the better for the theory if it works in the near term. Buffett’s models explicitly don’t work for outperforming markets in the short term, but I’m sure he’d be glad if they did. Outperforming futurists is a lower standard, and maybe your techniques do.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Also, can futures markets really forecast distant events at all, if they tie up funds that aren’t indexed to the S&P 500 or T-bonds?

  • frelkins

    As someone who actually runs a prediction market, let me say that the farthest out I’ve gone so far is 24 months. The hardest parts of far-future markets are (1) ensuring you have chosen a correct and/or valid benchmark for judgment by that time, and (2) keeping traders interested enough in such a distant market.

  • http://www.scissor.com/ William Pietri

    Very interesting!

    It seems like this insight is being vigorously applied in the world of software development, where over the last 10 or so years a set of short-cycle, iterative methods — collectively known as “agile software development” — have come to prominence.

    Compared with more traditional approaches, with 18 to 36 month release cycles, agile methods release much more frequently. The slowest agile teams might release quarterly, but many release new software weekly or more often still.

    That relates here because it forces people, even ones with grand visions about the future, to look frequently at how their ideas work in practice. A habit of testing near drives continuous improvement in ideas, gradually winnowing the bazillions of possible ideas to those that really apply.

    Of course, seeing whether new software really is valuable is a lot easier than testing a theory of new kinds of matter. But I definitely agree that the problem isn’t idea generation, it’s idea testing.

  • http://profile.typekey.com/halfinney/ Hal Finney

    As far as modestly long term predictions, commodities futures markets (which are basically betting markets on the future prices of commodities) go out as far as nine years. Current betting is that crude oil will be $85 a barrel in December 2017. You can effectively lock in that price today, with a modest down payment, if you think the betting line is wrong.
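    The lock-in arithmetic Hal describes can be sketched in a few lines. This is a hypothetical illustration only: the contract size matches the standard 1,000-barrel crude oil future, but the 10% margin rate is an assumption (actual margins vary by exchange and over time), and the prices are just examples, not quotes.

    ```python
    # Hypothetical sketch of "locking in" a futures price with a down payment.
    # CONTRACT_BARRELS matches the standard crude oil contract; MARGIN_RATE is
    # an assumed round number, not any exchange's actual requirement.

    CONTRACT_BARRELS = 1000
    LOCKED_PRICE = 85.0   # dollars per barrel, the price locked in today
    MARGIN_RATE = 0.10    # assumed ~10% initial margin

    def margin_required(price=LOCKED_PRICE, barrels=CONTRACT_BARRELS,
                        rate=MARGIN_RATE):
        """Approximate down payment needed to hold one contract."""
        return price * barrels * rate

    def settlement_pnl(settle_price, locked=LOCKED_PRICE,
                       barrels=CONTRACT_BARRELS):
        """Profit or loss per long contract at settlement."""
        return (settle_price - locked) * barrels

    print(margin_required())      # 8500.0 down to control an $85,000 position
    print(settlement_pnl(100.0))  # 15000.0 gain if oil settles at $100
    print(settlement_pnl(70.0))   # -15000.0 loss if it settles at $70
    ```

    The point of the sketch is the leverage: a modest margin controls the full contract value, so a trader who thinks the betting line is wrong can profit (or lose) on the whole price move.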

  • Ian C.

    Some things change daily (the weather) and some things don’t change for years (gravity). I think that if you want to predict far into the future, you need evidence of constancy over time. A local test might mislead you, because it might tell you your prediction is correct when it’s only a short-term phenomenon or fad.

    Alternatively, if you can show a cause-and-effect chain from now to what you claim, then you don’t need any historical evidence. But to do that you basically need the design complete, so it’s not really a prediction any more.
