Companies often ask me if prediction markets can forecast distant future topics. I tell them yes, but that is not the place to test any doubts about prediction markets.
Some things change daily (the weather) and some things don't change for years (gravity). I think that if you want to predict far into the future, you need evidence of constancy over time. A local test might mislead you, because it might tell you your prediction is correct when it's only a short-term phenomenon or fad.
Alternatively, if you can show a cause-and-effect chain from now to what you claim, then you don't need any historical evidence. But to do that you basically need the design complete, so it's not really a prediction any more.
As for modestly long-term predictions, commodities futures markets (which are essentially betting markets on the future prices of commodities) go out as far as nine years. Current betting is that crude oil will be $85 a barrel in December 2017. You can effectively lock in that price today, with a modest down payment, if you think the betting line is wrong.
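To make the lock-in arithmetic concrete, here is a minimal sketch of the economics of a futures position. The contract size (1,000 barrels, as for NYMEX WTI crude) is real, but the settlement price and the 10% margin figure are illustrative assumptions, not quotes.

```python
# Sketch of futures position economics; prices and margin are illustrative.

def futures_pnl(locked_price: float, settlement_price: float,
                barrels: int, long: bool = True) -> float:
    """Profit or loss at settlement for a single futures position.

    A long position profits when the settlement price ends up above
    the price locked in today; a short position profits when it ends
    up below.
    """
    diff = settlement_price - locked_price
    return diff * barrels if long else -diff * barrels

# One NYMEX WTI crude contract covers 1,000 barrels.
BARRELS = 1_000
LOCKED = 85.0   # the "betting line": $85/barrel for December 2017

# If you think the line is too low and oil actually settles at $100,
# a long position gains (100 - 85) * 1,000 = $15,000.
print(futures_pnl(LOCKED, 100.0, BARRELS))

# The "modest down payment" is the initial margin -- a fraction of the
# contract's notional value, assumed here at one tenth: $8,500 to
# control $85,000 of oil.
notional = LOCKED * BARRELS
print(notional / 10)
```

The point of the sketch is that the bet is leveraged: you commit only the margin, but your gain or loss tracks the full notional position, which is why the futures price can function as a market-aggregated forecast.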
Very interesting!
It seems like this insight is being vigorously applied in the world of software development, where over the last 10 or so years a set of short-cycle, iterative methods -- collectively known as "agile software development" -- have come to prominence.
Compared with more traditional approaches, which have 18-to-36-month release cycles, agile methods release much more frequently. The slowest agile teams might release quarterly, but many release new software weekly or more often still.
That relates here because it forces people, even ones with grand visions about the future, to look frequently at how their ideas work in practice. A habit of testing near drives continuous improvement in ideas, gradually winnowing the bazillions of possible ideas to those that really apply.
Of course, seeing whether new software really is valuable is a lot easier than testing a theory of new kinds of matter. But I definitely agree that the problem isn't idea generation, it's idea testing.
As someone who actually runs a prediction market, let me say that the farthest out I've gone so far is 24 months. The hardest parts about far-future markets are (1) ensuring you have chosen a correct and valid benchmark for judging the outcome by that time, and (2) keeping traders interested enough in such a distant market.
Also, can futures markets really forecast distant events at all, if they tie up funds that aren't indexed to the S&P 500 or T-bonds?
I would say that this advice is almost perfectly wrong on almost every point. Popperianism is, strictly speaking, wrong, but it is a useful abstraction for many scientists, one that came basically from thinking about what Einstein in particular did. Historically, it was easier for Newton and Einstein to test their theories far and apply them near once measurements improved. If we had a math that specifically predicted, on all observed distant scales, the consequences that cause us to invoke dark matter, and that math was fairly elegant, we would be done, and we wouldn't be that upset if it implied nearby experiments that we couldn't realistically do. We are bothered by string theory because it doesn't specifically predict our old observations, not just because it doesn't make new nearby predictions.

One reason I have serious concerns about the FAI endeavor is that generating genuinely deep new abstractions seems to be not just hard, but damn near impossible. I don't know anyone with an obvious track record of doing it even once in the sense that Leibniz, Aristotle, or Hume did it many times.

On the other hand, with respect to testing prediction markets: so much the better for the theory if it works in the near term. Buffett's models explicitly don't work for outperforming markets in the short term, but I'm sure he'd be glad if they did. Outperforming futurists is a lower standard, and maybe your techniques do.
Eliezer, it is very hard to say what sort of other experience and evidence would have been "near" hypothetical creatures who knew of Earth's history before humans, or to guess whether that evidence would have been enough to guide them to good abstractions for anticipating and describing the arrival of humans. Some possible creatures may well not have had enough to do a decent job.
Considering the historical case of the advent of human intelligence, how would you have wanted to handle it using only abstractions that could have been tested before human intelligence showed up?
(This being one way of testing your abstraction about abstractions...)
We recently had a cute little "black swan" in our financial markets. It wasn't really very black. But some people predicted it well enough to make money off it, and some people didn't. Do you think someone could have triumphed using your advice here, with regard to that particular event, which is now near to us? If so, how?