Alas Amateur Futurism

The Nov. 18 New Scientist features leading scientists making their predictions about the future.  (Our own Nick Bostrom is included.)  On Nov. 30 Hawking made the news with our future in space.  My reaction here is similar to my published reaction to a recent Wilson Quarterly special issue on the future: why can’t we hear more from future specialists?   We hear these groups on the future:

  1. Famous Scientists – Science dominates their future, though they feel qualified to speak on any non-science aspect.
  2. Public Intellectuals – Possible futures help them illustrate the provocative ideas or trends they are pushing.
  3. Gadget Salesmen – They’re selling something, and it is going to be huge; get in early now. 
  4. Advocates – They have a cause, they care deeply, and the stakes just could not be higher.

Missing are analysts who actually spend a lot of their time trying to figure out what the future will, or could, be like.   The biases this causes should be obvious. 

Some say the reason is that one cannot be disciplined about the future; data comes too late.  But this seems just wrong to me; I’ve seen lots of disciplined work on the future.   Perhaps instead most people just don’t care much about the future itself; they mainly like the future as a dramatic backdrop for admiring impressive people and cool gadgets, or for taking sides in current ideological battles.  Alas.   

  • This blog is about bias, and one good way to identify bias is by comparing past beliefs to reality.

    I’d also like to see more summaries of past predictions by serious futurists, particularly those made several decades ago or earlier. We easily dismiss astrology by looking at previous predictions; why not give rationalist futurists the same respect?

    Is such a compilation available on the web?

  • Anders Sandberg

The only regular summary of past predictions I know of is the Skeptical Inquirer one about psychic predictions. But I guess few of us regard those as predictions by “serious futurists”.

Perhaps a better approach would be to actually go through the back issues of The Futurist since 1967 and tabulate the predictions. But the problem is to make the predictions testable. This page actually has fairly definite predictions, set to be true at a certain point in time, that can be tested.
    But even here I think it is easy to debate whether (say) the topmost prediction is true or not.

Maybe the WFS would be interested in setting up a kind of scoreboard for predictions? It might be an internal, members-only area (so that a bad record doesn’t affect public trust). Going to WFS meetings I have always felt that much of the activity was about creating a shared professional consensus, so that forecasts point convincingly in roughly the same directions (besides the usual self-selling, of course). So maybe it would be a useful service for the organisation to see where it is approaching a consensus.

  • Bruce G Charlton

    Good point Robin – this may be why I found this piece in New Scientist so dull.

  • Via the Soft Machines blog, I found this working paper by science historian Alfred Nordmann.

    He criticizes techno-utopian predictions that make their way into mainstream analysis of future technology, such as the American NBIC Convergence conference which looked at the impact of nanotech, biotech, information tech and cognitive enhancement on the future. Nordmann argues that science is structurally incapable of effectively criticizing and reining in extravagant speculations about the future.

  • If I recall correctly, the Delphi process was developed by the Rand Corporation specifically to facilitate forecasting by experts with the objective of minimizing the group biases that Cass Sunstein writes about.

  • TGGP

    Who, specifically, would you like to hear from?

  • David Brin has a page proposing a prediction registry, with some pointers to related projects.

  • TGGP, the key word is “analysts.” There are a few professional futurists who lean more toward analysis than advocacy. And I myself have done some of what I consider relevant analysis.

  • I share Robin’s frustration with these things. But I clicked on just a few of the entries in the New Scientist, and the ones I read seemed to be mainly by scientists making predictions about what would happen within their own fields. At least they have some claim to being the right people to ask about that.

    A fifth group that we sometimes hear from is professional “futurists”. Of course, the ones that we hear from in the media are also public intellectuals, so you could put them under category 2, but in that case the other categories would also tend to collapse into category 2.

    These futurists serve different functions, it appears to me. For some, the only function is to draw attention to themselves. For others, the function is to speak at corporate events, where their task is to entertain, to help the audience become aware of new buzz terms and concepts, and sometimes to condition the workers to embrace the new direction the management wants to go in and to be generally motivating. Yet other futurists seem to function mainly as facilitators of group processes (scenario planning etc.), where the real goal often seems to be to bond the group to some common vision or project.

    To a large extent, professional “futurists” are presumably also not what Robin wants to hear more of.

    Perhaps venture capitalists who play with their own money would be a better pool to fish from, at least insofar as the question is about the commercial potential of new technologies and business models with a short time frame. They have an incentive to be right (in what they believe if not in what they say).

  • Nick, good point, yes an important fifth group is our-new-direction speakers and facilitators.

  • Futurism is a branch of the entertainment industry. Deal with it.

  • Eliezer, yes, it is useful to think of futurism as show, but what other areas of apparently “serious” idea institutions/groups are also substantially entertainment industries in disguise? Newspapers? Libraries? Universities? Think tanks?

  • It may be that predicting the future is essentially impossible. This is suggested by the striking rarity of accurate future predictions combined with the great rewards available to anyone who can succeed at the task.

  • Hal, predicting the future seems quite possible. Not exactly, of course, but better than the usual public consensus. Many people claim that no one predicted the world wide web, but I was part of a group that did predict it, well before the public consensus was aware of it. That group did not much benefit from their prediction, and was probably hurt on average, which helps us understand why people don’t do more such predicting.

  • ChrisA

    Surely trying to predict future events with probabilities is literally impossible, or at least highly unscientific. There is no way anyone can model the present in sufficient detail to develop testable predictions; the problem is far too complex, even ignoring quantum effects. As an example of the difficulty, could we feasibly predict the moves in a chess match that will take place a week from now? Predicting the general future is many times harder than that. When people present predictions, they are really presenting scenarios, possible events rather than probable or likely ones. That is why your “prediction” of the net was not profitable for you, for you were unable to calculate a probability around the event occurring.


  • Chris A, I’m much more interested in how accurate predictions are than in how “scientific” they are. Predictions *can* be expressed as probabilities, and they *can* be more or less accurate, even if they do not meet your standards for “scientific.”
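    [For concreteness, one standard way accuracy of probability forecasts is measured is the Brier score: the mean squared difference between the stated probability and the 0/1 outcome. A minimal sketch, with made-up illustrative data:]

    ```python
    # Brier score sketch: lower is better; a perpetual 50/50 hedger
    # scores 0.25, so anything below that carries real information.
    # The forecast data below is invented for illustration.

    def brier_score(forecasts):
        """forecasts: list of (stated_probability, came_true) pairs."""
        return sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)

    confident_and_right = [(0.9, True), (0.8, True), (0.1, False)]
    hedged = [(0.5, True), (0.5, True), (0.5, False)]

    print(brier_score(confident_and_right))  # small: confident and accurate
    print(brier_score(hedged))               # 0.25: uninformative hedging
    ```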

  • ChrisA

    How can you be sure that an accurate prediction is not a lucky guess? Surely the point of being able to make predictions is to make use of them. The only way to be sure that even a string of successes is not a run of lucky guesses is to understand the model that is producing the predictions; if the model makes sense, then the predictions are likely to be valid. But the complexity of reality makes such a model impossible. When you propose prediction markets for general predictions, are you not just trying to “black box” the problem? We know that the people making the predictions don’t have a real model of reality (since it is impossible), so they are simply guessing; is an average of guesses really better than a single guess?

    Now I don’t say that specific (as opposed to general) predictions of the future are impossible; for instance, the oil and gas industry has got pretty good at predicting, five or six years in advance, how long a project will take to reach first oil. There are literally millions of interactions involved in building an oil platform, so it is not a trivial problem. However, through experience, oil companies have been able to build very good models of what it takes to do this. When a prediction is presented with probabilistic ranges on the start-up date, we can say that these are likely to be representative of the real start-up date. Note that even here there are frequent surprises when unexpected things occur.

    So I can understand that a prediction market could be used to reveal information where the participants do have good models, such as the oil industry one. If someone has a good model they are confident in, then they should be willing to bet more on the result, bringing the market result closer to the real one. But how can we distinguish between the results where a real model is driving the result and the ones where people are simply guessing? You could ask the experts whether the problem is one that can be modelled, but in that case why not just ask for the model?

    This is a genuine problem for me on my projects. Since the budget is necessarily limited, we have to decide which risks we should spend money to mitigate and which we should ignore. How do I get a reasonable prediction of the probable risks if they have never happened before? I have thought of trying to use prediction markets, but can’t get around the issue above. (There is also the problem that if the problem does not occur, was it because it was successfully mitigated, or was it a bad prediction?)

  • Chris A, one can check that probability forecasts are not random guesses by looking at their calibration: for the group of forecasts made with 90% confidence, do 90% turn out to be true? One can certainly have useful information without having an exactly “scientifically” valid model.

  • Carl Shulman