Rah Simple Scenarios

Scenario planning is a popular way to think about possible futures. In scenario planning, one seeks a modest number of scenarios that are each internally consistent, story-like, describe equilibrium rather than transitory situations, and are archetypal in representing clusters of relevant driving forces. The set of scenarios should cover a wide range of possibilities across key axes of uncertainty and disagreement.

Ask most “hard” science folks about scenario planning and they’ll roll their eyes, seeing it as hopelessly informal and muddled. And yes, one reason for its popularity is probably that insiders can usually make it say whatever they want it to say. Nevertheless, when I try to think hard about the future I am usually drawn to something very much like scenario planning. It does in fact seem a robustly useful tool.

It often seems useful to collect a set of scenarios defined in terms of their relation to a “baseline” scenario. For example, macroeconomic scenarios are often defined in terms of deviation from baseline projections of constant growth, stable market shares, etc.

If one chooses a most probable scenario as a baseline, as in macroeconomic projections, then variations on that baseline may conveniently have similar probabilities to one another. However, it seems to me that it is often more useful to instead pick baselines that are simple, i.e., where they and simple variations can be more easily analyzed for their consequences.

For example, even if a major war is likely sometime in the next century, one may prefer to use as a baseline a scenario where there are no such wars. This baseline will make it easier to analyze the consequences of particular war scenarios, such as adding a war between India and Pakistan, or between China and Taiwan. Even if a war between India and Pakistan is more likely than not within a century, using the scenario of such a war as a baseline will make it harder to define and describe other scenarios as variations on that baseline.

Of course the scenario where an asteroid destroys all life on Earth is extremely simple, in the sense of making it very easy to forecast socially relevant consequences. So clearly you usually don’t want the simplest possible scenario. You instead want a mix of reasons for choosing scenario features.

Some features will be chosen because they are central to your forecasting goals, and others will be chosen because they seem far more likely than alternatives. But still other baseline scenario features should be chosen because they make it easier to analyze the consequences of that scenario and of simple variations on it.

In economics, we often use competitive baseline scenarios, i.e., scenarios where supply and demand analysis applies well. We do this not so much because we believe that this is the usual situation, but because such scenarios make great baselines. We can more easily estimate the consequences of variations by seeing them as situations where supply or demand changes. We also consider variations where supply and demand analysis applies less well, but we know it will be harder to calculate the consequences of such scenarios and variations on them.
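As a toy illustration (mine, not from the post, with made-up linear curves), a competitive baseline makes the consequences of a variation nearly trivial to compute: shift one curve and re-solve for equilibrium.

```python
def equilibrium(a, b, c, d):
    """Solve linear demand Q = a - b*P against linear supply Q = c + d*P.

    Returns (price, quantity) where the two curves cross.
    """
    p = (a - c) / (b + d)
    return p, a - b * p

# Baseline scenario: demand Q = 100 - 2P, supply Q = 10 + P.
p0, q0 = equilibrium(100, 2, 10, 1)  # -> (30.0, 40.0)

# Variation: a demand shock shifts demand out by 30 units (a: 100 -> 130).
# The consequence is just a re-solve against the unchanged supply curve.
p1, q1 = equilibrium(130, 2, 10, 1)  # -> (40.0, 50.0)
```

The point of the baseline choice shows up in how little work the variation takes; in a non-competitive baseline there would be no such one-line re-solve.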

Yes, it is often a good idea to first look for your keys under the lamppost. Your keys are probably not there, but that is a good place to anchor your mental map of the territory, so you can plan your search of the dark.

  • Ely Spears

    Analytical or computational tractability also matters greatly, and always has. Assuming normal distributions, equal population means, limits of large data, etc. Scenario analysis seems one additional way to do this that also incorporates some consideration for what the consumers of the analysis will want to hear (both in terms of topical coverage and in terms of the recommendations they want to hear).

    Think of climate modeling. Few people seem interested in cumbersome computational models which output probability distributions over a continuum of scenarios, even if this would be their best planning tool. Scenarios afford them more opportunity for short term gains and are more computationally tractable. Scenarios are also more politically galvanizing than distributions over continuums for whatever reason.

  • VV

    Nevertheless, when I try to think hard about the future I am usually drawn to something very much like scenario planning. It does in fact seem a robustly useful tool.

    How do you know? Did you make many non-trivial predictions using scenario planning that were later confirmed?

    For example, macroeconomic scenarios are often (e.g.) defined in terms of deviation from baseline projections of constant growth, stable market shares, etc.

    Note that “constant” growth actually means exponential growth, and market shares are known to fluctuate. One wonders how economists could possibly make good predictions starting from these ludicrous assumptions. Oh wait, they can’t.

    Yes, it is often a good idea to first look for your keys under the lamppost. You keys are probably not there, but that is a good place to anchor your mental map of the territory, so you can plan your search of the dark.

    But when the keys turn out not to be there you should move on, not insist that they are really there and if they aren’t it must be because the evil government moved them.

  • http://twitter.com/loveactuary Robert Eaton

    Insurers (esp Life insurers) are required to run stochastic interest rate scenarios to determine a range of outcomes against a baseline, as well as scenario-testing against other assumptions such as lapses and mortality changes. Nassim Taleb might argue that we are fooling ourselves into being comforted if these scenarios do not capture fat-tailed risks … 

    • http://juridicalcoherence.blogspot.com/ srdiamond

      The key inference being (it seems to me) that futurology involves risks that are fat-tailed, a distribution insurance actuaries can at least hope to often avoid.

      I think this is the crux of the matter. I wonder if Robin disagrees that futurological distributions are (necessarily?) fat-tailed and/or disagrees that futurological scenarios are unlikely to capture fat-tailed risks. (Also, if actuary Robert Eaton agrees with my unlearned intuitions about fat-tails and futurology.)

      There’s also the question of whether scenario-testing is possible in futurology.

  • Jess Riedel

    I don’t really like the discussion of the war example. The reason we don’t start from a baseline of war between India and Pakistan and instead choose peace is “simplicity”, I suppose, but that’s a confusing way to put it. The problem is that “war” doesn’t specify a scenario, it specifies a class of scenarios which are all completely different in important specifics. If instead you compared “peace” to “war in 2023”, “war in 2024”, “war in 2025”, etc., then it’s clear that “peace” is in fact the most likely scenario.

    I’m sure there’s a more elegant way to say this in terms of Bayesianism…

    • VV

       And also what do we mean by war? Some guerrilla/terrorist operations? A few skirmishes at the borders? Or an all-out war with exchange of nuclear warheads?

  • Siddharth

    Your physics background shows: http://xkcd.com/793/

  • http://juridicalcoherence.blogspot.com/ srdiamond

    There seem to be two separate questions here: 1) using idealizations, which economists must use, as do all sciences, and 2) using disjunctions in situations where they are not exhaustive. You focus on the idealization aspect, but the real problem is with the disjunction, which ignores, among other things, the “black swan” problem.