8 Comments

The key inference being (it seems to me) that futurology involves risks that are fat-tailed--a kind of distribution that insurance actuaries can at least often hope to avoid.

I think this is the crux of the matter. I wonder whether Robin disagrees that futurological distributions are (necessarily?) fat-tailed, and/or disagrees that futurological scenarios are unlikely to capture fat-tailed risks. (Also, whether actuary Robert Eaton agrees with my unlearned intuitions about fat tails and futurology.)

There's also the question of whether scenario-testing is possible in futurology.
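To make the thin- versus fat-tail contrast concrete, here is a minimal sketch (my illustration, not the commenter's). The Pareto shape parameter and the ten-sigma cutoff are arbitrary choices:

```python
import math

# Tail probability P(X > x) for a standard normal (thin-tailed):
# the tail decays faster than exponentially.
def normal_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# Tail probability P(X > x) for a Pareto with minimum 1 and shape alpha
# (fat-tailed: the tail decays only as a power law).
def pareto_tail(x, alpha=2.0):
    return (1.0 / x) ** alpha if x >= 1 else 1.0

# Ten "standard deviations" out, the normal tail is astronomically small,
# while the power-law tail is still an ordinary, insurable-looking number.
print(normal_tail(10))   # ~7.6e-24
print(pareto_tail(10))   # ~0.01
```

The point of the comparison: a scenario set built from a thin-tailed model can look exhaustive while assigning essentially zero weight to outcomes a fat-tailed world produces routinely.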


There seem to be two separate questions here: 1) using idealizations, which economists must use, as do all sciences, and 2) using disjunctions--in situations where they are not exhaustive. You focus on the idealization aspect, but the real problem is with the disjunction, which ignores, among other things, the "black swan" problem.


Your physics background shows: http://xkcd.com/793/


And also, what do we mean by war? Some guerrilla/terrorist operations? A few skirmishes at the borders? Or an all-out war with an exchange of nuclear warheads?


I don't really like the discussion of the war example. The reason we don't start from a baseline of war between India and Pakistan and instead choose peace is "simplicity", I suppose, but that's a confusing way to put it. The problem is that "war" doesn't specify a scenario, it specifies a class of scenarios which are all completely different in important specifics. If instead you compared "peace" to "war in 2023", "war in 2024", "war in 2025", etc, then it's clear that "peace" is in fact the most likely scenario.

I'm sure there's a more elegant way to say this in terms of Bayesianism...
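The class-versus-scenario point can be sketched with made-up numbers (the probabilities below are purely hypothetical, chosen only to show the structure):

```python
# "War" is a class of mutually exclusive specific scenarios;
# "peace" is a single scenario. Hypothetical probabilities:
p_war_by_year = {2023: 0.04, 2024: 0.05, 2025: 0.06, 2026: 0.05}
p_peace = 1.0 - sum(p_war_by_year.values())

# The *class* "war" is fairly likely in aggregate...
print(round(sum(p_war_by_year.values()), 2))  # 0.2
# ...yet "peace" still beats every *specific* war scenario.
assert all(p_peace > p for p in p_war_by_year.values())
print(round(p_peace, 2))  # 0.8
```

So whether "peace" is "the most likely scenario" depends entirely on how finely the war side of the disjunction is carved up.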


Insurers (especially life insurers) are required to run stochastic interest-rate scenarios to determine a range of outcomes against a baseline, as well as scenario-testing against other assumptions such as lapses and mortality changes. Nassim Taleb might argue that we are fooling ourselves into being comforted if these scenarios do not capture fat-tailed risks...
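As a rough illustration of what such stochastic scenario generation looks like, here is a minimal sketch using a Vasicek-style mean-reverting rate model. All parameters are invented for illustration, not calibrated to any insurer's actual practice:

```python
import random
import statistics

# One simulated interest-rate path: each year the rate drifts back toward
# a long-run mean, plus a normally distributed (i.e. thin-tailed) shock.
def simulate_rate_path(r0=0.03, mean=0.04, speed=0.15, vol=0.01,
                       years=30, rng=None):
    rng = rng or random.Random()
    r, path = r0, [r0]
    for _ in range(years):
        r += speed * (mean - r) + vol * rng.gauss(0, 1)
        path.append(r)
    return path

rng = random.Random(42)  # seeded for reproducibility
finals = [simulate_rate_path(rng=rng)[-1] for _ in range(1000)]

# A range of outcomes around the long-run mean, rather than one baseline:
print(round(statistics.mean(finals), 4))
print(round(min(finals), 4), round(max(finals), 4))
```

Note the Taleb-style caveat built into the sketch: the shocks are Gaussian, so this machinery produces a comforting spread of outcomes while structurally excluding fat-tailed ones.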


"Nevertheless, when I try to think hard about the future I am usually drawn to something very much like scenario planning. It does in fact seem a robustly useful tool."

How do you know? Did you make many non-trivial predictions using scenario planning that were later confirmed?

"For example, macroeconomic scenarios are often (e.g.) defined in terms of deviation from baseline projections of constant growth, stable market shares, etc."

Note that "constant" growth actually means exponential growth, and market shares are known to fluctuate. One wonders how economists could possibly make good predictions starting from these ludicrous assumptions. Oh wait, they can't.
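A minimal sketch of that first point: a constant growth *rate* compounds into an exponential level path (the 3% figure is an arbitrary example):

```python
# A constant 3% annual growth rate doubles the level roughly every
# 70/3 ≈ 23 years -- "constant growth" is exponential in levels.
level = 100.0
for year in range(24):
    level *= 1.03
print(round(level, 1))  # a bit more than double the starting 100
```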

Yes, it is often a good idea to first look for your keys under the lamppost. Your keys are probably not there, but that is a good place to anchor your mental map of the territory, so you can plan your search of the dark.

But when the keys turn out not to be there you should move on, not insist that they are really there and if they aren't it must be because the evil government moved them.


Analytical or computational tractability also matters greatly, and always has: assuming normal distributions, equal population means, large-data limits, etc. Scenario analysis seems one additional way to do this, one that also incorporates some consideration of what the consumers of the analysis will want to hear (both in topical coverage and in the recommendations themselves).

Think of climate modeling. Few people seem interested in cumbersome computational models that output probability distributions over a continuum of scenarios, even if those would be their best planning tool. Discrete scenarios afford them more opportunity for short-term gains and are more computationally tractable. Scenarios are also, for whatever reason, more politically galvanizing than distributions over continua.
