7 Comments

Eliezer, does that mean there's a serious problem with your model?


In 1965, Herman Kahn and his buddies at Rand published a book called The Year 2000, in which they made a host of predictions for the millennial year. One of you who has the time might be interested in reading it and letting us know the %#@&* accuracy of their predictions.


Boy, that study sure didn't turn out the way I would have expected.


Seems to me the fact that software is "expensive and sophisticated" doesn't mean it's right, or even any good.

People wanting to make changes to 'make their mark' is nothing new - you know, "Why should I care what color the bikeshed is?"


That last paragraph is critical, because it points to a common error in discussions of any prediction methodology. Judging the effectiveness of a prediction can ONLY be done against the goals of that prediction. Minimizing out-of-stock events is a far different goal than predicting actual sales.

When measuring the accuracy of the different forecasting methods, they should have used a measure that tracks the purpose of the forecasts. Average percentage error makes sense for a true forecast. Lost profit from out-of-stock events plus carrying costs for overstock is likely a better measure of the retailer's accuracy.
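To make that concrete, here is a minimal sketch (my own, with made-up demand figures and unit costs) of how the ranking of two forecasts can flip depending on which measure you score them by:

```python
# Hypothetical numbers for illustration only.
actual_sales = [40, 55, 30, 70, 45]   # true weekly demand, in units
forecast_a = [42, 53, 32, 68, 46]     # small errors, low percentage error
forecast_b = [44, 58, 34, 73, 49]     # deliberately biased high

LOST_PROFIT_PER_UNIT = 5.0    # margin forgone on each unit of unmet demand
CARRYING_COST_PER_UNIT = 1.0  # cost of holding each unsold unit

def mape(actual, forecast):
    """Mean absolute percentage error: the 'true forecast' yardstick."""
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def retail_cost(actual, forecast):
    """Lost profit on stock-outs plus carrying cost on overstock,
    assuming the retailer stocks exactly what was forecast."""
    cost = 0.0
    for a, f in zip(actual, forecast):
        if f < a:
            cost += (a - f) * LOST_PROFIT_PER_UNIT    # under-forecast: lost sales
        else:
            cost += (f - a) * CARRYING_COST_PER_UNIT  # over-forecast: excess stock
    return cost

for name, fc in [("A", forecast_a), ("B", forecast_b)]:
    print(f"forecast {name}: MAPE = {mape(actual_sales, fc):.1%}, "
          f"cost = ${retail_cost(actual_sales, fc):.2f}")
```

Forecast A wins on percentage error (4.1% vs. 8.4%), but forecast B, biased toward overstock, is cheaper ($18 vs. $25) once a stock-out costs more than shelf space. Which one is "more accurate" depends entirely on the goal.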


I just read something great on this topic. It'll be stuck behind a paywall for a while, though, so I'm going to post some excerpts here. (I'm cutting out the Magic: The Gathering-related content, because I suspect that most people here don't play Magic and therefore won't understand it.)

[begin article]

It turns out that even very intelligent human beings are very bad at making optimal strategic decisions in a world of dynamic complexity.

For well over twenty years, the management gurus at MIT’s Sloan School of Management have been showing just how bad sharp undergraduates, brilliant graduate students, and experienced executives can be at making decisions with even a simplified model of the real world. When asked to participate in the "Beer Distribution Game," even the brightest among us find themselves frustrated, confused, and most importantly, wildly wrong.

The game is set up as follows. Participants are asked to divide themselves into four groups (retailer, wholesaler, distributor, and factory) and told to minimize costs. The retailer is presented with customer demand for beer. Each team is eager to sell off its inventory, because buildup costs money. After four weeks of depleting inventory and placing orders to replace it, consumer demand spikes upward. At that point, chaos reigns.

In most cases, the participants are not allowed to see the full "board" state, only their own position in it.

The retailer has a two-week shipping delay in receiving his orders from the wholesaler, who has the same delay in receiving from the distributor, and so on. There is also a two-week delay between when an order is placed and when it is received. Most importantly, there is an even longer production delay as raw materials are shipped to the factory and processed. Although demand is held constant at 4 cases of beer for the first four weeks and then 8 cases for the remainder of the game, the wholesalers, distributors, and factories sketched patterns of perceived consumer demand with huge fluctuations in amplitude. The end result was that average costs were more than ten times greater than optimal.
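[The dynamics are easier to see in code. Here's a toy re-creation I put together, not Sterman's actual classroom rules: the order-mailing delay is collapsed into the same week, upstream supply is simplified, and every parameter is illustrative. The ordering rule is the naive one the game is designed to expose: cover demand and close the inventory gap, while ignoring stock already on order.]

```python
STAGES = ["retailer", "wholesaler", "distributor", "factory"]
SHIP_DELAY = 2           # weeks a shipment spends in transit between stages
TARGET_INVENTORY = 12    # each stage tries to hold this many cases
WEEKS = 30

inventory = {s: 12 for s in STAGES}
backlog = {s: 0 for s in STAGES}
# pipeline[s]: shipments already in transit toward stage s, oldest first
pipeline = {s: [4] * SHIP_DELAY for s in STAGES}
orders = {s: [] for s in STAGES}      # history of orders each stage places

for week in range(1, WEEKS + 1):
    demand = 4 if week <= 4 else 8    # the step in customer demand at week 5
    incoming_order = demand
    shipped_by = {}                   # what each stage ships downstream this week
    for s in STAGES:
        inventory[s] += pipeline[s].pop(0)     # in-transit shipment arrives
        owed = incoming_order + backlog[s]     # current order plus unfilled past orders
        shipped_by[s] = min(owed, inventory[s])
        inventory[s] -= shipped_by[s]
        backlog[s] = owed - shipped_by[s]
        # Naive policy: cover the incoming order and close the inventory gap,
        # ignoring stock already on order -- the classic mistake.
        gap = TARGET_INVENTORY - inventory[s] + backlog[s]
        order = max(0, incoming_order + gap)
        orders[s].append(order)
        incoming_order = order        # this order becomes upstream's demand

    # Shipments enter the downstream pipelines; the factory draws on an
    # unlimited raw-material supply, so its own order arrives in full.
    pipeline["factory"].append(orders["factory"][-1])
    for down, up in zip(STAGES, STAGES[1:]):
        pipeline[down].append(shipped_by[up])

for s in STAGES:
    print(f"{s:>11}: peak order = {max(orders[s])} cases "
          f"(customer demand never exceeds 8)")
```

[Run it and every stage's peak order blows far past the customer's 8 cases, growing worse the further you get from the retailer. The repair players consistently miss is to count inventory that has been ordered but not yet delivered.]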

When the actual pattern of customer demand was revealed, many voiced disbelief. According to Professor Sterman, "many participants are quite shocked when the actual pattern of customer orders is revealed."

These sorts of studies are well-known and easily replicated in business schools.

[TypePad's spam filter is acting up, so I'll have to split this into multiple parts.]


That's interesting. I wonder what methods and information they used to adjust the predictions? Could any of them be used to improve the computer algorithms?
