Goals and plans in decision making

For years, Dave Krantz has been telling me about his goal-based model of decision analysis.  It’s always made much more sense to me than the usual framework of decision trees and utility theory (which, I agree with Dave, is not salvaged by bandaids such as nonlinear utilities and prospect theory).  But, much as I love Dave’s theory, or proto-theory, I always get confused when I try to explain it to others (or to myself):  "it’s, uh, something about defining decisions based on goals, rather than starting with the decision options, uh, …."  So I was thrilled to find that Dave and Howard Kunreuther just published an article describing the theory.  Here’s the abstract:

We propose a constructed-choice model for general decision making. The model departs from utility theory and prospect theory in its treatment of multiple goals and it suggests several different ways in which context can affect choice.

It is particularly instructive to apply this model to protective decisions, which are often puzzling. Among other anomalies, people insure against non-catastrophic events, underinsure against catastrophic risks, and allow extraneous factors to influence insurance purchases and other protective decisions. Neither expected-utility theory nor prospect theory can explain these anomalies satisfactorily. To apply this model to the above anomalies, we consider many different insurance-related goals, organized in a taxonomy, and we consider the effects of context on goals, resources, plans and decision rules.

The paper concludes by suggesting some prescriptions for improving individual decision making with respect to protective measures.

Turning to their paper: Table 1 shows the classical decision-analysis framework, and Table 2 shows the new model, which I agree is better.  I want to try to apply it to our problem of digging low-arsenic wells for drinking water in Bangladesh.

Is vs. should

I have a couple of qualms about Dave’s approach, though, which involve distinguishing between descriptive and normative concerns.  This comes up in all models of decision making:  on one hand, you can’t tell people what to do (at best, you can point out inconsistencies in their decisions or preferences), but on the other hand these theories are supposed to provide guidance, not just descriptions of our flawed processes.

Anyway, I’m not so thrilled with goals such as “avoid regretting a modest loss” in Krantz and Kunreuther’s Table 5.  The whole business of including “regret” in a decision model has always seemed to me to be too clever by half, especially given all the recent research on the difficulties of anticipating future regret.  I’d rather focus on more stably measurable outcomes.

Also, Figure 4 is a bit scary to me.  All those words in different sizes!  It looks like one of those "outsider art" things:

[Figure: krantzmap.png]

In all seriousness, though, I think this paper is great.  It’s the only model of decision making I’ve seen that has the potential to make sense.

Need a better name

But I wish they wouldn’t call their model "Aristotelian."  As a former physics student, I don’t have much respect for Aristotle, who seems to have gotten just about everything wrong.  Can’t they come up with a Galilean model?

Relevance for "overcoming bias"

Dave’s work relates to "overcoming bias" because I think the classical decisions/utilities/outcomes model of decision making has serious problems (well-known ones, such as decision makers limiting their search to the actions already in the model and downweighting the probability of unforeseen outcomes), to the point that the model itself introduces bias into our decisions.

  • http://www.subsolo.org/gustibus/archives/2007/07/index.html#007620 De Gustibus Non Est Disputandum

    This looks good

    The full text is here. With the advances in neuroscience, only prejudice will explain, in the medium term, the lack of interdisciplinarity between the departments of Economics and: i. those of Biology, ii. those of Psychology, iii. those…

  • Douglas Knight

    these theories are supposed to provide guidance, not just descriptions of our flawed processes

    That’s just not true. Prospect theory is very explicitly a model of what we do wrong. To compare prospect theory and decision theory is a very serious category error. I can’t tell if you’re making the error or accusing the paper of making the error; I find the possibility that both are true quite distressing.

  • Hopefully Anonymous

    Andrew,
    Great contribution, thanks. I’m curious about one line of this post: “you can’t tell people what to do (at best, you can point out inconsistencies in their decisions or preferences)”. It sounds like you’re giving us a shorthand here for some larger fleshed-out or empirically demonstrated idea? I ask because on its face it seems quite possible in a variety of situations to tell people what to do. Does this line refer specifically to your situation regarding low-arsenic wells and Bangladesh? More explanation would be appreciated.

  • http://profile.typekey.com/andrewgelman/ Andrew

    Douglas,

    I’m sure that any category errors are mine, not Dave Krantz’s. To respond to your distress, let me elaborate.

    I agree that prospect theory, standing alone, is descriptive rather than normative. What I meant was that in the larger context, prospect theory is an adjustment to classical decision theory, which is definitely normative. From a historical standpoint, classical decision theory has been augmented over the decades with various fixes to become more descriptively accurate. For example, there was a time when people thought that nonlinear utility functions could explain the psychological/decision-making phenomenon of risk aversion. Kahneman, Tversky, and others showed that “loss aversion” and “uncertainty aversion” cannot be explained in this way; hence certain aspects of prospect theory. So, yes, it’s descriptive. But I think it’s fair to say that prospect theory has an underlying goal of providing guidance, perhaps to calibrate our intuitions. Prospect theory and classical decision theory share a decisions/utilities/outcomes framework, and Krantz and Kunreuther argue that these theories have problems both descriptively and normatively.
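
    To unpack the risk-aversion point, here is a minimal sketch (my own illustration, not anything from Krantz and Kunreuther’s paper) of how a concave utility function makes an expected-utility maximizer turn down a fair gamble, and of why small-stakes loss aversion is a different animal:

    ```python
    # Minimal sketch: concave utility implies risk aversion under expected utility.
    # The square-root utility function and the dollar amounts are made up for illustration.
    import math

    def u(wealth):
        return math.sqrt(wealth)  # hypothetical concave utility of wealth

    # A 50/50 gamble between $0 and $100, versus its expected value ($50) for sure.
    eu_gamble = 0.5 * u(0) + 0.5 * u(100)  # = 5.0
    u_sure = u(50)                         # ~7.07

    print(eu_gamble < u_sure)  # True: the sure $50 is preferred to the gamble

    # Kahneman and Tversky's point: rejecting small-stakes bets (say, a 50/50
    # chance to lose $100 or win $110) can't be matched by any plausible concave
    # utility of total wealth; losses have to be weighted more heavily than gains
    # relative to a reference point, which is loss aversion, not curvature.
    ```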

    I think the normative connection is important. It’s easy to show that prospect theory–or just about any other theory of that type–is descriptively inaccurate. People make dumb decisions all the time–including dumb decisions where it would have been worth spending five minutes to think things over first. That’s not so interesting. It’s a bigger step to also show (or claim) that the decisions/utilities/outcomes framework is not the best way to solve decision problems.

    For more elaboration on this, see Dave Krantz’s comments here.

  • http://www.stat.columbia.edu/~gelman/blog/ Andrew

    Hopefully A.,

    I don’t think I had any deep point here. Sometimes we can tell people what to do–or, even more precisely, it’s easier to get people to do what we want if we can first figure out what we want them to do.

    What I really meant was that decision theorists such as Dave Krantz aren’t in a position to tell you and me what to do. When I’ve taught classical decision analysis, I’ve told students that it does two things:
    (1) For well-defined utilities and uncertainties, the theory can actually tell you what to do.
    (2) More generally, the theory can point out inconsistencies in your decisions and preferences.
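
    To make point (1) concrete, here is a minimal sketch of the expected-utility calculation for a made-up protective decision; the probabilities, premium, and dollar-valued utilities are all invented for illustration, not taken from Krantz and Kunreuther or from our Bangladesh work:

    ```python
    # Minimal sketch: once utilities and probabilities are fully specified,
    # classical decision analysis just picks the action with the highest
    # expected utility. All numbers below are made up.

    p_loss = 0.05   # assumed probability of the bad event
    premium = 60    # assumed cost of insurance
    loss = 1000     # assumed uninsured loss

    # Utilities here are just dollars (risk-neutral), purely for illustration.
    actions = {
        "insure":       {"loss": -premium, "no_loss": -premium},
        "don't insure": {"loss": -loss,    "no_loss": 0},
    }
    probs = {"loss": p_loss, "no_loss": 1 - p_loss}

    expected_utility = {
        a: sum(probs[s] * outcome for s, outcome in outcomes.items())
        for a, outcomes in actions.items()
    }
    best = max(expected_utility, key=expected_utility.get)

    print(expected_utility)  # insure: -60.0, don't insure: -50.0
    print(best)              # "don't insure" under these made-up numbers
    ```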

    In our project in Bangladesh, and also in our earlier work in home radon remediation, we certainly aren’t in a position to tell people what to do. This is one reason I think that an appropriate role for government and other outside agencies to play is to collect and analyze information that individuals can find useful.

    See this discussion of personal vs. institutional decision analysis. In textbooks, decision analysis is often described in the context of personal decisions (which car should I buy, which job should I take, etc.) but I think the theory works better in institutional contexts or in settings where the analyst is not the same as the person who makes the decision.

  • Hopefully Anonymous

    Andrew, thanks for the quick and clear explanation. In a select environment of very positively deviant rational analysts here on this blog, I think you stand out in the crowd. Do you share the concerns about personal mortality odds (and hence the general existential odds of humanity) that some other contributors and commenters of overcomingbias have? I’m thinking here of Anders Sandberg and TGGP, among others. If so, how are you attempting to maximize your personal persistence odds?

    Also, have you considered maximizing offspring with women who are also the most demonstrably able to analyze and model solutions to the existential threats we all face? For example, by a process of sperm and egg donation, in vitro fertilization, surrogate pregnancy, and adoption, and perhaps even incentive trusts to encourage the talented members of such offspring to get an education and work in fields where they’re most likely to positively impact our existential odds?

    I’m asking as someone who is concerned with maximizing my personal odds of persistence, and looking for the most efficient ways to achieve that goal. Feel free to reply from an anonymous email account to lawfinals@yahoo.com if you feel the need to.