For years, Dave Krantz has been telling me about his goal-based model of decision analysis. It’s always made much more sense to me than the usual framework of decision trees and utility theory (which, I agree with Dave, is not salvaged by bandaids such as nonlinear utilities and prospect theory). But, much as I love Dave’s theory, or proto-theory, I always get confused when I try to explain it to others (or to myself): "it’s, uh, something about defining decisions based on goals, rather than starting with the decision options, uh, …." So I was thrilled to find that Dave and Howard Kunreuther just published an article describing the theory. Here’s the abstract:
We propose a constructed-choice model for general decision making. The model departs from utility theory and prospect theory in its treatment of multiple goals and it suggests several different ways in which context can affect choice.
It is particularly instructive to apply this model to protective decisions, which are often puzzling. Among other anomalies, people insure against non-catastrophic events, underinsure against catastrophic risks, and allow extraneous factors to influence insurance purchases and other protective decisions. Neither expected-utility theory nor prospect theory can explain these anomalies satisfactorily. To apply this model to the above anomalies, we consider many different insurance-related goals, organized in a taxonomy, and we consider the effects of context on goals, resources, plans and decision rules.
The paper concludes by suggesting some prescriptions for improving individual decision making with respect to protective measures.
Turning to their paper: Table 1 shows the classical decision-analysis framework, and Table 2 shows the new model, which I agree is better. I want to try to apply it to our problem of digging low-arsenic wells for drinking water in Bangladesh.
Is vs. should
I have a couple of qualms about Dave’s approach, though, which involve distinguishing between descriptive and normative concerns. This comes up in all models of decision making: on the one hand, you can’t tell people what to do (at best, you can point out inconsistencies in their decisions or preferences); on the other hand, these theories are supposed to provide guidance, not just descriptions of our flawed processes.
Anyway, I’m not so thrilled with goals such as the one in Krantz and Kunreuther’s Table 5, "avoid regretting a modest loss." The whole business of including "regret" in a decision model has always seemed to me too clever by half, especially given all the recent research on the difficulties of anticipating future regret. I’d rather focus on more stably measurable outcomes.
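For concreteness, here is roughly what a regret-based decision rule looks like. This is a minimal minimax-regret sketch with invented insure-or-not numbers, a generic textbook version rather than Krantz and Kunreuther's actual formulation:

```python
# Minimal sketch of a regret-based decision rule (minimax regret).
# The payoff numbers are invented for illustration only.

payoffs = {
    "insure":      {"no_loss": -1.0, "loss": -1.0},   # pay the premium either way
    "dont_insure": {"no_loss":  0.0, "loss": -30.0},  # risk the full loss
}
states = ["no_loss", "loss"]

# Regret in a state = best achievable payoff in that state minus your payoff.
best_in_state = {s: max(payoffs[a][s] for a in payoffs) for s in states}
regret = {a: {s: best_in_state[s] - payoffs[a][s] for s in states}
          for a in payoffs}

# Minimax regret: pick the action whose worst-case regret is smallest.
choice = min(payoffs, key=lambda a: max(regret[a].values()))
print(choice)  # "insure": worst-case regret 1.0, vs. 29.0 for "dont_insure"
```

The point of the qualm above is that the regret terms in such a rule depend on anticipating how you will feel about foregone alternatives, which is exactly what the recent research suggests people do poorly.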
Also, Figure 4 is a bit scary to me. All those words in different sizes! It looks like one of those "outsider art" things.
In all seriousness, though, I think this paper is great: it’s the only model of decision making I’ve seen that has the potential to make sense.
Need a better name
But I wish they wouldn’t call their model "Aristotelian." As a former physics student, I don’t have much respect for Aristotle, who seems to have gotten just about everything wrong. Can’t they come up with a Galilean model?
Relevance for "overcoming bias"
Dave’s work relates to "overcoming bias" because I think the classical decisions/utilities/outcomes model of decision making has serious problems, well-known ones such as decision-makers limiting their search for actions to those already considered in the model and downweighting the probability of unforeseen outcomes, to the extent that this model itself introduces bias into our decisions.
Andrew, thanks for the quick and clear explanation. In a select environment of very positively deviant rational analysts here on this blog, I think you stand out in the crowd. Do you share the concerns about personal mortality odds (and hence the general existential odds of humanity) that some other contributors and commenters of overcomingbias have? I'm thinking here of Anders Sandberg and TGGP, among others. If so, how are you attempting to maximize your personal persistence odds?
Also, have you considered maximizing offspring with women who are also the most demonstrably able to analyze and model solutions to the existential threats we all face? For example, by a process of sperm and egg donation, in vitro fertilization, surrogate pregnancy, and adoption, and perhaps even incentive trusts to encourage the talented members of such offspring to get an education and work in fields where they're most likely to positively impact our existential odds?
I'm asking as someone who is concerned with maximizing my personal odds of persistence and looking for the most efficient ways to achieve that goal. Feel free to reply from an anonymous email account to lawfinals@yahoo.com if you feel the need to.
Hopefully A.,
I don't think I had any deep point here. Sometimes we can tell people what to do--or, more precisely, it's easier to get people to do what we want if we can first figure out what we want them to do.
What I really meant was that decision theorists such as Dave Krantz aren't in a position to tell you and me what to do. When I've taught classical decision analysis, I've told students that it does two things:
(1) For well-defined utilities and uncertainties, the theory can actually tell you what to do.
(2) More generally, the theory can point out inconsistencies in your decisions and preferences.
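To make point (1) concrete, here is a minimal sketch of a fully specified expected-utility calculation; the probabilities and utilities are invented toy numbers for an insure-or-not choice, not values from anyone's actual analysis:

```python
# Point (1) in miniature: with utilities and uncertainties fully specified,
# expected-utility theory tells you what to do. All numbers are made up.

probs = {"no_loss": 0.95, "loss": 0.05}        # probabilities of the states

utilities = {
    "insure":      {"no_loss": -1.0, "loss": -1.0},   # premium paid either way
    "dont_insure": {"no_loss":  0.0, "loss": -30.0},  # bear the full loss
}

def expected_utility(action):
    """Average the action's utility over states, weighted by probability."""
    return sum(probs[s] * utilities[action][s] for s in probs)

# The recommended action is simply the one with the highest expected utility.
best = max(utilities, key=expected_utility)
print(best)  # "insure": EU = -1.0, vs. -1.5 for "dont_insure"
```

Point (2) is what's left when the utilities and probabilities aren't so well defined: the theory can still flag, say, a set of stated preferences that no single utility function could produce.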
In our project in Bangladesh, and also in our earlier work in home radon remediation, we certainly aren't in a position to tell people what to do. This is one reason I think that an appropriate role for government and other outside agencies to play is to collect and analyze information that individuals can find useful.
See this discussion of personal vs. institutional decision analysis. In textbooks, decision analysis is often described in the context of personal decisions (which car should I buy, which job should I take, etc.) but I think the theory works better in institutional contexts or in settings where the analyst is not the same as the person who makes the decision.