11 Comments

JMG3Y, as Hal notes, simple attempts to "debias" usually fail. But any time someone uses statistical techniques to draw a conclusion, they are implicitly acknowledging that just eyeballing the data would be biased. I'd call that a typically successful attempt to overcome bias.

JMG3Y, there has been a great deal of research on "debiasing", attempts to reduce various perceptual and judgmental biases in different ways. I've looked at a few of these papers, and the consensus seems to be that debiasing is extremely difficult and usually doesn't work. However, it is not usually done simply by explaining the reality of Bayesian inference or probability theory and then turning people loose on problems. Rather, various tricks are used, such as getting them to consider alternatives, or imagine themselves in certain scenarios, or rewording the problems to try to reduce biasing effects. And as I said, usually these don't help much.

Tetlock told an amusing story of his debiasing experiment that backfired, in his book I reviewed earlier. He attempted to get participants to explicitly consider a wide range of alternative scenarios in making a forecast, to try to overcome a common bias of focusing too soon in analysis. But his single-minded "hedgehogs" refused to take the scenarios seriously since they thought they already knew exactly what was going to happen; their scores didn't change. And his open-minded "foxes" wasted so much time delightedly exploring the intricacies of the new scenarios that they lost track of the bigger picture and ended up doing worse in the exercises.

In general there seems to be something of an unstated assumption that just teaching people Bayesian decision theory would be uselessly abstract; I don't know if this is due to earlier failed experiments, or perhaps reflects experimenters' judgment that the theory is too complex for average subjects to grasp.

Reading Robin's paper on what in our ancient brain might be driving our health care choices, from the consumption level to the health policy and research expenditure allocation level, and other posts on this blog brings to mind a basic (and likely naive) question:

How strong is the empirical evidence that an understanding of the cognitive problems that can result in decision-making errors, such as understanding the sources and effects of the many forms of bias, improves an individual's decision making significantly? Or, instead of improving metacognition, does the evidence show that more benefit comes from improving the process itself? Or both?

Is this different for group decision making as opposed to individual decision making? In other words, what produces decisions with the least error: design and execution of the process, or training the individual members? Would everyone, from individual consumers to national politicians, actually make significantly different choices if they better understood some aspect of this?

A related question. Does a sound understanding of the human learning process, such as it is, improve a learner's performance significantly? Or should the focus remain on the design of the instruction process itself that in turn drives and controls the learner's behavior much of the time?

Hi Robin, the answer is "probably". I haven't seen any serious obstacles to turning intuitionistic probability theory into intuitionistic decision theory, though as I mentioned before I haven't carried this program through in any detail. There are a couple of places where things are likely to go very differently from classical Bayesianism -- one in particular is that constraining utility functions to constructive functions will mean that you have a different class of utility functions (relative to classical logic). A second question is the interpretation of conditional probabilities -- while you can go ahead and define P(A|B) = P(A and B)/P(B) just as before, it could be a better idea to interpret conditionality as a modal operator in the logic. (That is, the sentences that get assigned probabilities are like A and B, A or B, A|B, A implies B, etc.)
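To make the contrast concrete, here is one way the two readings could be written down; this framing is my own sketch, not something Neel spelled out:

```latex
% Ratio reading: the conditional probability is defined from unconditional ones.
\[
  P(A \mid B) \;=\; \frac{P(A \wedge B)}{P(B)} \qquad (P(B) > 0).
\]
% Operator reading: the bar is a connective of the object language, so a
% sentence such as (A | B) carries a probability of its own, P((A | B)),
% rather than being defined by the ratio above.
```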

Finally, there are still computational objections to even intuitionism -- interpreted computationally, intuitionistic logic limits you to computable functions, but a strongly finitist view might be that "computable" is still too generous, since no one can actually compute a function that takes (say) hyper-exponential space. While I have a good deal of sympathy for this point of view, I think that the logics here are still too immature to try to base a decision theory on.

Neel, is your concept of rational beliefs integrated into a concept of rational decisions, similar to the way ordinary probabilities are integrated into ordinary decision theory?

Nick, regarding your three points: I grant that our degree of error may be context dependent, and so the attractiveness of Bayesian analysis may vary with context in that way. In future posts I will elaborate more on how to apply Bayesian analysis to more types of belief.

Finally, I suggest Bayesian beliefs as a normative standard of reference, not as an exact procedure. So it would be a problem if I could not show you Bayesian arguments that our initial inclinations of beliefs about epistemic criteria are full of error. But it is not problematic that I did not exactly calculate Bayesian probabilities when I formed those beliefs. In fact, we have many kinds of data suggesting (to a Bayesian) large errors in our beliefs about what kinds of beliefs are reasonable. Is this really in any doubt?

As it happens, I am not a (classical) Bayesian, because I don't see any reason for the requirement that probabilities in a probability distribution must sum to one. If I have a low belief that something is the case, then it doesn't follow that I have a high belief that it is not the case -- if P(X) = 0.1, then it doesn't follow that P(not X) = 0.9. It can certainly be the case that I don't have strong beliefs about the subject at all.

This works perfectly sensibly as a probability theory too. The main change is that instead of building probability theory over a boolean algebra (ie, a model of classical logic), you build it up over a Heyting algebra (ie, a model of constructive logic). The constructive failure of the law of the excluded middle becomes the rule that P(X) + P(not X) <= 1. Now, you can do decision and game theory in a standard way over this nonstandard probability theory, though I haven't analyzed any theorems to see how they decompose constructively. (I need to graduate....)
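For concreteness, here is a minimal sketch of how that rule can fall out of one standard presentation of probability valuations on a Heyting algebra; the particular axiom set is my choice of presentation, not something spelled out above:

```latex
% Axioms for a probability valuation P on a Heyting algebra (one common
% presentation): normalization, monotonicity, and the modular law.
\begin{align*}
  &P(\bot) = 0, \qquad P(\top) = 1,\\
  &a \le b \;\Rightarrow\; P(a) \le P(b),\\
  &P(a \vee b) + P(a \wedge b) = P(a) + P(b).
\end{align*}
% Constructively a \wedge \neg a = \bot, but a \vee \neg a need not equal \top,
% so the law of the excluded middle weakens to an inequality:
\begin{align*}
  P(a) + P(\neg a)
    = P(a \vee \neg a) + P(a \wedge \neg a)
    = P(a \vee \neg a) \;\le\; 1.
\end{align*}
```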

This choice about the appropriate standard of rationality will be motivated by your opinions about the proper foundations of mathematics (eg, constructive or classical). This, in turn, means that you cannot choose the foundations rationally, because how you rationally update your beliefs depends on your beliefs about the foundations.

Robin: I understand the issue you are raising like this: it is not one of restricted methodology, that is, when are Bayesian methods appropriate, but the general claim that we ought to be Bayesian believers, or that rational belief is believing in accordance with probabilistic belief as determined by Bayesian methods. What you offer is essentially an argument by analogy.

1. Given noisy data, it is more truth conducive to use a simpler rather than more complex curve fitting strategy.
2. Our beliefs are like noisy data.
3. Therefore it is more truth conducive to use a simpler strategy for rationalising our beliefs.
4. Bayesianism is simpler than other modes of non-deductive reasoning.
5. Therefore we ought to be Bayesian believers.

I note that the first premiss conceals a huge and very interesting methodological issue in its own right, and the fourth premiss could bear considerable discussion, but that is not what I want to discuss.

There are three main points I would make. First, I would suggest that Bayesianism is ill suited to justifying why we should believe the second premiss, since what we need is an argument for why beliefs are relevantly like noisy data. You have offered some reasons along that line, and my point is that you have engaged in standard non-deductive reasoning rather than offered a calculation of its probability. If this point is correct then the conclusion must be false and all we can talk about are which circumstances are those in which Bayesianism is the right guide. But that is not what you want.

Secondly, the second premiss looks like a contingent claim, so the conclusion is too strong and could only apply when beliefs are in fact relevantly like noisy data.

Finally, the conclusion has to be amplified in terms of justified belief being a matter of belief formed in accordance with Bayesian principles, but what are they and what exactly would that mean for how our beliefs ought to be? As a matter of fact we do not reason by calculating probabilities except when we are reasoning with *full* beliefs about probabilistic propositions. But your Bayesian wishes to apply these principles to our beliefs in general, including a priori beliefs such as philosophical doctrines (I see that Paul has brought up the class of a priori beliefs that are moral beliefs in his querying the use of Bayesianism). The way you have applied this is by formulating general principles of reasoning by reflecting on the outcomes of the mathematical results, e.g. ‘take account of disagreement’. But that, by being only supplemental, is to acknowledge the priority of the standard modes of non-deductive reasoning.

Conchis, perhaps hedgehogs place excess confidence in the predictions of their single model, while foxes are appropriately uncertain about the implications of their eclectic mix of models? Perhaps in other areas the relative confidence of those with simple and complex models is different?

It's interesting to think how this fits with Philip Tetlock's work (which has been commented on previously on this blog) suggesting that it's those who are more willing to err on the side of context-specific explanatory models (foxes) who are typically more accurate, compared with those who try to apply simple "one-size-fits-all" frameworks to every problem (hedgehogs). My knee tends to jerk with Robin's, but the challenge is to figure out when simplification is likely to be useful and when not. Context-dependence is context dependent. Or something.

I just completed a course on approximation of functions, and this reminds me of a common phenomenon there. Consider, for example, a continuous function f that we wish to approximate by a polynomial function p of degree n. If we suppose that f is unknown but we have a set of points from it (as is often the case with functions describing some new physical process), we can choose p so that it passes through up to n + 1 of those points. The trouble is that as we increase the number of interpolation points (and thereby increase n), p oscillates wildly, and unless f is periodic, that's generally not a good thing. So we can always get a function that better fits the data, but it's another matter as to whether that function is a better approximation to the unknown function.
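A quick numerical sketch of that effect, using Runge's classic example; the particular function, the equispaced nodes, and the degrees shown are my choices for illustration, not details from the comment above:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Interpolate f at equispaced points with a polynomial whose degree grows
# with the number of points. Each polynomial matches its data exactly, yet
# the maximum error against the true f gets worse, not better.

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)    # Runge's function

xs = np.linspace(-1.0, 1.0, 2001)        # dense grid for measuring the error

for n in (4, 8, 12, 16):
    nodes = np.linspace(-1.0, 1.0, n + 1)    # n + 1 equispaced interpolation points
    p = Polynomial.fit(nodes, f(nodes), deg=n)
    err = np.max(np.abs(p(xs) - f(xs)))
    print(f"degree {n:2d}: max |p - f| on [-1, 1] is {err:7.3f}")
```

The printed maximum error grows with the degree even though every polynomial passes through all of its data points, which is exactly the gap between fitting the data better and approximating the unknown function better.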
