Author Archives: Nick Bostrom

Write Your Hypothetical Apostasy

Let's say you have been promoting some view (on some complex or fraught topic – e.g. politics, religion; or any "cause" or "-ism") for some time.  When somebody criticizes this view, you spring to its defense.  You find that you can easily refute most objections, and this increases your confidence.  The view might originally have represented your best understanding of the topic.  Subsequently you have gained more evidence, experience, and insight; yet the original view is never seriously reconsidered.  You tell yourself that you remain objective and open-minded, but in fact your brain has stopped looking and listening for alternatives.

Here is a debiasing technique one might try: writing a hypothetical apostasy.  Remind yourself before you start that unless you later choose to do so, you will never have to show this text to anyone.

Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.

(If anybody tries this, feel free to comment below on whether you found the exercise fruitful or not – but no need to state which specific view you were considering or how it changed.)


Moral uncertainty – towards a solution?

It seems people are overconfident about their moral beliefs. But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues? What if you don't know which moral theory is correct?

It doesn't seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel, because many moral theories state that you should not always maximize expected utility.

Even if we limit consideration to consequentialist theories, it is still hard to see how to combine them in the standard decision-theoretic framework. For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism. An action might then add 5 utils to total happiness while decreasing average happiness by 2 utils. (This could happen, e.g., if you create a new happy person who is less happy than the people who already exist.) What do you do, for different values of X?
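For concreteness, here is a minimal sketch (in Python, with the made-up numbers from the example) of the naive move of treating this as an ordinary expected-value problem. Note that it only goes through if the two theories' utils are assumed to sit on a common scale, which is exactly the assumption in question.

```python
# Naive "expected choiceworthiness" calculation for the example above.
# Caution: this treats total-utilitarian utils and average-utilitarian utils
# as directly comparable on one scale -- precisely the contestable assumption.

def naive_expected_value(p_total: float,
                         delta_total: float = 5.0,
                         delta_average: float = -2.0) -> float:
    """Credence-weighted value of the action across the two theories."""
    return p_total * delta_total + (1.0 - p_total) * delta_average

for x in (0.1, 0.2, 0.3, 0.5, 0.9):
    ev = naive_expected_value(x)
    print(f"X = {x:.0%}: naive EV = {ev:+.2f} -> {'act' if ev > 0 else 'refrain'}")

# Break-even credence: 5X = 2(1 - X), i.e. X = 2/7, roughly 28.6%.
```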

The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc.  We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.

I'm working on a paper on this together with my colleague Toby Ord. We have some arguments against a few possible "solutions" that we think don't work. On the positive side, we have some tricks that work for a few special cases. Beyond that, the best we have managed so far is a kind of metaphor. We don't think it is literally and exactly correct, and it is a bit under-determined, but it seems to get things roughly right and might point in the right direction:

Continue reading "Moral uncertainty – towards a solution?" »


Towards a typology of bias

It seems to me that we have reached a stage in our discussions on this blog, and in the field of bias studies more generally, where it would be useful to begin to develop a more systematic typology.  There are so many different alleged biases that without some unifying framework it is easy to get lost in the details.  Finding the right categories would also help us theorize better about bias.

To this end, let me tentatively propose a classification scheme, organized around the sources of bias:

Type-I biases arise from the fact that our beliefs sometimes serve functions – such as social signaling – that can conflict with their navigational (truth-tracking) function. Our tendency to overestimate our own positive attributes, for example, may be a Type-I bias.

Type-II biases arise from the shortcomings and flaws of our minds. We are subject to various kinds of processing constraints, and even aside from these hard limitations we were not very successfully optimized for efficiency in abstract rationality, even in contexts where no adaptive function interferes with the navigational function of our beliefs. Type-II biases can result from fast-and-frugal heuristics that compromise accuracy for speed and ease of use, or from various idiosyncratic features of our brains and psyches. We can distinguish subtype-II(a) biases, deriving from shortcomings general to the human psyche (availability bias?), and subtype-II(b) biases, deriving from shortcomings specific to some individual or group (beliefs about being in danger among the paranoid?).

Type-III biases arise from our avoidable ignorance of facts or lack of insights, the possession of which would have improved our epistemic accuracy across a broad domain. (Many of Eliezer's recent postings appear to aim to overcome Type-III bias, for example by explaining important facts about evolution, which would help us form more accurate beliefs about many specific issues that are illuminated by evolutionary biology.) We distinguish subtype-III(a), resulting from lack of (procedural) insights about methodology, logic, or reasoning principles (e.g. anthropic bias), and subtype-III(b), resulting from lack of (substantive) knowledge about theoretical or concrete facts (e.g. errors resulting from ignorance about the basic findings of evolutionary psychology).

Continue reading "Towards a typology of bias" »


Why do corporations buy insurance?

Yesterday I wondered:

Why do corporations buy insurance for fire damage and such? It seems to me that maybe they oughtn't, since the cost of insurance is greater than the expected payouts (due to administrative costs, asymmetric information, moral hazard, etc.). Investors should presumably prefer corporations to be pure bets, and reduce risk and volatility by holding suitably diversified portfolios.

Today my colleague Peter Taylor, who worked in the insurance industry for many years, replied (reproduced here with permission):

Corporations certainly do buy insurance against fire, and very good value it proves to be for them, I must say, when a large-scale fire does occur. Your argument was adopted by some large corporations going "self-insured" or creating their own "captives", but generally it takes one large loss and they are back in the insurance market. Moreover, the argument for self-insurance can be about saving a few pennies off expenses rather than assessing the real risk – a recent example was Hull Council deciding to self-insure with its own fund against flood rather than pay the market price, underestimating the losses by an order of magnitude. The reversion to the insurance market is partly to do with shareholders' wish for stable results as well as their reluctance to accept bad luck. Shareholders don't seem to accept that accidents/fires/whatever happen, and blame the management (Napoleon's unlucky generals), so from a management point of view it is much easier to buy the insurance year on year and avoid getting caned when a loss does occur.

I’m still not sure I completely understand why insurance is bought. It might be that shareholders are biased (which seems to be what Peter suggests).  If so, is this a recognized failing? Do sophisticated institutional investors also prefer that the companies they own stock in buy fire insurance?

Continue reading "Why do corporations buy insurance?" »


Disagreeing about cognitive style (or personality)

I think I can understand what Tyler is getting at when he accuses Robin of a penchant for "logical atomism". In the present context, I interpret Tyler's claim as a plea for greater appreciation of the messiness and ambiguity of evidence when looked at closely, more toleration for different modes of consideration, less eagerness to embrace a few "stylized facts" and use them to draw bold, sweeping, shocking conclusions, and less faith in the fact/value distinction.

One might think that cognitive style is purely a matter of taste, with no right or wrong. Alternatively, one might think that different people have different strengths and weaknesses, and that it makes sense for individuals to adopt a cognitive style that makes the best use of the cognitive resources they have. For example, somebody who is good with numbers should make more use of numerical data, while someone who is weak in math should employ more qualitative or narrative modes of cognition. On this view, there is right and wrong, but it is relative: different for different people.

A third alternative is that there is an optimal cognitive style that we should all attempt to approximate.  Through accidents of genetics and development, we diverge from this ideal in different directions.  But we can learn from others and from our own experience to calibrate our tendencies to better resemble the ideal human epistemic agent.  (A similar set of views could be formulated about emotional style, or personality.)

Continue reading "Disagreeing about cognitive style (or personality)" »


URGENT AND IMPORTANT (not)

Eliezer recently noted the general problem of lack of accountability for futuristic predictions. I wonder whether there may also be an additional problem specifically for claims of urgency or importance (e.g. ones referring to "a critical period" or "a crucial stage" or a "very important task"…).

I've noticed in some projects I've been involved with that there were many steps, each of which, at the time, was said to be and gave the appearance of being "the really crucial one, the one that would determine the success or failure of the project". Having passed one hurdle, there was another one – this time the really critical one. Then another, the really really critical one. Then one more…

Maybe project managers produce inflation in the currency of urgency. In order to maximize the effort of their teams, they hype each stage as being more important and urgent than it really is. Once the team catches on, the manager must increase the hype even more, just to achieve the same effect. In the end, every task must be flagged as top priority in order to get done at all.

I'm trying to avoid doing this, but I suspect that I am thereby making my communication less effective when I'm talking to audiences whose "importance-meter" has been calibrated to speakers who routinely use emphatic language to get attention and to underscore the importance of what they are saying.

Continue reading "URGENT AND IMPORTANT (not)" »


Overcoming bias – what is it good for?

One sign that science is not all bogus is that it enables us to do things, like go to the moon. What practical things does debiasing enable us to do, other than refraining from buying lottery tickets?

In this context, it is not so helpful to adduce controversial philosophical or futuristic conclusions, such as that one should sign up for cryonics, reallocate all one’s charity to combat existential risk, or focus one’s work on creating Friendly AI. For presumably it would be as easy to delude oneself that these conclusions are correct as it is to delude oneself that one has been successful in overcoming bias and that one has thereby become an importantly better epistemic agent.

Consistent long-term success in active stock market speculation would be an impressive proof. But to require that would be to set the standard too high. Presumably, markets already suffer much less from bias than many other contexts, so even if one cannot beat the market one might nevertheless have gained some important ability.

But in what sphere of application does success at overcoming bias yield uncontroversial practical benefits?

Continue reading "Overcoming bias – what is it good for?" »


Multipolar disagreements (Hal's religious quandary)

Hal Finney wrote: "…reminds me of my justification for not being religious: the majority of people in the world are not Christian, the majority of people in the world are not Muslim, the majority of people in the world are not Hindu, the majority of people in the world are not Buddhist, etc… So I can’t pick any religion without being in a minority! I’m not sure the conclusion really follows though. Something I’m still working on."

Also, the majority of people in the world are not atheist (or non-religious, or secular). Absent reasons to weight some opinions more, what should one believe when there are several inconsistent views, none of them commanding majority support?

I think in such a case one should believe a superposition of the views, i.e. one assigns a probability measure over the alternatives that reflects the degree of support they each have from their various constituencies. In the unrealistic, simplest case, where everyone’s reliability is the same and errors are uncorrelated, this might perhaps amount to assigning probability proportional to number of proponents.

Assuming the unrealistic simplifying premiss, in Hal's case this would amount to being uncertain but not dismissive about spiritual matters: being, say, an agnostic who tends to believe that some existing religion is probably right, but is not sure which one, though more likely one of the big ones than some minor cult.
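Here is a minimal sketch of that simplest case, with rough placeholder adherent figures (not carefully sourced) standing in for "number of proponents":

```python
# Sketch of the "superposition" idea under the post's unrealistic simplification:
# equal reliability and uncorrelated errors, so credence is proportional to the
# number of proponents. The adherent figures (in billions) are rough
# placeholders, for illustration only.

adherents_billions = {
    "Christianity": 2.3,
    "Islam": 1.9,
    "Non-religious/atheist": 1.2,
    "Hinduism": 1.1,
    "Buddhism": 0.5,
    "Other views": 1.0,
}

total = sum(adherents_billions.values())
credences = {view: n / total for view, n in adherents_billions.items()}

for view, p in sorted(credences.items(), key=lambda kv: -kv[1]):
    print(f"{view:<22} {p:.2f}")
```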

Of course, you might find that almost everybody would agree that such agnosticism is wrong, and you would find yourself in disagreement with this overwhelming majority. But it would nevertheless seem to be the position that would minimize disagreement.

A separate problem is what you should do if you end up with this belief. Suppose each religion claims that you will go to hell unless you believe that particular religion with all your heart. In that case, your rational course of action might be to pick the most likely religion and then do what you can to try to become a full convert to it.

The existence of such extreme disagreements as in the religious case, however, strongly suggests that not everybody involved is unbiased and in honest pursuit of objective truth. Some other factors must play a huge role in determining religious belief. So you might also think that by carefully examining what those non-rational factors are, you may be able to do better than minimizing disagreement; you might reach some insights that would make it rational for you to take sides. Of course, it is easy to delude oneself into thinking that one has such special insights, so one should be cautious.


When are weak clues uncomfortable?

Robin Hanson asked, "What is the common element of topics where people are uncomfortable with weak clues?" I will hazard a guess:

There are situations where even considering (deliberating about, giving conscious attention to) some possible course of action sends a bad signal. Sometimes taking that sort of action might be necessary. But ideally, you don't want to be considering whether to take that action unless you really will take the action. In such cases, you may be uncomfortable with weak clues. If you notice and acknowledge such weak clues at all, they force you to consider whether they are strong enough that you need to take the dreaded action; yet they are likely too weak, so you won't take the action after all. All you have done is to send out the bad signal.

Consider examples such as accusing an employee of stealing, accusing a spouse of cheating, or declaring war on a neighboring country. Sending the signal that you are thinking about whether the weak clues you have are sufficient to warrant embarking on these drastic courses of action may well sour your relationships. In these cases, weak clues can be worse than useless.


Needed: Cognitive forensics?

Perhaps we need a new field of "cognitive forensics" for analyzing and investigating motivated scientific error, bias, and intellectual misconduct. The goal would be to develop a comprehensive toolkit of diagnostic indicators and statistical checks that could be used to detect acts of irrationality and to make it easier to apprehend the culprits. (Robin’s recent post gives an example of one study that could be done.) Another goal would be to create a specialization, a community of scholars who had expertise in this subfield, who could apply it to various sciences, and who could train students taking advanced methodology classes.

Of what components would cognitive forensics be built? I’d think it would have a big chunk of applied statistics, but also contributions from cognitive and social psychology, epistemology, history and philosophy of science, sociology of science, maybe some economics, data mining, network analysis, etc.

Compared to this blog, the field could have somewhat narrower scope, focusing primarily on empirical scientific research rather than on rationality in general. It might also focus primarily on statistical tests rather than on wider issues such as institution design (although ideas for institutional reform might emerge as a side product). It might be driven more by statistical analysis of particular data sets than by big theories of common human cognitive biases (although the latter would serve as a source of inspiration for hypotheses to test).
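As a toy illustration of the kind of statistical check such a toolkit might contain, one could run an excess-significance-style test: given an assumed average statistical power, is the number of "positive" results in a literature surprisingly high? The sketch below is only illustrative, and all inputs are hypothetical.

```python
# Toy excess-significance check: if each of n studies had only `assumed_power`
# probability of coming out statistically significant, how surprising is it to
# observe k_positive significant results? All numbers are hypothetical.
from scipy.stats import binom

n_studies = 40        # studies in some literature
k_positive = 36       # of them reporting a significant positive result
assumed_power = 0.6   # assumed average power if the studied effects are real

# P(at least k_positive successes out of n_studies at rate assumed_power)
p_value = binom.sf(k_positive - 1, n_studies, assumed_power)
print(f"P(>= {k_positive} positive results | power = {assumed_power}) = {p_value:.5f}")

# A very small value suggests more significant findings than the assumed power
# can account for -- consistent with selective reporting or other bias, though
# the check is crude and hinges on the power assumption.
```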

The time might be ripe for this sort of endeavor. I have the impression that scattered articles on the problems of peer review and on possible statistical biases in scientific research (e.g. by funding source, the file-drawer effect, etc.) are now appearing fairly regularly in Science and Nature.

Three questions I have are: (1) to what extent would it make sense to study *motivated* scientific error semi-separately (as a sub-discipline) rather than as part of statistics and scientific methodology in general? (2) to what extent does such a sub-discipline already exist today? (3) if there is a need for a new sub-discipline, should it be as envisaged here or should it be constructed in a different way?
