Realistically, if a plumber has even heard of the word "rationalist", (s)he is probably an unusually smart plumber, probably bothers to keep track of whether (s)he loses or wins money on average, and might even realise that stocks have a better risk/reward profile than sports betting.

I feel this discussion is very detached from reality. Go to your local gambling venue and you will not find people who have read up on probability theory and rationality and are overconfident. You will find people who don't even know what the word rationality means and are close to innumerate. For some of them, learning even a small amount of rationality is probably beyond reach, but those who could learn about concepts like probability and calibration would probably find it helpful. For example, they might learn to identify the gambler's fallacy and the inverse gambler's fallacy.

Just to disclaim: this comment is not intended to insult people who spend a lot of time and money gambling; I'm just pointing out the reality of the situation. Often a lack of education is to blame for the problems these people go through.


If the plumber thinks to himself "I'm a rationalist", he may well be more likely to bet, because he thinks the usual advice to avoid betting is targeted at those irrational people.


The push-back against this line of argument is (a) that it is actually not very difficult to be rational about a specific, narrow topic if you actually want to be; in fact, it is almost automatic if you are competent; and (b) that there are significant gains to be had from learning a rationality lesson (calibration, overconfidence, noticing your own confusion, etc.) in one area of your life and then applying it elsewhere, rather than wasting the learning opportunity.

Do we find that the majority of good plumbers are irrational within their plumbing work? Or that the majority of taxi drivers are irrational about how to drive a taxi? I think not. But if they go to the local betting shop after work and lose most of their earnings on sports betting machines, then perhaps they would benefit from not following Robin's advice...


Striving for rationality builds good mental habits.

It's humanly impossible to develop broad rationality habits. (Loosely derived from homo hypocritus.)

I think it encourages people to overlook their mistakes when they should be jumping on them.

Largely, the illusions accompanying the label "rationalist" come from the belief that broad rationality is possible.

[Added.] Most importantly: were it possible to develop broad rational habits, nobody would want them! Hypocrisy (hence irrationality) is too useful; in fact, it is absolutely necessary. (That's a direct inference from homo hypocritus.)


The consensus (or at least plurality) view is that willpower is depleted by effort at self-control, which intellectual rationality requires.

My advice differs from Robin's, true. I think he neglects that the vast majority of our willpower is spent on practical matters. Even if we style ourselves intellectuals, we must get writing done. (The conflict between the intellectual and the practical might even be anathema to Robin, whose prediction markets aspire to make the intellectual practical.)

I also think Robin ignores the interconnectedness of ideas, and hence thinks that specializing one's rationality (apart from one's skills) is practicable. I think Robin would want to say that intellectuals can specialize their intellectual rationality and be irrational about (in particular) religion.


But what evidence is there that intellectual rationality depletes willpower?

(Also, if it does, this leads to different advice than what Robin gives, since you can buy more rational beliefs at the cost of rational actions and vice versa).

The EY regime offers to make you rational in your practical life at the same time as it makes you a rational intellectual. I think intellectual rationality comes at the cost of practical rationality; so I reject EY's claims. (Just to be clear.) [EY himself has sometimes demonstrated a remarkable lack of practical rationality, which agrees with my view.]

The main empirical evidence for severe willpower limitation was with us long before Baumeister did his studies. It is (it seems to me) the only way to explain akrasia. (Thus, the term (and concept) "ego depletion" originated with Freud rather than Baumeister.) (See "Akrasia explained" — http://tinyurl.com/arg4ttq )

I think Robin is resisting the implications of homo hypocritus by reducing his claim that rationality is a limited resource to a platitude about limited time. The platitude doesn't support the essay's title.


Hmm. Looks like I tried to specify in too much detail. Here's another shot:

The 'swing away' regime holds if rational thinking is no more costly than biased thinking, and the only constraint is your maximum capacity (growable or not) for rational thought. You still have to budget your thinking, but you should always think as rationally as you can.

The EY regime holds if thinking rationally depletes a separate persistent resource, but investing in growth consistently returns enough to justify the expenditure including opportunity costs.

The RH regime holds if thinking rationally depletes a separate persistent resource and justified growth investments are few. Though there's a continuum from EY to RH.
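To make the three regimes concrete, here is a minimal sketch of the budgeting rule each one implies. Everything in it (the function name, the parameters, the threshold logic) is my own illustrative assumption, not something specified in the thread:

```python
# Toy decision rule for the three regimes sketched above.
# All names and thresholds are illustrative assumptions.

def rationality_to_spend(regime, value, depletion_cost, growth_return):
    """Fraction of your maximum rationality to spend on a question."""
    if regime == "swing_away":
        # Rational thought costs no more than biased thought,
        # so always think as rationally as you can.
        return 1.0
    if regime == "EY":
        # Depletion is real, but growth investments pay for themselves:
        # spend whenever value plus growth covers the depletion cost.
        return 1.0 if value + growth_return >= depletion_cost else 0.0
    if regime == "RH":
        # Depletion is real and growth rarely pays:
        # spend only where the truth matters most.
        return 1.0 if value >= depletion_cost else 0.0
    raise ValueError(f"unknown regime: {regime}")

# A low-value question is worth answering rationally under EY but not RH:
print(rationality_to_spend("EY", value=0.2, depletion_cost=0.5, growth_return=0.4))  # 1.0
print(rationality_to_spend("RH", value=0.2, depletion_cost=0.5, growth_return=0.4))  # 0.0
```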


It isn't clear to me how your framework deals with familiar cases where there are gains to continued focus on particular tasks. If you want to be a good doctor, you need to put in a lot of time learning to be a good doctor. It isn't so much that playing basketball has a depletion or growth effect on doctoring; it is more that spending lots of time and energy learning to play basketball well comes at the cost of time you could have spent learning to doctor well.


Let me try to reframe the debate by characterizing model-space. That is, specifying what empirical facts are disputed and establishing a mapping from their possible values to behavior prescriptions.

I think we can take as an axiom that a given person in a given state facing a given question has a limited capacity to be rational on that question (note that this axiom alone explains why no one can ever be rational about every question, so that fact won't help us decide among models with this axiom). What's disputed is how being rational on one question affects the capacity to be rational on future questions. I would characterize this with two 'parameters': the depletion effect and the growth effect. Also relevant is the supply of questions over time. I'm going to consider models where questions have two properties: how much rationality they require to answer correctly (the difficulty), and how much utility a correct answer yields (the value).

One logical possibility is that the depletion effect is negligible, either because its magnitude is small or zero, or because its duration is shorter than the time before the next question becomes available. In these models, there's no downside to using your rationality on a question, so you should always do it (unless you want to gain from hypocrisy), whether or not there's any growth effect. I'll call this the 'swing away' regime.

Another possibility is that the growth effect is very large on questions of a certain difficulty, and high-value questions of that difficulty are infrequent relative to the duration of depletion, whereas low-value questions of that difficulty are common. In that case it makes sense to work on lots of low-value questions to maximize growth, in addition to some high-value questions to actually generate utility. This is basically what EY is contending, as I read him.

Another possibility is that the growth effect is moderate to nonexistent, the depletion effect is large and long-lasting, and, crucially, high-value questions are frequent. In this case the value-to-difficulty ratio is the most important consideration, and you get something like what RH is arguing.

This isn't an exhaustive classification of possible models, but I think it reasonably captures what's been put forward (the formulation in terms of discrete questions is the biggest weakness).
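For what it's worth, the disputed 'parameters' are easy to play with in a toy simulation. The sketch below is only a minimal illustration under loud assumptions: a single scalar reserve, linear depletion and growth, an invented recovery rate, and made-up numbers throughout.

```python
import random

# Toy simulation of the disputed model: a rationality capacity that
# answering questions draws down (depletion) and builds up (growth).
# Every parameter value here is an illustrative assumption.

def total_utility(capacity, depletion, growth, questions, recovery=0.1):
    """questions: list of (difficulty, value) pairs in arrival order."""
    reserve = capacity
    utility = 0.0
    for difficulty, value in questions:
        reserve = min(capacity, reserve + recovery)  # partial recovery between questions
        if reserve >= difficulty:                    # enough reserve to answer rationally
            utility += value
            reserve -= depletion * difficulty        # depletion effect
            capacity += growth * difficulty          # growth effect
        # otherwise answer with bias: no utility, no depletion (a simplification)
    return utility

random.seed(0)
stream = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(100)]
print(total_utility(1.0, depletion=0.0, growth=0.0, questions=stream))  # 'swing away'
print(total_utility(1.0, depletion=0.8, growth=0.3, questions=stream))  # EY-like
print(total_utility(1.0, depletion=0.8, growth=0.0, questions=stream))  # RH-like
```

Varying the depletion, growth, and question-stream settings then maps directly onto the regimes above.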

Now, what empirical evidence do we have that lets us discriminate among models? Or are different individuals in different regimes, in which case this post is trueish but should be titled "When Not To Be Rationalist".


Returns are initially increasing, then they decrease. So you must focus enough to capture those initial increases, but then not focus too much.


You're probably right, although when you say "our brains are naturally irrational but we're trying to make them adapt and become less so", that may be true for Robin, but I for one am fully aware that things like joy and pleasure depend very much on some measure of irrationality. I think most people are too, and thus would consciously create "irrationality conservation reserves" for topics on which they do not find it very important to have a rational opinion.


"You should spend your rationality budget where truth matters most to you." But there are probably diminishing returns to rationality-effort in any one area. Maybe I, as a plumber, while devoting 80% of my rationality-effort to plumbing, should use the other 20% in miscellaneous other areas, getting more value for my effort even though each of these other areas is less important to me than plumbing. (Or do you think there are increasing returns to area-specific rationality-effort?)


Bueller?


Assertion: if your theory is true, then before deciding on any other X, it is important first to be rationalist about which X to be rationalist about.

Given the sheer number of topics to choose from, I doubt that most people have sufficient 'rationalist' budget to do this well.
