10 Comments

If someone views a certain belief as (most likely) true, while also thinking that his holding a contrary belief would be more beneficial to him, he probably cannot make himself believe the latter: we do not have direct voluntary control of our beliefs. But the mind does have layers, and in extreme cases the “deep” mind, which knows the truth, might be able to get the “shallow” mind to believe the convenient falsehood. That is, the person would not be conscious of (deeply) believing the inconvenient truth; his casual (shallow) introspection would seem to show belief in the convenient falsehood. This would then suffice to make him act as if he really believed the latter, without conscious dissembling (which would be a psychic burden). The deep mind is epistemically rational, but the mind as a whole is geared more to personal evolutionary fitness, which may require this kind of layered self-deception.


The problem is universality, a framework that the current Internet business models (with very naive scaling rules) have forced onto us. Otherwise, everyone / every locality / every sub-community can make their own tradeoffs.

There'll never be a universal best solution that everyone agrees on, which is why universality is just a bad assumption for the problem definition.


I'm sure there are beliefs that standard rationality cannot prove or disprove, but which can be shown to be beneficial.

Science has so far mostly focused on beliefs that correspond to (or describe) the material world, and its justification or validation in wider society is that these beliefs about the material world have been shown to be useful. You can view science as an endeavor to find out what beliefs work. Any beliefs established to work, but which haven't been proven or disproven in a material sense, can also be believed.

In fact, I'd say they should be believed. It'd be irrational not to believe things shown to be most useful, when there is nothing disproving them.

Maybe switching to a focus on what works for people and society, rather than what works in an experimental setting, is your way out - even if the two sometimes conflict.


I don’t see any important conflict between satisficing and rationality that is different from the usual resources/quality tradeoff in doing any job.

As for the apparent conflicts between rationality and utility of the kinds you reference, it seems to me that these are overblown, if not nonexistent, once one accounts for the harms done by individual and group conflicts based on opposing delusional belief systems, for the tendency of irrationality, once legitimized, to bleed beyond its original target domain, and for other unintended consequences (including the one you highlight - potentially reduced protection from malicious persuasion). The answer to all of these is rationality: (our best chance of) apprehending and dealing with reality.

Does this mean, for example, that we should never tell white lies to avoid hurting others, nor lie for our survival in an existential scenario? I don’t think so, but I think there is a line to be drawn, and that the building of elaborate delusional belief systems almost always falls on the other side of that line.


See my (new) 2nd to last paragraph in the post.


If people tend to function better with some type of belief, then in theory, with enough careful observation, you can work that out by seeing how people do over time when they hold that belief.

Most people would initially use their own values to judge whether people are doing better or not, but who's to say which values are correct? I would argue that if the people with the beliefs being studied are on average more satisfied than others, and the beliefs are being passed on and surviving, or perhaps growing, on a "free choice" basis, then the beliefs are on average beneficial.

I don't think this is any rejection of the usual methods currently used in science or modern rationality. It sounds to me a lot like the type of reasoning used in economics. It's a way out, isn't it? At least a way to proceed, even if it never settles things completely - but science isn't going to settle things completely either, is it?

It's reasonable to think that logic and easily proven experimental findings will yield very useful and beneficial beliefs, and they do. But some things are just too difficult (or impossible) to decide by such means, or too difficult (or impossible) to apply in the context in which they would be useful. And as you say, we don't work that way - and there are good evolutionary reasons why. So we are stuck with rules of thumb, or even completely unproven rules and beliefs that have survived and been passed on to us, some of which may be logically contradictory. But we can still work out which are more effective - maybe not in the time you'd like, but eventually.

If you're looking for some way to come to conclusions in a debate, use what you know of history and tradition to see whether any claimed usefulness is consistent with what you know has happened. Have people with that belief survived and prospered? Has the belief spread when people were free to choose?


The rules are rules for getting the truth. Of course, we will not follow them if we have other aims.


Seems you are just repeating that the usual rules are implied by the usual ideal assumptions, assumptions which are often not true.


It's an easier question when we are considering rules admitted to be deviations from rationality. The harder question is when people simply assert that treating God, motherhood, and apple pie in a special way is the rational rule.

After all, as you observe, we can't actually prove a priori that one rule is more likely to produce true beliefs than another. So it's ultimately an empirical question which rule is the rational one.

I mean, I agree with you on what the rational rule is in these cases, but I don't think it's super easy to draw that distinction.


I think you are alleging some sort of epistemological problem. I don’t see it.

Yes, “we can often improve our situation via [embracing irrational] beliefs that [positively] influence how others think of us.” Ideally, we would believe the truth while convincingly feigning belief in the socially advantageous falsehood. But if that exceeds our histrionic skill, we may have to put on blinders, adopting an automatic, semi-conscious procedure of embracing the socially desirable belief in certain easily recognizable kinds of situations. But the critic whose aim is simply the truth, without regard for our social status, will rightly reject the procedure by which we formed this belief.

A quite different point: rational belief-formation may be too hard: too complicated, too time-consuming, too expensive. We may do better to rely on a heuristic that quickly and easily gives us an answer that is probably correct. Rationally deriving some new truths from what we already know may cost more than the certain knowledge would be worth, when probably-true belief is available cheaply. To the critic who complains about our use of a heuristic, we can plead guilty with an easy conscience, remarking that if he wants to go the long, hard way of complete rationality he is welcome to do so, and (we hope) to share his results with us. What is too costly for us may not be so for him.

There is nothing here to call into question standard logical, statistical, etc. rules.
