Our world is complex; we try many things, with varying success. And when we look at the distribution of our successes and failures, we often notice patterns. Patterns we then summarize in terms of models and rules of thumb regarding ideal arrangements.
For example, an ideal sentence has a verb and a subject, starts with a capital letter, and ends in a period. An ideal essay has an intro and a conclusion, and each paragraph has an ideal sequence of sentence types. In my old world of Lisp, ideal computer code has no global variables, each function has only a few lines of code and is documented with text, and functions form a near-tree structure with few distant connections.
Ideal rooms have little dust or dirt, few loose items around, and big items are where they appear in a designed floor plan. Ideal job performance follows ideal work schedules and agreed-on procedures. Ideal communication uses clear precise language and is never false or misleading. And so on.
Such simple and easily expressed rules and ideal descriptions can be quite helpful when we are learning how to do things. But eventually, if we are primarily focused on some other outcomes, we usually find that we want to sometimes deviate from the rules, and produce structures that differ from our simple ideals. Not every best sentence has a verb, not every best code function is documented, and the furniture isn’t always most useful when placed exactly according to the floor plan.
However, when we are inclined to suspect each other's motives, we often turn rules of thumb into rules of criticism. That is, we turn them into norms or laws or explicit rules of home, work, etc. And at that point such rules discourage us from deviating, even when we expect such deviations to improve on rule-following. Yes, it is sometimes possible to apply for permission to deviate, or to deviate first and then convince others to accept this. But even so, enforced rules change our behavior.
With sufficient distrust, it can make sense to enforce a limited set of such rules. We may lose something relative to the very best outcomes achievable when people have good motivations, but we gain via cutting the harm that can result from poor motivations. At least when, on average, poor motivations tend to move choices away from our usual ideals.
For example, we humans are complex and twisted enough that sometimes we are better off when we lie to others, and when they lie to us. Even so, we can still want to enforce rule systems that detect and punish lies. Yes, this will discourage some useful lies. But that loss may be more than compensated by also discouraging damaging lies from people with conflicting interests. We can want rules that tend to push behavior toward the zero-lie ideal even when that is not actually the best scenario for us, all things considered.
I recently realized that “rationality” is mostly about such ideal-pushing rules. We say that we are more “rational” when we avoid contradictions, when our arguments follow valid logical structures, when our degrees of belief satisfy probability axioms, and when we update according to Bayes’ rule. But this is not because we can prove that such things are always better.
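(For reference, the coherence and updating ideals named here are just the standard ones: degrees of belief are non-negative, sum to one over exhaustive alternatives, and add across mutually exclusive events, and on seeing evidence $E$ a belief in hypothesis $H$ is revised by Bayes' rule:

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}. $$

Nothing in the argument below depends on more than this standard statement.)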
Oh sure, we can prove such things are better in some ideal contexts, but we also know that our real situations differ. For example, we know that we can often improve our situation via less accurate beliefs that influence how others think of us. (Our book Elephant in the Brain is all about this.) And even accuracy can often be improved via calculations that ignore key relevant information. Our minds and the problems we think about are complex, heuristic, and messy.
Yet if we sufficiently fear being maliciously persuaded by motivated reasoning from others with conflicting motives, we may still think ourselves better off if we support rationality norms, laws, and other rules that punish incoherence, inconsistency, and other deviations from simple rationality ideals.
The main complication comes when people want to argue for us to accept limited deviations from rationality norms on special topics. Such as God, patriotism, romance, or motherly love. It is certainly possible that we are built so as to be better off with certain “irrational” beliefs on such topics. But how exactly can we manage the debates in which we consider such claims?
If we apply the usual debate norms on these topics, they will tend to support the usual rational conclusions on what are the most accurate beliefs, even if they also suggest that other beliefs might be more beneficial. But can people be persuaded to adopt such beneficial beliefs if this is the status of the official public debates on those topics?
However, if we are to reject the usual rationality standards for those debates, what new standards can we adopt instead to protect us from malicious persuasion? Or what other methods can we trust to make such choices, if those methods are not to be disciplined by debates? I’m not yet seeing a way out.
If someone views a certain belief as (most likely) true, while also thinking that his holding a contrary belief would be more beneficial to him, he probably cannot make himself believe the latter: we do not have direct voluntary control of our beliefs. But the mind does have layers, and in extreme cases the “deep” mind, which knows the truth, might be able to get the “shallow” mind to believe the convenient falsehood. That is, the person would not be conscious of (deeply) believing the inconvenient truth; his casual (shallow) introspection would seem to show belief in the convenient falsehood. This would then suffice to make him act as if he really believed the latter, without conscious dissembling (which would be a psychic burden). The deep mind is epistemically rational, but the mind as a whole is geared more to personal evolutionary fitness, which may require this kind of layered self-deception.
The problem is universality, which is a framework that the current Internet business models (with very naive scaling rules) have forced onto us. Otherwise, everyone / every locality / every sub-community can make their own tradeoffs.
There'll never be a universal best solution that everyone agrees on, which is why universality is just a bad assumption for the problem definition.