My Hopes For Rationality
When I wrote “politics isn’t about policy”, I actually didn’t know how many respected thinkers (e.g., Machiavelli, Hume, Marx, Weber, Pareto, Mosca, Schmitt, Schumpeter) had long before also concluded: politics is only to a minor degree a process whereby we adopt policies that are effective at achieving realistic goals that we have agreed together to achieve. Instead, politics is far more driven by slogans, symbols, signaling, and a great many other complex social processes that only somewhat, and more as a side effect, achieve explicit shared goals. This description seems even more true of the processes by which we manage our social norms and other cultural expressions.
Let’s call people “rational” re certain decisions if they explicitly and realistically consider their goals, options, and complications to estimate which options best achieve those goals in the presence of those complications. This will typically encourage them to have relatively consistent, coherent, and evidence-based values and beliefs. When are we more vs less rational?
Human rationality seems to peak at modest scale decisions, and to fall greatly for much smaller or larger decisions. At one extreme, we make most small decisions unconsciously, like where to step next as we walk or the next word to say as we speak. At the other extreme, we tend to be conscious but not so rational re our big personal decisions, like choice of career or marriage partner. And as reviewed above, we are even less rational re our collective decisions, such as government policy, especially regarding larger scopes and time scales. For these big decisions, we are emotional and social, and greatly influenced by symbols, rhetoric, and the sacred.
However, we rationalize far more decisions than we make rationally. When asked to explain or justify most decisions, we typically try to account for our processes and outcomes in rationality terms. As few decisions are made entirely at random, such rationalizations can usually make some sense of them. And we are usually uncritical enough of such rationalizations to accept these accounts. But careful studies usually show otherwise.
If we think that our explicit goals could often correlate greatly with our real goals, and that our beliefs could often correlate greatly with what our evidence actually supports, then we should want our societies to be more rational, especially re their biggest decisions, as we should expect explicit rational analysis to greatly help us actually achieve those goals.
Those of us who are especially good at rationality should be especially eager for this, as we can see just how much we are losing now by not doing this, and as we’d likely be valued more highly in such a scenario. However, our deep habit of rationalization makes it hard to actually pursue the goal of increasing our rationality, as we are quite prone to believe that we have succeeded when we actually have not.
Human specialists seem to be especially rational in engineering and finance, at least when their goals are clear and technical. So you might think we could make the rest of the world more rational if we induced them to ape the styles common in engineering and finance analysis. But alas it seems hard to generalize engineering and finance analysis methods to apply well to most other topics.
Both academia and journalism claim to constrain themselves to follow methods that promote rationality on a much wider range of topics than do engineering and finance methods. And specialists in these areas do obtain unusually high prestige, and wide audiences. But while they may in fact make our world more rational in its big choices than it would otherwise be, their influence is clearly also quite limited; politics remains mostly not about policy.
When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and did bond this community together, its members alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders.
Having long been an academic, I’ve long been skeptical about the potential for style and method norms, which academia has in abundance, to increase rationality. Such norms can be quite effective at channeling prestige to those who can master difficult methods, but seem much less effective at hindering people from rationalizing their non-rational beliefs and actions.
My economics training makes me much more optimistic about creating institutions, such as prediction markets, that induce strong incentives for rationality on a wide range of topics. The problem here is that we need most people to be okay with allowing such institutions to legally exist, and to respect market price estimates enough to use them as a guide to action. We also need enough people, perhaps just a small minority, to sufficiently subsidize such markets on important questions.
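The post doesn’t name a specific mechanism, but the standard way to subsidize a prediction market with a sponsor loss that is capped in advance is a market scoring rule. Below is a minimal sketch, in Python, of a logarithmic market scoring rule (LMSR) market maker; the class name, the liquidity parameter b, and the example trade are illustrative assumptions, not anything specified in the post.

```python
import math

class LMSRMarket:
    """A minimal logarithmic market scoring rule (LMSR) market maker sketch.

    A sponsor who runs this market maker subsidizes trading, and their
    worst-case loss is bounded in advance by b * ln(number of outcomes).
    """

    def __init__(self, outcomes, b=100.0):
        self.b = b  # liquidity parameter: larger b means a deeper market, bigger subsidy
        self.q = {o: 0.0 for o in outcomes}  # net shares sold of each outcome

    def _cost(self, q):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b)).
        return self.b * math.log(sum(math.exp(x / self.b) for x in q.values()))

    def price(self, outcome):
        # Instantaneous price of an outcome, which is also the market's
        # current probability estimate for that outcome.
        total = sum(math.exp(x / self.b) for x in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function; each share pays
        # out 1 unit if its outcome occurs.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

    def max_subsidy(self):
        # The most the sponsor can lose, i.e. the subsidy they must commit.
        return self.b * math.log(len(self.q))


# Illustrative usage: a binary question with a single trade.
market = LMSRMarket(["yes", "no"], b=100.0)
print(f"initial p(yes) = {market.price('yes'):.2f}")    # 0.50
paid = market.buy("yes", 50.0)                          # buy 50 'yes' shares
print(f"trade cost = {paid:.2f}, new p(yes) = {market.price('yes'):.2f}")
print(f"sponsor's max loss = {market.max_subsidy():.2f}")  # 100 * ln(2), about 69.31
```

The liquidity parameter b is the sponsor’s knob: a larger b makes prices move less per share traded, deepening the market, but raises the worst-case subsidy of b * ln(n).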
However, given how widespread and powerful rationalization is, I can’t be very optimistic that we will adopt such institutions because we agree we want more rationality. Even so, there’s a lot of randomness in which institutions get adopted where and when, and there’s reason to hope that cultural selection may favor more rational institutions, at least in competitive contexts.
So I still have hope for the idea that some big region might adopt futarchy as a form of governance, and then fix cultural drift by tying futarchy to a sacred long-term goal, like space colonization, that is incompatible with civilization collapse. In this scenario, elites would need to give at least lip service to adaptation as a goal, but would not need to be especially committed to that goal, or to rationality. Their institutions might instead do that work for them.


What is "rationality"? Rationality doesn't exist. Or rather, it only exists in domains like math and engineering where some problems are defined in tractable ways.
To apply "rationality" to any decision broader than this is a category error. The "rationalist ideologies" that purport to do so – Marxism, Libertarianism, Effective Altruism, and so on – are as likely to go astray as they are to generate anything of value. Worse still, they tend to ignore contradictory evidence that might cause them to self-correct because, hey, you can't argue with rationality! We're above reproach!
Many experts spend their lives mulling over simplified models of irreducibly complex phenomena, and grow so enamored of those models that they fail to grasp the brittleness of their conclusions. Their conclusions aren't *wrong* – there is real value in expertise after all – they just aren't as right as they think they are. Average people often see this more clearly than experts do.
Re. "When it started in ’06, this blog was near the center of the origin of a “rationalist” movement wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress others, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders."
I agree, and I think we can pin down more precisely what happened. I think the main problem with the rationalist movement was that "rationalism" doesn't mean what people think it does. It doesn't mean thinking effectively, or making good predictions. It means taking geometry as a model for all human thought. This rests on a lot of terribly wrong assumptions, like:
- all thoughts and reasons can be represented in human language
- every word has a single Platonic definition, which applies to all uses of that word, so we don't have to pay attention to how we're using each word /at the moment/
- every word has a single Platonic definition, so symbolic AI will work
- every sentence in English is either True or False
- reasoning should be deductive
- if you reason deductively, you can be 100% certain of your conclusions
The rationalist community didn't consciously believe any of these things except the symbolic AI one; but it didn't worry about them, either; at least not compared to the phenomenologists and subjectivists in the continental philosophy tradition. Instead of using their knowledge of Bayesian reasoning, iterative optimization, information theory, and statistics to show how to resolve continentalism's paradoxes, they used the fact that all continental philosophers are ignorant of these things to dismiss the valid points those philosophers had made about the failures of strict rationalistic thought.

They allowed rationalism to leech into their thought from the universal background Rationalism of Western civilization, and fell into the usual bad habits of rationalism: overconfidence, moral certainty, perfectionism, groupthink, and cults. They thought they could solve all their epistemological problems by focusing on things like how to set their priors, and ignored questions like how words mean, what values are and how a mind, person, or group "has" them, and why LLMs don't use Bayesian neural networks.
That finally led many to the most common terminal failure mode of rationalism: the kind of irrationalism you get when you know rationalism doesn't work, yet keep all of its wrong assumptions, and conclude that either the physical world isn't the real world, or else the world is broken on a metaphysical level. E.g., the post-rat Buddhism, mysticism, spiritualism, phenomenology, and post-modernism that I've seen at Fluidity Forum and Vibecamp.