30 Comments
Jack:

What is "rationality"? Rationality doesn't exist. Or rather, it only exists in domains like math and engineering where some problems are defined in tractable ways.

To apply "rationality" to any decision broader than this is a category error. The "rationalist ideologies" that purport to do so – Marxism, Libertarianism, Effective Altruism, and so on – are as likely to go astray as they are to generate anything of value. Worse still, they tend to ignore contradictory evidence that might cause them to self-correct because, hey, you can't argue with rationality! We're above reproach!

Many experts spend their lives mulling over simplified models of irreducibly complex phenomena, and grow so enamored of those models that they fail to grasp the brittleness of their conclusions. Their conclusions aren't *wrong* – there is real value in expertise after all – they just aren't as right as they think they are. Average people often see this more clearly than experts do.

TGGP:

> What is "rationality"?

"Let’s call people “rational” re certain decisions if they explicitly and realistically consider their goals, options, and complications to estimate which options best achieve those goals in the presence of those complications."

Jack:

This seems like a reasonable definition. It also kicks the can down the road if we're being honest.

What is the "goal" of a human being? Of an economy? Typically people use simple proxies like income, productivity, or lifespan for these questions – because otherwise the problem is intractable. It's like looking for your car keys under the lamp post because that's where the lighting is good.

For example: Is population decline bad for people? We get a lot of very confident answers to this question based on the "goal" being some kind of macroeconomic measure. But does this goal capture the truth?

Stephen Lindsay:

That’s exactly it. What is the "goal" of a human being? What is the purpose of life? Until our society can form a consensus on these questions (Carl Trueman calls it the need for “a new humanism”), there can be no satisfying “rational” solutions to social problems.

Phil Getts:

Re. "When it started in ’06, this blog was near the center of the origin of a “rationalist” movement wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress others, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders."

I agree, and I think we can pin down more precisely what happened. I think the main problem with the rationalist movement was that "rationalism" doesn't mean what people think it does. It doesn't mean thinking effectively, or making good predictions. It means taking geometry as a model for all human thought. This makes a lot of terribly wrong assumptions, like:

- all thoughts and reasons can be represented in human language

- every word has a single Platonic definition, which applies to all uses of that word, so we don't have to pay attention to how we're using each word *at the moment*

- every word has a single Platonic definition, so symbolic AI will work

- every sentence in English is either True or False

- reasoning should be deductive

- if you reason deductively, you can be 100% certain of your conclusions

The rationalist community didn't consciously believe any of these things except the symbolic AI one; but it didn't worry about them either, at least not the way the phenomenologists and subjectivists in the continental philosophy tradition did. Instead of using their knowledge of Bayesian reasoning, iterative optimization, information theory, and statistics to show how to resolve continentalism's paradoxes, they used the fact that all continental philosophers are ignorant of these things to dismiss the valid points those philosophers had made about the failures of strict rationalistic thought.

They allowed rationalism to leech into their thought from the universal background Rationalism of Western civilization, and fell into its usual bad habits: overconfidence, moral certainty, perfectionism, groupthink, and cults. They thought they could solve all their epistemological problems by focusing on things like how to set their priors, and ignored questions like how words mean, what values are and how a mind, person, or group "has" them, and why LLMs don't use Bayesian neural networks.

That finally led many to the most common terminal failure mode of rationalism: the kind of irrationalism you get when you know rationalism doesn't work, yet keep all of its wrong assumptions, and conclude that either the physical world isn't the real world, or else the world is broken on a metaphysical level. E.g., the post-rat Buddhism, mysticism, spiritualism, phenomenology, and post-modernism that I've seen at Fluidity Forum and Vibecamp.

Xpym:

>the phenomenologists and subjectivists in the continental philosophy tradition

Much of the blame is due to them as well. They engage in intentional obscurantism to guard their status, so what real insights they have aren't readily available to outsiders, and even most insiders are confused about the extent to which their movement has resolved philosophical problems. Pomo's anti-rationality reaction went too far and threw the baby out with the bathwater, and sadly no substantial alternative has yet emerged that takes the best of both worlds and charts a realistic way forward.

Phil Getts:

I think the alternative--I would even say the solution--has emerged: deep neural networks. If you learn how they work, not at the Scientific American level, but enough to write your own DNN, you'll see how to do epistemology in a way such that the problems alleged by post-modernists never emerge in practice. Typically there turns out to be a 1-in-100 to 1-in-a-billion chance of hitting the worst-case scenario (e.g., sensitivity to priors) that post-modernists treat as the usual case. There is so much more sense data, and so much more correlation and regularity in it, than pomos think there is, because people are so much BIGGER, physically, than they think they are. There are more atoms in your body than stars in the Universe, which lets massive data streams and statistical effects overwhelm the ambiguities pomos worry about. But people in the humanities have never grasped how big 6.022 x 10^23 is.
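
A minimal sketch of the prior-sensitivity point, added here as an illustration (a toy Beta-Bernoulli coin model; the specific priors and sample sizes are assumptions, not anything from the comment): two sharply different priors converge to nearly the same posterior once the data stream is large.

```python
# Toy illustration (assumed Beta-Bernoulli model, not code from the thread):
# two very different priors on a coin's bias converge to nearly the same
# posterior mean once enough flips arrive, so "sensitivity to priors"
# washes out with data volume.
import random

def posterior_mean(alpha, beta, flips):
    # Conjugate Beta-Bernoulli update: each head adds 1 to alpha,
    # each tail adds 1 to beta; posterior mean = alpha / (alpha + beta).
    heads = sum(flips)
    tails = len(flips) - heads
    return (alpha + heads) / (alpha + beta + heads + tails)

random.seed(0)
true_bias = 0.7
flips = [1 if random.random() < true_bias else 0 for _ in range(100_000)]

for n in (10, 1_000, 100_000):
    flat = posterior_mean(1, 1, flips[:n])     # flat, uninformative prior
    wrong = posterior_mean(50, 2, flips[:n])   # confidently wrong prior (~0.96)
    print(f"n={n:>7}: flat prior -> {flat:.3f}, wrong prior -> {wrong:.3f}")
```

At n = 10 the two posteriors disagree badly; by n = 100,000 they agree to three decimal places.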

The problem is how to communicate this information to the humanities. I've tried, and I've found that people in the humanities just get indignant when I make the claim; they never even look at the math. I've thought about writing rebuttals of various pomo arguments, but it seems pointless: the people who need to listen have repeatedly failed, spectacularly, to recognize obvious correct solutions to much simpler problems (e.g., the Raven "Paradox"); at best they write lengthy, jargon-filled responses that don't make sense; and the only way to appreciate that the mysteries have been solved is either to understand the math, or to run a lot of simulations and see that they all work.

It should already be obvious that self-driving cars and LLMs are solving the problems that post-modernists said are unsolvable, without getting trapped in hermeneutic circles; and that our perceptions of the world are massively overdetermined, not underdetermined, by sense data. Yet none of them admit this.

Xpym:

Hmm, to me the biggest practical problem with LLMs is that the current paradigm doesn't seem straightforwardly extendable to continuous learning. Sure, it's probably good enough to automate trivial jobs already established in the rational civilization, but the promises of superintelligence in a couple of years (or even decades) sound exceedingly silly.

Phil Getts:

It isn't necessary to produce superintelligence to show that LLMs, starting from random priors, reliably form abstract concepts which map cleanly onto existing human ontologies, from unlabelled data, using unsupervised learning. It doesn't matter if the "Ding an sich" is unattainable. The LLM surely isn't accessing it, but communicates clearly with humans anyway. It doesn't matter that we never observe causality, as Hume complained; LLMs never observe causality on the level of weights, yet understand and use causality on the level of concepts. Skinner was right and Chomsky was wrong; LLMs use nothing but behavioristic reinforcement, and learn language. Chomsky was also wrong about the "poverty of the stimulus". All the objections to the possibility of objectivity have been defeated already.

Xpym:

Well, I have never been seriously tempted by pomo nihilism anyway, so to the extent that LLMs are evidence for common sense, good for them. I'm still concerned that they aren't enough to chart a realistic way forward, and the problems of rationalism that you mentioned (overconfidence, moral certainty, perfectionism, groupthink, and cults) seem all too present among DNNs' "true believers".

Phil Getts:

Oh, when I said they were the solution, I meant to the mysteries of epistemology, not to questions like how to develop sane or safe AI.

Catherine Caldwell-Harris:

Robin, you wrote a book arguing that humans aren't rational, because we are preoccupied with social status and use our reasoning powers to justify narrow self-interested pursuits (as Haidt also argued in *The Righteous Mind*).

Robin Hanson:

Yes, I'd still like us to become more rational, at least re our biggest choices.

Tim Tyler:

Re: "While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders." Thinking of anything in particular?

James M.:

I've come to believe that many elites will only change their attitudes (to better conform with reality) if status and financial incentives push them to do so. People believe what they want to believe, and for more educated people that's usually whatever makes them feel and look like a nice person, unfortunately.

https://jmpolemic.substack.com/p/intransigence

MBKA:

I am not sure what to make of this. It seems like a category error.

First off, what is politics? By my understanding, politics is the process of negotiation between stakeholders (or their representatives) whereby, ideally, most stakeholders get what they want most of the time. This is achieved by helping other stakeholders (or their representatives) get what they want. Politics is therefore naturally a process where unrelated or even conflicting goals are achieved through the forming of alliances between stakeholders whose goals only partially overlap. You scratch my back and I scratch yours. It may look unsavoury at times, but it is unironically man's greatest achievement: optimally balancing societal interests.

Second, what is rationality? Again in my understanding, rationality is a state of affairs in which the means employed to achieve stated goals are optimally chosen and deployed, and do not conflict with those goals.

In other words, politics is about the process of achieving the social support and collaboration needed to pursue goals, not about the content of those goals or the means of achieving them (policies). Rationality is about achieving goals by optimal means (policies), not about the reasonableness of the goals themselves or the process of finding social support (politics).

Goals are just goals; they are not defined by reasonableness. Goals may be inconsistent with each other, but in and of themselves they are orthogonal to rationality. Policies are orthogonal to politics unless, once again, there is inconsistency or contradiction. And rationality purely asks: what are the optimal means to achieve a goal? Not: how do you like the goal? A sentence of the form "Goal X is reasonable" makes no sense. A sentence of the form "It is not reasonable to have goal X if you also have goal Y, as these goals conflict with each other" does make sense. A sentence of the form "It is not rational to deploy means A to achieve goal X, as means A conflict with [or: are ineffective in] achieving X" also makes sense. But really... we can't expect from politics that it be "rational" or achieve everything we want.
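
In standard decision-theoretic notation, this reading can be put schematically (a gloss added here, not notation from the comment): the goal fixes the utility function U, and rationality selects only the action.

```latex
% Rationality as optimal choice of means, with the goal held fixed:
% U encodes the goal; rationality picks the action a, never U itself.
a^{*} \;=\; \arg\max_{a \in A} \; \mathbb{E}\!\left[ U(s) \mid a \right]
      \;=\; \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)
```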

Politics is about forming coalitions to get help to achieve most of what we want, by helping others to also achieve what they want.

Alexander Kurz:

We usually think that it is rational, as humans, to try to prolong our lives. Would you say that it is rational to aim for this? Or is the aim itself outside rationality, so that all we can say is that, given that we want to prolong our personal lives, some actions will be more rational than others?

Robin Hanson:

Seems like an argument over definitions of words. I'm not much interested in those.

Steersman:

> "then fix cultural drift by tying futarchy to a sacred long term goal, like space colonization ..."

We have certainly "drifted" from various sacred religious goals, though those have always been poorly or self-servingly defined -- schadenfreude for Christians, and brothels for Muslim men.

But rather doubt that "space colonization" is really the ticket to the promised land -- I've often argued that Star Trek's "final frontier" is not in some "galaxy far, far away" but between our ears.

But that reminds me of a classic from Stuart Kauffman, "Reinventing the Sacred" -- it's been a while since I read it, but I expect it was of a piece with his credible efforts on emergence and the mechanisms undergirding evolution:

https://en.wikipedia.org/wiki/Stuart_Kauffman

Robin Hanson:

I did, and reported on, polls on what goals people thought could work here.

Steersman:

Still rather doubt that your "space colonization" is the ticket, despite Musk's best efforts to the contrary ...

There aren't even any "heathens" out there to convert, presumably, much less savages to sell trinkets to ...

James Hudson:

Space colonization is unlikely to become sacred, unless it comes to be thought instrumental to some inherently sacred goal. The survival of *homo sapiens* seems the most basic sacred social goal, though that may change with the experience of AI.

Robin Hanson:

I did, and reported on, polls on what goals people thought could work here.

John Ketchum:

Great article. Here's a problem: People who are unusually skillful at reasoning also seem to be exceptionally adept at rationalizing.

Robin Hanson:

Indeed

Jonathan Lalljee:

This was a great read. I appreciated being able to understand that there is a difference between rationality and rationalizing.

Catherine Caldwell-Harris:

Regarding: "Human specialists seem to be especially rational in engineering and finance, at least when their goals are clear and technical. " I've wondered if this is an outcome of being part of a specialized ingroup, with similar training, culture and worldview, facing off against others with equally developed rationality tools. If one deviates from rationality, others in the group pounce. So everyone adheres to the rationality rules or else risks being booted out. The positive aspects of scientific peer review are also like this; one can't publish unless peers agree that sufficient rigor was exercised.

George Shay:

Part of the reason for this is that human existence cannot be reduced to pure rationality.

Robin Hanson:

I'm not proposing to do so.
