33 Comments
Jack (Jul 2, edited)

Our evolution happened in a particular social context, and at a particular spatial and temporal scale. This left us generally unable to intuit things like quantum mechanics and cultural evolution.

One clear failing is that we tend to underestimate our ability to respond to slow-moving changes. We imagine their effects as if they were happening quickly, because that is what we can easily visualize. This affects climate policy, where people envision, for example, large-scale starvation as farms become unproductive; but the changes will happen over decades, and productive farmland will simply migrate north.

I wonder how much of the pessimism over fertility decline will prove to be similarly misplaced – an artifact of our inability to reason beyond first-order impact.

Wes

It takes a rationalist to write the best critique of reason's limits applied to culture. Thank you for covering this topic!

Thomas Colthurst

I took Joseph Henrich's _The Secret of Our Success_ (which you commented positively on at https://www.overcomingbias.com/p/how-plastic-are-valueshtml) to be an argument that for human macro cultures, inner evolution is more important than outer evolution.

To take one example from that book, it's hard to imagine outer evolution producing (in reasonable amounts of time) the complex, multi-step process for making cassava safe to eat.

Dean Rock

Will AI be better at weighing all the factors?

barnabus

No. I think Cremieux (?) has looked at AI programs reviewing applications, and ALL of them preferred applications where the applicant had a female first name. So obviously: MSM published garbage in, garbage out. It's a poisoned-well problem.

Brett Stephens

Almost everything we believe outside of what we immediately observe is based on faith rather than reason. I never dropped a bowling ball off a roof or shot alpha particles through gold foil. I trust that someone else did and that it proves what they say it does. Basically I believe it because everyone else believes it, and I either can't or have no desire to test it. But in that sense I'm no different than any person in any age. The way the knowledge was transferred to me is the same also. My physics teacher didn't do those experiments. He read it in a book, no different than the monk from the 13th century who would have told you the sun revolves around the earth.

Phil Getts

"And innovation is overall faster when units are smaller, suggesting between innovation matters more than inside innovation."

I didn't follow fully enough to be sure that this is relevant, but individual selection acts faster when units are smaller because smaller species have shorter lifespans. Speaking information-theoretically, most evolution happened in bacteria. Bigger animals are, comparatively speaking, evolutionary dead-ends; their evolution slows to a crawl as they get bigger and reproduce more slowly.
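To put rough numbers on the generation-time point (ballpark figures I'm assuming purely for illustration, not taken from the post or any dataset), here is a minimal sketch of how many rounds of selection different organisms get per century:

```python
# Rough illustration of how generation time sets the number of selection
# rounds available per unit of calendar time. All figures below are
# ballpark assumptions for illustration only.

generation_days = {
    "E. coli (lab conditions)": 0.02,   # ~30 minutes per generation
    "fruit fly": 14,                    # ~2 weeks
    "mouse": 90,                        # ~3 months
    "elephant": 25 * 365,               # ~25 years
}

century_days = 100 * 365
for name, gen in generation_days.items():
    print(f"{name:>25}: ~{century_days / gen:,.0f} generations per century")
```

On these made-up but order-of-magnitude-plausible numbers, a bacterium gets millions of selection rounds in the time an elephant lineage gets a handful, which is the sense in which bigger, slower-reproducing units evolve at a crawl.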

Robin Hanson

I mean "smaller" in terms of the number of things in the unit, not the physical size.

Phil Getts

You justified it by saying "We have good data showing this for species and corporate cultures." I assume that meant species sorted by size of the individual organism, not by size of the social group. I don't think you should include that data as justification, because that difference is due to reproduction rate.

It would be interesting to look at the data for corporate cultures, and see if there's a way to measure "reproduction rate". It would help a lot to have some footnotes there telling us where the data is, because your argument depends strongly on that claim, and it's one I've not heard before.

Max Meursault

Not to shamelessly plug, but my blog discusses this topic at length. I wrote a very similar post a few years ago here:

https://open.substack.com/pub/minordissent/p/rationalism-is-gay-and-even-rationalists?utm_campaign=post&utm_medium=web

John Hamilton

In general, I agree that for very large cultures our reasoning is data-poor, so we should be much more hesitant to tinker with deep norms just because an argument sounds good. This is a plausible limit of reason. Having said that, does it apply here? I would argue the deeper problem is not a lack of data. Rather, people keep acting against data and against reason.

We have multiple good examples of major cultural changes that resulted in obvious major problems, problems that could have been easily ascertained ex ante by common-sense reasoning. For example, did we need Lenin and the other Bolsheviks to take over Russia in order to know empirically that such a system would not work very well? Ex ante, most people would have predicted that Lenin's system, as described by Lenin, would not work very well. Sharper thinkers could even predict that his system would lead to disasters of the kind seen when it was applied. And yet here we are in 2025, when a socialist--the same type or kind of socialist who would have been at least sympathetic to Lenin in 1917--can get the Democratic mayoral nomination for NYC. Likewise, the problems associated with rent control seem obvious, but many people still favor rent control. Many countries in the West have even adopted it. Is that an indictment of reason? The hurdle in these examples was not a lack of data. If such data even existed, it would have been ignored.

I did conduct some brief research on whether anyone predicted a fertility collapse ex ante because of the following cultural changes: the acceptance of contraception & abortion, female education, female careers, equal female property rights, other egalitarian gender norms, and secularization. Anthony Comstock (~1870s) explicitly argued that the acceptance of contraception & abortion would decrease fertility (his exact words were more colorful—widespread contraception would lead to “race suicide”). Jean-Jacques Rousseau in Book V of Emile (~1760s) criticized Plato for suggesting equal education of the sexes in the Republic. Rousseau did not dwell on the point, but he argued, “Finally all [of these maternal duties and dispositions] should not be a matter of virtue but of inclination, without which the human species would soon be extinct." Per secondary sources, Auguste Comte also argued equality of the sexes would imperil the continuance of humanity as early as the 1850s (it appears he wrote this in response to J.S. Mill’s work).

In the first half of the twentieth century, numerous thinkers worried about demographic decline (e.g., Adolphe Landry, Warren Thompson, John Maynard Keynes, Alvin Hansen, Frank Notestein, and Kingsley Davis). Sometimes they even explicitly linked it to the maladaptive cultural changes you worry about. Probably the best example is La Question de la population by Paul Leroy-Beaulieu. I have not read the book, but apparently Book III explicitly argues that education, egalitarian social norms, and secularism have all led to the plunge in birth rates. But this came after the data. So reason worked for Comte—and apparently not for Mill. (Interesting to think about how Tyler Cowen thinks Mill's The Subjection of Women is one of the greatest books ever written, while your recent work would point to it as—as the kids say—an infohazard, a work literally assisting the downfall of civilization.)

Overall then—even after we had data to review changes to deep norms of very large cultures—no one cared or listened. Again, I do not think the problem here lies with reason’s limitations. The problem is that people do not act according to reason.

Robin Hanson

The fact that some people drew good conclusions doesn't mean that reason reliably leads to those conclusions.

Xpym

>People either assume that such changes typically increase adaptiveness, or that they don’t care much about adaptiveness, relative to such things.

Indeed. You seem to value adaptiveness unusually highly.

Robin Hanson

You have to value adaptiveness if you value long term influence.

Xpym

That's unusual too; most people hyperbolically discount.
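For concreteness, a minimal sketch of the standard single-parameter hyperbolic form against exponential discounting (the parameter values here are my illustrative assumptions, not estimates of anyone's actual discount rates):

```python
# Exponential vs. single-parameter hyperbolic discounting of a reward
# worth 1 at delay D (in years). Parameter values are illustrative only.

def exponential(delay, annual_factor=0.97):
    return annual_factor ** delay

def hyperbolic(delay, k=0.5):
    return 1.0 / (1.0 + k * delay)

for d in (0.1, 1, 10, 100):
    print(f"delay {d:>6} yr: exp={exponential(d):.3f}  hyp={hyperbolic(d):.3f}")
# Hyperbolic weights fall steeply at short delays and then flatten out
# (the present-bias pattern), so near-term trade-offs dominate choices
# even when the far future is not formally ignored.
```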

Paul Taylor

Professor Hanson, this is an off-topic but timely question. What do you think of the policy of "Trump accounts" described in a White House press release as follows: "Trump Accounts for newborns will be seeded with a one-time government contribution of $1,000. The accounts will track a stock index and allow for additional private contributions of up to $5,000 per year."

Robin Hanson

Every little bit should help, but that's a little bit.

David Manheim

Large parts of the dynamics of slow versus fast adaptation and cultural change being pointed to here are indisputably correct, and the point about the limits of reason is well made. At the same time, I think it would be easy to overinterpret the limit as fundamental rather than contingent on our degraded and competitive epistemic and memetic environment. That is, I think some people can reason relatively well, and appropriately modestly, about the likely long-term consequences of social changes and their uncertainties, but those aren't the voices we hear today.

Of course, part of the reason for this is that we've built a maladaptive epistemic environment, which reinforces Robin's claim that our limited ability to make adaptive changes is insufficient. But as I pointed out a few months ago (https://exploringcooperation.substack.com/p/the-fragility-of-naive-dynamism), I view the current lack of adaptation as primarily a result of the increased *speed* of change overwhelming our collective ability to adapt, rather than a fundamental limit on our ability to reason about such changes.

Robin Hanson

"degraded" makes it sound like we were once better at aggregating info to make decisions on such issues. Not clear it was ever better.

Also, while the rate of environmental change is in fact one of the parameters that has gotten worse for cultural evolution, three other key parameters have also gotten worse.

David Manheim

I think that elites making decisions were in fact obviously better at making long-term adaptive changes a century or two ago, when there was far less information aggregation but also far less short-term pressure on government decisions. The text of presidential speeches and debates shows far more focus on longer-term impacts and on the importance of culture then than do most of the more insightful political commentators and influencers today.

Kevin Bjorke

Surely this is in large part due (in the US and similar countries) to changes in the way those decision-making elites have come to be chosen, via megaphones, radio, TV and podcast.

Nicholas.Wilkinson

Doesn't this argument rather undermine itself, in that it requires you to reason out what is adaptive?

Robin Hanson

No, you can just reason about the key parameters of the cultural evolution process, and use that to draw conclusions about whether it is likely suffering from maladaptive drift.
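As a toy illustration of how such parameters can matter, here is a minimal Wright-Fisher-style sketch with made-up parameters (my own illustrative assumptions, not a model from the post): when the number of competing cultural units N is small relative to 1/s, drift rather than selection tends to decide which variant wins.

```python
import random

# Toy Wright-Fisher-style sketch: with N competing cultural "units" and a
# selective advantage s for the more adaptive variant, drift dominates
# roughly when N*s is small and selection dominates when N*s is large.
# All parameters are made-up illustrations, not estimates of real cultures.

def less_adaptive_fixation_rate(n_units, s=0.02, trials=500, start=0.5):
    """Fraction of runs in which the LESS adaptive variant takes over."""
    losses = 0
    for _ in range(trials):
        p = start  # frequency of the more adaptive variant
        while 0.0 < p < 1.0:
            # deterministic selection step, then random resampling of N units
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            p = sum(random.random() < p_sel for _ in range(n_units)) / n_units
        losses += (p == 0.0)
    return losses / trials

for n in (10, 50, 200):
    rate = less_adaptive_fixation_rate(n)
    print(f"N={n:>4} (N*s={n * 0.02:.1f}): less adaptive variant wins ~{rate:.0%} of runs")
```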

AveragePCuser

You can reflect on your object-level reasoning using meta-level reasoning to decipher which object-level methods work better in service of certain ends. I think Robin Hanson is critiquing the object-level use of reason specifically for basic values and norms, in disregard of what he calls "between evolution".

Stephen Lindsay

“they don’t care much about adaptiveness, relative to such things.” True.

Just look at the argument in utilitarian population ethics between those who want to look at “total” versus “average” utility. For me (and I think one who values adaptiveness), it is pretty obvious that at similar levels of individual utility, more is better. But not everyone agrees. It comes down to a difference in values - whether one values adaptiveness or not.
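To make the total-versus-average disagreement concrete, a tiny sketch with utility numbers I made up purely for illustration:

```python
# Toy "total vs. average utility" comparison for two hypothetical
# populations; the utility numbers are invented purely to show how the
# two criteria can disagree.

populations = {
    "small and very happy": [9] * 3,            # 3 people at utility 9
    "large and slightly less happy": [8] * 30,  # 30 people at utility 8
}

for name, utils in populations.items():
    total = sum(utils)
    avg = total / len(utils)
    print(f"{name:>30}: total={total:>3}  average={avg:.1f}")
# Total utilitarianism favors the larger population here (240 > 27),
# while average utilitarianism favors the smaller one (9.0 > 8.0).
```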

Robin Hanson

But empirically, do total utilitarians actually value adaptiveness more overall?

Stephen Lindsay

A society with culture/policy that is set up to maximize total utility in the long run would be a high fitness / adaptiveness society. Total utilitarians should value adaptiveness. But maybe what they really value in practice is different.

name12345

It feels like a great step would be to incentivize small cultural groups to break off to uninhabited land, islands, or seasteads through tax credits, military pacts, favorable trade pacts, etc. There must be plenty of libertarians, Marxists, and dozens of other groups that would like to try their ideas on uninhabited lands (near rivers, etc.).

Robin Hanson

Plausibly would help, though unlikely to happen.

Prof. Steven Wayne Newell

AI technology as a means to augment the experience of our reality enhances the capacity to evaluate more factors in using our reason to seek to adapt to our environment in pursuit of goals. My patents 10404370, 12107632, and US D1076911S for a cyber-bio adaptation give trigram data flow, with X, Y, and Z axis LiDAR sensing to AI nodes and 3D memory of sight and feel in micro-thin wafers of diamond impregnated with carbon atoms displaced by nitrogen resulting in crystal luminosity response to matrix neuro-nets of light-signal based data streaming out/in LiDAR signal for the AI node acting as Agent. With faster trigram code than binary eight-signal code for language also the AI node acting as Agent can discuss with other nodes as they sense a 3D environment in real-time, 4th-dimensional processes. My AI augmented reality to assist humans in understanding environmental and social dynamic processes offers a new mode to learn to adapt. This is why I tried to show the concept in my sci-fi novel writing since 1987, and my recent effort to organize the production of a sci-fi movie series. I find people are afraid. But I hope we can get over that and try to work in cooperation to make necessary adaptive changes at this time.

TGGP

> We have good data showing this for species and corporate cultures

Could you link to such data?

barnabus

Could you repost it here if possible? Your paper on Quillette is behind a paywall...
