31 Comments
James Hudson's avatar

“Our choices are stark.” This is an oddly collectivist framing, as if *mankind* were doing something closely similar to individual choice. Maybe, “The future looks bad”?

Tim Tyler's avatar

Re: "This drift will plausibly cause our civilization to fall, and likely be replaced by now insular fertile religious cultures. More similar rises and falls may follow."

If this were true, we would likely look back and see the remains of previous technological civilizations that had gone bust. However, there is no credible evidence for those. We are first, which counts as evidence that this line of argument is mistaken.

DalaiLana's avatar

However, we do see many civilizations going bust, and they even record their reasoning for why it happened (usually wealth leading to complacency). Letting children run the culture is arguably a form of complacency.

Steven's avatar

"world of tradition, ignorance, war, poverty, and strong selection pressures"

You have grouped terms here that do not necessarily belong together. A world of tradition is not necessarily one of ignorance, of war, or of poverty. Tradition is literally a means of conserving the good and implementing persistent solutions to persistent problem sets; it's the lack thereof that guarantees the eventual downfall of any culture. It should surprise no one that when we neglect to maintain our culture, it breaks down under continuing friction.

Consider a very simple thought experiment: If you lived in a perfect culture, one that you knew by whatever standard of proof you want that it could not be improved further in any significant way without opportunity costs and trade-offs that are worse on net, wouldn't you reasonably, rationally, attempt to conserve it as it is, to teach others to likewise conserve that best of all possible worlds, whether they fully understand the reasons that it works or not? Would you not discourage those who clearly do NOT understand why it works from transgressing the rules that exist for their own protection and benefit?

Stephen Lindsay's avatar

True. Our government and institutions, despite the drift, are pretty decent and historically still probably top 1% of everything that has gone before. If we decide to replace them with something new, odds are good that we end up worse off.

Steven's avatar

That's kind of the central point of Rob's argument: without strong selection pressures and negative feedback from inferior choices our 'something new' IS effectively random. And, as I agree with my fellow Steven above, the better our current culture is the higher the corresponding odds that any arbitrarily selected change in it will be ultimately detrimental.

DalaiLana's avatar

This is true, but arguably people have been very poor at recognizing when their culture is good vs bad, tending to preserve it no matter what. Sort of the inverse of how we seem to seek to change our culture no matter what.

Steven's avatar

"Arguably"? It seems to me that the incorrect assessments tend to run primarily in the direction of people with high-functioning cultures failing to recognize how good they have it, not so much toward people in dysfunctional cultures considering them better than they are. That's admittedly a challenging comparison to make, because different cultures may legitimately have different priorities that lead them to have developed different solutions that require different trade-offs, but cultures don't generally manage to last and stabilize into being 'traditional' unless those traditions represent practices that repeatedly proved more functional than the majority of other options at the time when the practice was first established. Put another way, it's not so much that 'traditional' cultures are necessarily the best cultures, but that 'bad' cultures are unlikely to be preserved long enough to become 'traditional' cultures without some unusual circumstances that remove selection pressures and negative feedback; hence traditional cultures are likely to have above-average cultural fitness.

In theory though, you touch on a claim I've made previously here: the conceptual possibility space of cultures seems to have a central cluster distribution rather than a uniform distribution, such that arbitrary changes will tend to lead to a reversion toward the mean over time (high functioning cultures eventually decay and low functioning cultures eventually either improve or are eliminated). This would suggest that making arbitrary changes is only rational if the majority of people affected presume not only that their culture is imperfect, but that it is currently worse than the average possible culture. This is not what I often see, where the dominant view is generally something more like 'Our culture is better than most, but there's still room for improvements'. If the culture is genuinely above average, it's rational to prevent any further arbitrary changes because they are more likely to be detrimental.

Even that overstates the probable risk/reward of arbitrary changes, though, because culture has potential irrecoverable negative states (extinction) but, as far as we can tell, no positive state that cannot be lost. The expected value of arbitrary change is thus zero or less for the gambler, so cultural changes are subject to the Gambler's Ruin: despite the tendency toward reversion to the mean, enough arbitrary changes will eventually hit an irrecoverable failure state. It is therefore irrational to keep making arbitrary changes indefinitely, because they are statistically assured to eventually result in ruin.
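The Gambler's Ruin point can be sketched numerically. This is a toy simulation, not anything from the post: cultural "fitness" is modeled as a mean-reverting random walk with an absorbing barrier at zero and no absorbing positive state, and every parameter here is an arbitrary assumption chosen only to make the dynamic visible. Given enough arbitrary shocks, nearly every run eventually hits ruin despite the pull back toward the mean.

```python
import random

def simulate(steps=10_000, start=50.0, mean=50.0, pull=0.01, noise=5.0, seed=None):
    """Toy model: 'fitness' as a mean-reverting random walk with an
    absorbing barrier at 0 (extinction) and no absorbing upper state.
    Returns True if ruin occurred within `steps`."""
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        # reversion toward the mean, plus an arbitrary random shock
        x += pull * (mean - x) + rng.gauss(0, noise)
        if x <= 0:
            return True  # irrecoverable failure state reached
    return False

trials = 200
ruined = sum(simulate(seed=i) for i in range(trials))
print(f"ruin within 10k steps: {ruined}/{trials}")
```

With these (made-up) parameters the barrier sits only about 1.4 stationary standard deviations below the mean, so most runs hit it well before 10,000 steps; raising `pull` or lowering `noise` delays ruin but, with an unbounded horizon, never prevents it.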

Tim Tyler's avatar

Re: "fast cultural change largely uncorrelated with adaptiveness has been a key driver of our cultures drifting into maladaption."

I think you mean "negatively correlated with adaptiveness". A lot of rapid modern cultural change actively hurts human host fitness - as indicated by factors such as the demographic transition and the effect of education on human host fertility.

If meme and gene reproduction were "uncorrelated", things wouldn't be so bad - but of course memes and genes compete for some of the same resources. If you are raising some screaming kids, you are probably not such an effective influencer - and vice versa.

DalaiLana's avatar

The issue, I think, is the topic of individual choice. Previous cultures were largely collectivist out of necessity. In modernity, we have enshrined the rights of the individual above all, and the good of the individual frequently conflicts with the good of the collective. For example, in the past being childless was often maladaptive. Today, being childfree has many benefits.

Frank Lantz's avatar

My reaction (in case it's useful).

First - yes, this makes sense, Joe Henrich, cultural evolution, I get it. Then, rationality and modernism - yes, makes sense that there would be lots of messy, problematic, even paradoxical dynamics as humans attempt to deliberately implement consciously-chosen goal-directed processes to augment or replace the instincts/traditions of emergent, organic evolutionary processes. This problematic aspect of modernism/rationality seems incredibly important and interesting and I'm excited that you are thinking about it.

Then, "maladaption", and I start to lose the thread. Is this entirely about fertility rates? I feel like I reasonably-well understand the problem of low fertility and am reasonably-appropriately worried about it, but I cannot overcome some degree of skepticism that we know for certain how the fertility crisis is going to play out. Am I supposed to forget that fairly recently it was widely-accepted, even among many experts, that humanity faced the opposite problem? Sure, it's possible that this time we really *do* understand how to extrapolate from current trends to accurately model the future trajectory of humanity, but you seem fully certain in a way I don't quite follow.

After all, I see other ways for humanity to screw the pooch - nuclear war, lethal pandemics, various other kinds of culture-destroying catastrophes. Your confidence about the significance of low-fertility - which you are taking as a given and building your entire argument around - reminds me of the confidence of some AI-safety zealots about the inevitability of superintelligence danger. I understand the difference, and like you I'm more worried about low-fertility than AI danger, but in both cases I see massive over-confidence in the ability to accurately model very complex and hard-to-predict dynamics.

This is more of a vibes-based critique than a substantial argument. Again, I offer it in hopes that it is useful to you.

Robin Hanson's avatar

Fertility fall is plausibly a symptom of cultural drift, a problem that should cause many other failures as well. We have both empirical and theoretical evidence that there is a problem here.

name12345's avatar

One hypothesis is that youth cultures are 1) very fickle and malleable, 2) moral intuitions can be relatively easily hijacked, and 3) abstract reasoning often gives way to primal instincts; therefore, conscious steering (points 1 & 2) with good looking influencers (points 1 & 3) could create pronatalist youth cultures. Caplan's arguments that kids don't need to cost a lot (in money, stress, etc.) yet have huge return on investment of joy if looked at in the right way could be the meta-steering.

GamblingManFromRambling2121's avatar

some will choose 1), others will choose 2), and even fewer will choose 3) that looks like 1) :P

Phil Getts's avatar

"Either 1) do nothing and slowly go bust, 2) quit embracing abstract reason so much, returning to a world of tradition, ignorance, war, poverty, and strong selection pressures, or 3) find and adopt ways to go all-in on using abstract reason to choose adaptive cultural elements."

I favor 4) use empirical evidence to choose adaptive cultural elements.

Abstract reasoning fails because it isn't empirical. It's all just words, words, words, without the parties even agreeing on what the most-important words mean. Some people bring data to the debate, but nobody checks afterwards whose predictions were right.

The Enlightenment was most-significantly an epistemological revolution, which turned from Rationalism, the deductive epistemology of geometry and dialectic, which proclaims its conclusions to be unquestionable, and thus requires words to be unambiguous and to mean the same thing in all circumstances, and thus requires an ontology of eternal essences; to empiricism, which claims its inductive inferences are never 100% conclusive, and avoids relying on the meanings of abstract words by making their definitions operational (specified by a procedure for determining empirically whether the word applies), and using measurements rather than Boolean true/false statements.

It's time to bring politics out of the Middle Ages and into the 17th century. For instance, by conducting political experiments. Today, parties fight for months over a policy, yet have no plan for measuring its effects once it's in place. Parties who pass disastrous legislation are never held accountable for it; there is no official judgement of its effectiveness, so a partisan press can leave a faction entirely unaware of the disasters they've left in their wake just by not reporting them. Try price controls yet again? Or Modern Monetary Theory? Sure, why not?

Any legislation should be required to specify its intended effect operationally, with a protocol by which its success will be measured, and a timeline and budget for a bipartisan agency, along the lines of the old Office for Technology Assessment, to compute and publish that legislation's success.

Another option is to have futures markets in politics, or some national test in various areas of expertise, and to weight votes on each question by the scores each voter has attained on the tests of relevance. This is basically a voting mechanism that uses statistical learning. This is obviously the right thing to do, since neural networks using this mechanism are vastly more efficient at bringing all information to bear to make accurate predictions than GOFAI, the non-learning method of AI which is analogous to how our lawmaking process works today. A democracy is a learning algorithm, so let's use a learning algorithm that works! Also note that this would be most naturally implemented as a direct rather than a representative democracy.
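The score-weighted voting mechanism can be sketched in a few lines. This is a minimal illustration of the idea, not a proposal from the comment; the voters, policy names, and test scores are all hypothetical:

```python
from collections import defaultdict

def weighted_vote(ballots):
    """Tally votes where each voter's ballot counts in proportion to
    their score on a test of relevance for the question at hand."""
    tally = defaultdict(float)
    for voter, choice, score in ballots:
        tally[choice] += score
    return max(tally, key=tally.get)

# Hypothetical ballots: (voter, chosen policy, relevance-test score)
ballots = [
    ("alice", "policy_A", 0.9),  # scored high on the relevant domain test
    ("bob",   "policy_B", 0.4),
    ("carol", "policy_B", 0.3),
]
print(weighted_vote(ballots))  # policy_A wins: 0.9 vs 0.7
```

Here a single well-scoring voter outweighs two low-scoring ones, which is the intended effect of the mechanism; whether that is desirable is of course the political question.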

spork's avatar

Your 4 is literally Robin's 3. He never meant reason in the Kantian sense of pure reason. He meant something like "Choose to override our selection-hewn traditions with something we judge to be superior."

Phil Getts's avatar

Sorry for misreading your comment re. Kant. (Also for representing Kant incorrectly; he did understand the need to change from speaking of true and false to speaking of degree of truth, but his usual mode of writing was dialectical. I've deleted that section.) I was sleep-deprived and in a terrible rush.

I can't know what Robin was thinking, but the problems he alluded to with reason in the past were problems with rationalism, not problems with reason; and the phrase "go all-in" seems to indicate that we didn't reason hard enough or long enough, rather than that we were just using an outmoded definition of "reason". Switching from rationalism to empiricism isn't easy, but it is /possible/, unlike getting rationalism to work in the real world at superhuman levels of ontological resolution (resolution as in granularity, "the pixel size of concepts").

Phil Getts's avatar

It isn't, though. Unless you specifically call out the differences between rationalism and empiricism, people will just assume "rational" means "thinking correctly", and will assume "thinking correctly" means using propositional logic. That's so ingrained into westerners that at least 99% of them will do that. It's crucial to explain that we don't mean using dialectic, which is verbal logical argument with English words which are assumed to have eternal essences. (People don't KNOW that's what it means, but that's what they will DO.) We mean using numeric real-valued measurements on continuous spaces, and nominalist, operationalized definitions of words. And you need to explain in detail what "nominalist, operationalized definitions of words" means, or people will instantly revert into thinking they know what words mean.

Look at the rationalist community. Perfect example. These guys should know what empiricism is, and they even know how to do Bayesian statistics; yet they shattered into post-rats because Eliezer kept thinking in terms of symbolic AI (which is rational), which took him down utterly hopeless paths searching for "friendly AI"; and they keep reverting to rational thought in their writings rather than boring down in a nominalist way into what "values" and "goals" mean, and searching for proofs over symbols (rationalist) instead of statistical proofs of equilibria and attractors within continuous phase spaces.

Liam Riley's avatar

You suggest here that youth culture associated with schooling is a major cause of culture becoming increasingly maladaptive. The most popular and influential student movements thus far have been anti-war, voting-rights, anti-dictatorship, anti-sexual-violence, and anti-racism movements.

All these movements look extremely pro-adaptive to me, considering they arose in a world where weapons now kill en masse and/or remotely, where high population density requires information sharing to meet needs, and where basic needs often require cooperation with people across race and sex boundaries.

These are ideas built to address issues in a world which technological change has pushed people closer, both locally and globally. Notably, these youth movements all arose after technological impacts had begun to transform society and generated new issues and needs which those movements were designed to address.

In that way they are no different (or more maladaptive) than, say, the temperance movement, which arose when the old reliance on alcohol (as a safe alternative to water, and as a trade good) collided with the need for sober machine operators.

The overall material drivers in society seem much more powerful than abstract ideas in the heads of youths imo, unless we're categorising things like all scientific development under that too?

Tim Tyler's avatar

I still think that civilization crash followed by a Mormon / Amish / Muslim revival is likely a fantasy. We have some 8 billion humans. Yes, humans face some new competition from machines, and this will eventually lead to "peak human" being reached - followed by a decline in the human population. However, we need not worry much about innovation coming to a halt as a result of under-population - machines can pitch in and help out. The picture presented here is too much influenced by historical cycles with not enough respect for progress - for my taste.

Maladaptive memes are a problem, but they are not an especially new problem. We can expect the problem to get worse as the world gets more crowded, communication tech improves and horizontal meme transfer becomes a more dominant force. We do have some tools to help with those. For one thing, our memetic immune systems can be educated. There are memetic equivalents to vaccines - for example.

"Cultural drift" is a pretty dubious metaphor for what is happening, IMO. For one thing, there is a nearby term "memetic drift" - the analog of genetic drift - which is a different concept. For another, the term "drift" implies or suggests undirected changes - whereas many memes actively work to redirect resources from gene reproduction to meme reproduction. "Drift" doesn't cut it - because it both directs attention away from the real cause of the problem and underestimates its severity. On one hand this is just a terminology issue - but I am a terminology Nazi. Terminology directs thought. We need to get it right.

Of course, these kinds of issues are part of why we need a mature science of cultural evolution.

Dagon's avatar

Don't a lot of animals have some amount of learning or culture-like behavior propagation? I think the complexity of such things and the balance of instinct vs learned behavior is more of a continuum. Of course, we don't have enough data to know the shape of the curve, or even the units we'd measure in. I hypothesize that humans are orders of magnitude more learning-oriented than cats, say, but I can't tell if it's linear, exponential, or logistic (S-shaped) for hypothetical future beings.

As far as we know, only humans are self-reflective enough to examine and vocalize what we're learning (and what others are learning from us). That too may be a continuum, or it may be a threshold effect.

Regardless, the amount of choice available to individuals over what they've already learned/absorbed is very debatable, and how to decide to try to shift what they're teaching others is heavily dependent on what they've learned culturally. It's kind of obvious that if large groups had "better" equilibria and cultural beliefs, they'd be more successful (by whatever definition they prefer). But it's not obvious at all how choice works in the first place.

TGGP's avatar

> a sacred gold

I believe "gold" should be "goal".

> If so, our current industrial will

Is that missing the word "era" before "will"?

Going "bust" is a common result of going "all-in" when gambling, so it's slightly odd to have them as contrasts.

Robin Hanson's avatar

Fixed; thanks

MutterFodder's avatar

Are you proposing some sort of Platonic "Forms of the Good" as a road map for choosing a cultural adaptive strategy?

Robin Hanson's avatar

I don't see how.

Bassoe's avatar

Lifespan and automation are also workarounds assuming they're not snake oil. Namely, if the longevity escape velocity thing actually plays out it doesn't matter how low the birthrate among secular westerners gets since anything above zero is still positive population growth and if we automate everything, our civilization can keep running even if we're all dead.

Robin Hanson's avatar

Cultural drift is not solved by immortality or automation, though some of its symptoms would be alleviated.

James Hudson's avatar

The main justification for pessimism about the prospects for our civilization is the low and falling birth rates. Quasi-immortality and AGI robots would undermine that concern. Another possibly encouraging development would be widespread genetic enhancement of human beings, which might change society very much for the better.

TGGP's avatar

Defeating aging wouldn't be quite the same as defeating death, though it would be a major step in that direction.