Evolution in animals selects not only for static features, like claws, but also for behaviors that change with context, like a habit of using claws to cut into prey. Thus long-term evolution selects for short-term change.
But at a higher level of abstraction, one can think of such behaviors of change as static habits that don’t change. And for most of human history, our behavior was mostly static in this sense. We evolved large libraries of behavioral habits and rules, and rules for deciding which ones win when they conflict, but mostly just executed those rules, instead of changing them.
The more complex and coherent our systems of habits and behaviors became, the more they came to approximate our standard decision theory ideal, and so could be seen as a combination of a Bayes-like system for updating estimates of facts based on info collected, and a simpler, more static system of values. Though as we didn’t fully reach this decision theory ideal, this division wasn’t clean.
Over time, however, these systems did change. The most common kind of change was mistakes, i.e., failures to follow one’s rules. There was also a lot of forgetting of rules, via failures in the key habits of passing habits on to new generations. These are both bad kinds of change, and so we evolved many behaviors that try to prevent them, leading to our general attitude of dislike and fear of change. Most abstract change is bad, after all.
But there were other kinds of change, kinds which eventually added up to great progress. For example, sometimes people varied their behaviors on purpose, more than would happen just via mistakes, to explore new possibilities. And they would repeat successful variations more, to learn what works.
Most importantly, they wouldn’t inherit the habits of prior generations uniformly, but instead preferred habits that were more popular, habits of higher status folks, and habits that seemed more successful according to many more specific rules. For example, we might have a rule to judge the success of fishing habits based on the rate at which people using them catch fish.
Overall then, we evolved to be generally wary of change, at least when conceived abstractly, but also to be eager to find rare but powerful innovations. For many thousands of years this has led to great gains in our tools and techniques for collecting info, growing food, keeping warm, fighting, and much more. But note that, until recently, these were mostly changes in how we achieved values that didn’t change nearly as much.
The last few centuries, however, have seen a great acceleration not only in the rate of change of our tools and techniques, but also in that of our norms and values.
We roughly understand how more wealth, better science, faster communication, larger orgs, and more tolerance for experimentation have allowed for sufficient selection among the many trials to support this faster rate of tech gains. However, it is much harder to see how we could have seen enough selection to support our faster rate of value change. After all, most tech can be changed and evaluated locally, but as most norm and value changes are hard to vary individually, their evaluation via selection requires whole communities who share norms to grow or die as a result of their value differences.
Over these last few centuries, we have seen substantial selection in the form of less successful cultures copying the habits and values of more successful cultures. But a great deal of value change seems to have been instigated by cultural activists within cultures. And while this process does involve selection among competing activists with varying bids, it is hard to see how such selection of activist bids could sufficiently proxy for selection of cultures themselves.
Perhaps past cultural evolution has somehow selected for powerful proxy mechanisms of this sort, mechanisms by which the short-term habits we use to pick winning cultural activists track well the long-term adaptiveness of entire cultures, mechanisms of which we are now unaware, and can’t even yet imagine. But if not, the simple story here is that such activist-driven cultural change is not actually adaptive, and thus our cultures have been drifting into maladaptation for centuries.
That is the scenario of cultural drift that I’ve been worrying about for the last year. And it is a type of change of which we should be very afraid. Change can be good when we sufficiently vet each change to verify that it makes us better. But much more change than that is very bad, though it may take a while for its harms to reveal themselves. Beware value change.
Excellent, well-explained conclusion:
"Change can be good when we sufficiently vet each change to verify that it makes us better. But much more change than that is very bad, though it may take a while for its harms to reveal themselves. Beware value change."
Hi Robin, big fan of your writing. I’ve been thinking about this a lot and I disagree that “We roughly understand how more wealth, better science, faster communication, larger orgs, and more tolerance for experimentation have allowed for sufficient selection among the many trials to support this faster rate of tech gains.”
I don’t think there’s anywhere near sufficient selection on the faster rate of tech: a single beneficial variation can spread through the population almost instantaneously, unlike a genetic mutation, which must spread much more slowly, and is therefore subject to a much more stringent “pressure test.” This slow rate of change is also better at vetting longer-term consequences and knock-on effects. We don’t do anything like this with tech. As Sarah Hill points out, we will likely look back on the pill one hundred years from now and be shocked at how cavalier we were with women’s hormones.
What’s more, tech change feeds back on value change. Every big leap in tech opens new pathways that cause us to change our evaluations and priorities. So if the fast rate of value change is an issue, then so is the rate of tech change.
I suspect we may have chosen a long, circuitous path that ultimately dead-ends, one we started down the moment we used our first stone tool. Relying on tech is an inherently unstable and risky long-term strategy, much like our loss of endogenous ascorbic acid synthesis: it anchors us to an external process that isn’t subject to the same rigorous, slow pressure of genetic selection and leaves us super vulnerable.
Would be interested in your thoughts on this.