
You write that “the claim that morals are independent of evolution makes much less sense now that we see each culture’s morals as arising via its cultural evolution.” This seems to confuse (a) the process by which a moral theory (together with a view about how best to implement it in practice) is arrived at, and (b) the theory itself (and associated doctrine about implementation). A moral realist will say that moral theory itself is just as independent of evolution as is physics, though both are learned by an evolutionary cultural process.

Of course, any particular culture may mistake the truth about morality, or about physics. Fortunately for us, our culture has at least *approached very near* the truth in both domains.


According to this moral realist view evolution is still in the business of discovering moral truth, so you could still make the claim that what propagates is more likely to be moral. Fitness is evidence of good morality, just as it is evidence of good physics.


It's not impossible to have a realist view (in the sense of normative facts) where that which is good is that which is adaptive, although even then you would end up with something fairly relativistic or mind-dependent: e.g., it might be "good" for you to kill a neighbouring tribe and kidnap their women, but "bad" for that tribe to do the same to you. It's not impossible, given large enough "units of selection", for such "moral" disagreements to be resolved, but even then I'm not sure Hanson actually believes that evolution is driven by sufficiently large units.

The second issue is that most moral realists would like to have arguments with "normative force": simply telling someone who likes to stomp their own babies to death that what they are doing is "bad", meaning that they don't like it, or that it causes suffering, or "boo", or that it is maladaptive, is not sufficient. Realists are really trying to say something like "Independent of your desires/beliefs you have a strong reason not to stomp your own babies to death", with being "rational" understood as doing that which you have most "reason" to do. Hanson's account doesn't seem to have such "normative force".

Lastly, within meta-ethics, claims that moral beliefs are the product of cultural or DNA-based Darwinian evolution are usually treated not as evidence of moral truth but as something that undermines the reason to trust such beliefs/intuitions; this is the essence of evolutionary debunking arguments. The usual response by realists is not to say that Darwinian evolution is truth-tracking with respect to morality, but simply to deny that the human mind is wholly the product of Darwinian evolution.


If the choice is between denying that the human mind is wholly the product of evolution and having less forceful moral arguments, I choose the latter.


I agree.


The response to unknown existential threats is diversity. It is a mistake to assume that improvements to humanity in response to the existing environment and culture are enacting a Darwinistic program. Evolutionary jumps take place when large populations do not adapt to changed conditions. The species that survive have sufficient diversity that one of their many phenotypes not only survives, but thrives in the change. Vegetable gardening and mining are essential skills for a social collapse. Human organizing and peace making are important skills for technological advance. One must not choose between the two. Diversity dictates that both be maintained.


An argument for diversity as adaptive to threats is an argument of the form I am suggesting. Promoting diversity in our future evolution may indeed promote our continued existence.


I think this viewpoint is most aligned with my own. In the face of a physical (and cultural) world that we can't fully understand or predict, the best course of action is to promote diversity. This is the argument for preserving the half million or so species of beetles on Earth: Not because every one of them is especially important, but because biodiversity in general equates with resilience in the face of an unknowable future.

Regarding cultural evolution, of course everyone fixates on how to put their thumb on the scale to deflect things in a direction they prefer. But this is just selfishness. The more fundamental problem is how do we maintain cultural diversity when technology (and culture itself) are pushing us so strongly toward a monoculture.
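A toy Monte Carlo can make the resilience argument above concrete (a minimal sketch; the shock count, the per-shock extinction probability, and the survival rule are illustrative assumptions of mine, not anything claimed in the comment): a monoculture has to weather every shock on its own, while a diverse population persists as long as any one type does.

```python
import random

def population_survives(num_types, num_shocks=20, p_kill=0.05):
    """One trial: each shock independently wipes out each surviving type
    with probability p_kill; the population persists if any type remains.
    All parameter values are arbitrary, chosen only for illustration."""
    alive = num_types
    for _ in range(num_shocks):
        alive = sum(1 for _ in range(alive) if random.random() > p_kill)
        if alive == 0:
            return False
    return True

def survival_rate(num_types, trials=10_000):
    return sum(population_survives(num_types) for _ in range(trials)) / trials

for k in (1, 2, 5, 20):
    print(f"{k:>2} types -> survival rate ~ {survival_rate(k):.2f}")
```

With these made-up numbers a single type survives roughly a third of the time while twenty types survive almost always; the qualitative lesson, not the particular numbers, is the point.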


I fully agree. We cannot control how the monoculture treats true cultural diversity. All we can hope for is that within our subculture cooperation across true cultural diversity will be something to cherish.


Moral Realism is both a lie and an illusion.

If we reject moral realism, there is no “problem of evil”. There is no need to explain why people are evil. Evil is natural. Instead, there is a “problem of good”. It is hard to explain how humans manage to create cooperation, especially on a large scale.

My friend elaborates more in his essay: https://thewaywardaxolotl.blogspot.com/2024/08/the-case-against-moral-realism.html.


In the end, X, Y, and Z are the same thing - success from a Darwinian point of view, that is, doing what produces most offspring/descendants that are capable and likely to continue to produce most offspring/descendants, and so on into the indefinite future.

From a morality point of view, you can look at history and see what morals lead to the most descendants, and perhaps try to anticipate a bit into the future. Nazism wasn't successful because it was too aggressive and made too many enemies. Also, anything tied to a particular people is likely to fail by limiting itself. You can cross off a lot of moral and social developments and experiments this way. You are left with the major long-lasting moral systems of the world and their branches and developments - Christianity, Islam, Hinduism, Confucianism, and Buddhism. It would probably make sense to see what they have most in common, and go along with that. A development of whatever that is is likely to win out anyway, so you, as a Neo Darwinist, should try to be an early adopter.


You seem to be making arguments of the form that I recommend in this post.


I agree with the argument that if we're serious about making fitness claims about values, the evidence from extant survival points in the direction of traditional religious values. Different traditions do tend to conflict a lot with one another, so that should lead us to restrict the argument to a set of values that is shared across traditions. Such a set does not admit any particular claims about supernatural beings, but it does admit a notion of interpersonal duty and meaning. The evidence from GDP-per-capita and personal success stories also counts, and in many cases points away from traditional values. In addition, fitness in a past environment may not correlate with fitness in a future one. If we're going to declare some values more likely to be fit going forward, please let's try to do it on the basis of facts.


Agree mostly, but there may be claims about the supernatural that are common - such as a creator god, and an afterlife rewarding and punishing behaviour in this life. These claims may be essential to productive attitudes to life - such as approaching life as a challenge to improve and grow morally.

Also, it may be that some principles and points made by religions benefit from supernatural detail when working as social and/or moral guidance. For example, you might philosophise that knowledge of people's own and others' vulnerability lies at the heart of morality, but with an accompanying story involving a snake asking naked people to eat fruit from the tree of knowledge, the point may become more memorable and easier to introduce to most people and perhaps even children.

You might look at the common underlying philosophical points, and pick the stories from each that are most effective at helping to spread the points.


Yes. I guess the problem is that if the stories contradict each other they make each other less effective.


Robin Hanson is our Nietzsche. I can't be the only person who sees this connection, yet I sometimes think I am.


I think I see what you mean. The dream of a rational basis for moral choice exerts a gravitational field unlike any other. The greater the mind, the stronger the attraction.


Huh, but what's the normative argument for this view?!?!? Objection (D) was logically correct whether or not it actually persuaded people -- you can't answer a "should we X" question by saying "I predict that in the long run we will/won't end up doing X". EDIT: And since **you** are advancing the normative claim in advocating accepting social darwinism, it's your burden to argue that such a theory suggests normatively desirable policies, and (D) is the observation that you haven't done so.

I don't understand the logical structure here. How is it different than responding to various appealing but misguided ideas like price controls in economics by saying "looks like these are rhetorically appealing so have an advantage in our kind of society so therefore let's do that."

The whole point of engaging in reasoned discussion about future choices is to recognize places where what we might be inclined to otherwise do should be avoided and try to avoid it. A theory of what will likely happen in the future can obviously inform our views about long term consequences but the whole point is to try to sometimes divert the course away from what might otherwise happen.

I mean so what if policy D will tend to result in lower competitive advantage? That's only a reason not to do it if it means that doing it results in a worse world than not.


"For most X that you want to promote" is the normative argument. For most things you want, this is how to get it.


Sorry, are you suggesting that all you mean by social darwinism is the descriptive claim that in fact what happens in the long run is subject to memetic selection and other selective forces?

If so I think that's a very confusing terminological choice as it's not what social darwinism meant originally and it renders D not as an objection to social darwinism but a claim about how one should apply that fact.


I said what I meant, directly.


I conjecture that not a single one of your readers understands what you're saying. I didn't even finish the article so I could be wrong, but just my impression.


Any value you currently have has been selected by cultural evolution. It is in you because it has so far won the game of cultural evolution. Cultural evolution is the value-producing machine. Adopting a new value or rejecting an existing value based on its predicted contribution to the propagation of all your memes and genes is a standard procedure in evolution. In biology we call it mate selection. You are a joint venture of memes and genes and they may change board members at will. We do it all the time. It is the value-producing act.


I also immediately zeroed in on objection D. I don't understand Hanson's reply*, but it seems likely that should have been the topic of the entire article if he's trying to make a normative claim.

I usually don't understand Hanson since he never addresses normative ethics.

* His reply was: "And re (D), that is no more a criticism of this than of any other concrete claims about where lies moral value."


Moore's argument said it doesn't make sense to say "morality is C" for any specific C, because then it wouldn't make sense to ask "but is C moral?" This argument works for any specific C.


As I understand it, the open question argument is arguing against naturalistic attempts to say Good/Bad is X analytically. But moral realism in the sense of objective normative facts etc. remains unharmed.

Do you believe in any objective normative facts about ethics? And/or are you deriving the force of your arguments from somewhere else, such as wants and preferences?


Few who want to actually use Social Darwinism need to argue that evolution is good analytically. Just like few who want to use "greatest good for the greatest number" need to argue that is good analytically. The analytic critique is a distraction, and can apply equally to ANY claim about what is good.


In some trivial sense it's true to say the analytic critique is a distraction, as it's simply a linguistic issue. But more deeply this doesn't capture the "action guidingness" of normative properties, or in this case the "belief guidingness" and how you ought to think about this conceptually.

Here is Chappell alluding to this a while back: "But this conflates moral belief with truth, as well as evolutionary with normative goals. The fundamental moral facts, if there are any, did not evolve: like other abstract truths (e.g. mathematics), they just are. Perhaps our moral beliefs/dispositions were shaped in part by evolutionary selective pressures. But even if the evolutionary "purpose" of our moral beliefs (like everything else) is to help us survive and propagate our genes, that doesn't make it a "purpose" we must share. Normatively speaking, belief aims at truth, so the purpose of our moral beliefs is to accurately represent whatever the moral truths are. And whether it's good for us to survive is a substantive normative question -- albeit one that's plausibly settled by whether our lives tend to be good for us on net." (https://www.philosophyetc.net/2012/04/value-of-life.html) As in, if a moral reality really does exist, you ought to correctly pair various moral properties conceptually with whatever they really are a part of.


I just don't understand what point you are trying to make here about my post. It seems you now understand what I meant by my comment on "naturalistic fallacy".


Wants and preferences are survivors of cultural and biological evolution. They are replicators. Yes, we rely on them to determine what is of value, but that does not mean we didn’t derive a value from evolution. They are the standard bearers of cultural and biological evolution, acting only in its name.


Which is deeply puzzling. If someone said "We should increase the federal reserve funds rate to reduce inflation" and someone objected "But reducing inflation would be bad because it causes Y" it would be really odd to respond:

"That's no more a criticism of my claim that we should increase the funds rate than any other concrete argument that the effect isn't desirable."

Uhh, maybe but that's kinda the whole game.


I argued about this with TGGP a while back, over whether Hanson was a moral realist; it might be worth reading the whole conversation (https://open.substack.com/pub/richardhanania/p/the-dissident-right-should-engage?utm_source=direct&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=57910817). As I mention there, Richard Chappell has argued against Hanson's theory granting normativity.


I think I do have moral views, but find them hard to defend, and thus also poor bases for arguments. I try to give arguments with the strongest supporting bases I can find, and so when possible use other non moral bases.


I was wrong then, at least if such moral views entail normativity. The econ hat stuff is very similar to David Friedman's approach (http://www.daviddfriedman.com/Ideas%20I/Libertarianism/Economics%20vs%20Philosophy.pdf)


I call first dibs on "post-neo-social Darwinism."

"now that we understand culture, we can analyze how to best participate in and promote cultural struggles"

I don't even know what it means to understand culture.

"And that is my proposal for a Neo Social Darwinism, an especially defensible version of what has long been a reasonable and unreasonably maligned stance, that it is sensible to study how to best participate in and promote the Darwinian struggle for existence. "

We should promote the survival of the survivors, then, not necessarily the fittest. Since fittest is defined as those who survived.

"(A poll of my Twitter/X followers finds only 10% favoring policy that resists Darwinian outcomes.)"

I'm guessing these were the same people who could distinguish between their deeply held beliefs and their deeply held intellectual opinions.


“We should promote the survival of the survivors, then, not necessarily the fittest. Since fittest is defined as those who survived.”

The study of what survives in cultural evolution is not done with the objective of promoting it, so much as seeking to use it as a source of good breeding stock. By combining the values you have with proven propagation vectors, you can give your existing values a better chance at long-term survival.


Who is the ultimate beneficiary of all this? If ems might replace us at some point, your argument is to get all humans replaced by ems as quickly as possible. After all, current physics claims there should be no more or less consciousness with them than with us.

Even within evolution, humans are not the beneficiaries of evolution; rather, we are the vehicles of the replicators (strands of DNA). There's no ultimate beneficiary of evolution.

Yes, there's obviously sexual and cultural selection in human societies, but I don't see how its existence solves any of the incredibly hard questions about normative ethics and moral theory, the same kind of questions that the Friendly-AI people struggle with. After all, evolution is just a math process, not a normative moral theory. The same goes, e.g., for the leisure-work ratio.

There's a huge amount of suffering in the animal world due to intense Darwinian competition; it doesn't require killing to cause suffering. Why isn't this a parameter anywhere in your post/model? Of course you could claim via physics that no suffering exists, because consciousness does not exist, but in all honesty, if you asked physicists whether they believe suffering is meaningless and there's nothing wrong with genocide (minus the economic loss), you'd probably get very close to zero who think so, even after adjusting for obvious signalling. We probably just don't understand suffering yet. After all, Alex Tabarrok wrote an excellent paper on probabilistic ethics when consequences matter a lot. The same goes for pleasure, and all possible values we might have.

OK, re-reading your blog post, I might have missed the point, because you asked how to promote X beyond your death. Yeah, OK, fair enough; the same moral problems remain. We can dissect this a bit.

1. I think the important question is why you would even care that much about things beyond death. The drug example I don't think fits, because if you do them, you ruin the rest of your life or die. It's unlikely that the benefits outweigh the costs, especially because you can possibly live very long. And if someone says they think they do, why are we judging their utility functions?

2. I don't think non-sentient things have utility functions, and the reason to promote them is mostly related to signalling. This leaves descendants. How much you should care about descendants, I don't know; some caring makes sense, but beyond, say, grandkids this looks to me more like signalling.

3. The more reductive question here is how much the current generation should care about future generations. I don't pretend to know the math-utility answer to this. It seems hard; obviously not zero, but only to some degree, especially given the logistic CDF of a human life. I guess I leave that to game theorists to resolve.

4. But you don't need to go beyond generations; you can just talk about the work-leisure ratio. After all, work is mostly fitness-increasing, and leisure is not. The same menu of moral problems remains.

Also, cultural contagion. Suppose a company said it promotes "Darwinian competition" within the company. Is that the kind of workplace you want to work in, the kind of culture and the kind of people you want to work with? Obviously companies are in competition with each other, but culture matters.

Reminds me of Boeing. After the McDonnell Douglas merger they went from an engineer-run company to, I think, a lawyer-run company. A former Boeing CEO promoted very Darwinian thinking, where they'd fire the worst-performing group every year even if it was profitable and did nothing wrong. In fact the merger probably saved McDonnell Douglas from bankruptcy, but it contaminated the much better-run Boeing. Eventually this cultural process led to QA corner-cutting and the issues they are now having. I really recommend this video by a pilot about it: https://youtu.be/nCbHpJShoXk

If I asked Robin, he'd probably say "this is not what I want from an airplane company; I wanted safe, good airplanes, not someone who maximizes KPIs without caring about anything else". The point is that what an idea looks like on paper and what it is in reality are two different things. Culture does matter; ask even Tyler. My guess is that "promotion of neo social darwinism" will invite many unpleasant people and unpleasant ideas as well.

Anyone who has travelled knows there're unpleasant features of cultures. Some of these features might be very fitness-increasing, but that they are desirable, especially in the long run, is very questionable.

Also, do I get above-market returns by investing in companies that promote a "Darwinian culture"? Does Robin invest in such companies, and if not, why not? Or does this only apply to societies and not companies? The general reply is that there's much more to a successful company than having a Darwinian attitude to everything, and the same goes for societies. I wanna know what Tyler thinks of this.

Also, I'm curious what, on a concrete level, this promotion means now. Is there some math function we're trying to maximize (or minimize)? Fitness? Of humans, groups, individuals? Is it basically saying that, ceteris paribus, we should have more competition and faster selection? Or what exactly? This assumes more competition is overall fitness-increasing; it might instead reduce co-operation and decrease fitness. There are many examples in the animal kingdom where excessive competition, versus co-operation, has even led to extinction.

Anyway, I'd love to see a podcast debate with Tyler and you about this.


On (F), one can quibble about whether an ideology can “cause” a historical event, but to me and most observers it is clear that an evolutionary analysis is fundamentally woven into Nazi ideology. Our abhorrence of Nazism stems from its racial framing of Nietzsche’s Ubermensch / Untermensch concept, which rests on the conclusion of Nietzsche’s evolutionary analysis that nature is the will to power and the strong should take what they can.

True that not every evolutionary analysis has to come to this conclusion, as you mention, but Nietzsche’s and the Nazis’ did.


Odd that you are trying to take back the term Social Darwinism. This is the definition Google highlights in a search. Very negative but consistent with what it brings to mind for most:

“Social Darwinists believe in “survival of the fittest”—the idea that certain people become powerful in society because they are innately better. Social Darwinism has been used to justify imperialism, racism, eugenics and social inequality at various times over the past century and a half.”

I think what you are trying to do is different, and calling it social Darwinism might not help your cause.


some quibbles:

«As we humans are just not very good at distinguishing which of our very important goals are powerful general means versus ultimate ends, ... »

I would not say that we are bad at making that distinction. The distinction is usually decision-irrelevant, so we wisely won't bother making it. The brain's ultimate motive is minimizing prediction error. Goals that don't fundamentally align with that core drive are fanciful self-delusions. If your goals are so long-term that their fulfillment cannot be seen to be achieved within controllable/perceivable time horizons, they are picoeconomically unsustainable, hence unachievable.

"Yes, you could insist that you care little for the distant future, but only want to promote stuff now. But think of how you might advise a drug addict who just sought pleasure now, at the cost of plausibly dying in a few months."

That's not a fair comparison. People do not care about the distant future because they correctly perceive themselves to be unable to predict it to a reasonable degree. So-called long-termists, who demand idiotic things like expensive nuclear waste storage that lasts millennia, ignore that humanity will likely A) become more capable in the future, trivializing the problem within hundreds of years, or B) be extinct within hundreds of years. Obviously, we want to get to A), but wasting resources and attention on such "long-termist" plots today makes B) more likely.


We may indeed not often need to distinguish powerful means from ultimate ends, but that suggests we would be bad at doing so, having little need to develop such a skill.

You guys keep just claiming it is impossible to predict the long term, without proof.


"You guys keep just claiming it is impossible to predict the long term, without proof."

I believe your long-term predictions are actually useful. But also that long-term framing is often used to argue for counterproductive policy, by boosting weak arguments via pretentious claims to grand foresight.

Your neo-social Darwinist claim does not need a "long after you die" framing to be persuasive, at all. It's actually a hindrance. If through your actions you're not seeing desired directional changes and momentum towards X within your lifetime, they sure won't happen after it anyway.

[epistemic status: meh, whatever. My minor disagreements are boring even myself now.]


Promoting the struggle for existence is such a bad take. I'm honestly amazed at the quality of this post.


Care to say WHY it is a bad take?


The capability that enables some memes to proliferate while others perish is collaborative in nature. As Lynn Margulis pointed out, too much focus on competition in evolution is mistaken. When a gains-to-scale production function is in operation, cultural evolution is selecting for memetic groups that are able to grow and maintain functional coordination in the face of increasing scale. Growth in the cultural domain is primarily about collaboration, not competition. Coordination is collaboration, not competition. The mistake is in Darwinism itself promoting an incorrect view of evolution as a process dominated by competition. Instead it should be viewed as a process that explores coordination space: coordination between genes and other genes, coordination between memes and other memes, and coordination between memes and genes. It is not the mutation that is its driving force, it is recombination. We don’t need Neo Social Darwinism. We need Social Margulianism.
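A small replicator-dynamics sketch shows how one can "study what propagates" without prejudging competition versus coordination (the two-strategy payoffs below are entirely my own illustrative assumptions, not anything from the post or this comment): whether coordinators or loners take over falls out of the payoffs and the starting mix, not out of the method itself.

```python
# Discrete-time replicator dynamics for a two-strategy population game:
# "coordinators" do well when matched with each other, "loners" get a fixed
# payoff regardless of partner. Payoff numbers are arbitrary and illustrative.

def step(x, dt=0.1):
    f_coord = 5 * x + 1 * (1 - x)   # coordinator payoff vs a random partner
    f_loner = 2.0                   # loner payoff, independent of partner
    f_avg = x * f_coord + (1 - x) * f_loner
    return x + dt * x * (f_coord - f_avg)  # replicator update for coordinator share x

for x0 in (0.1, 0.3, 0.6):
    x = x0
    for _ in range(1000):
        x = step(x)
    print(f"initial coordinator share {x0:.1f} -> long-run share {x:.2f}")
```

With these payoffs, coordinators take over from any starting share above 25% and die out below it; change the numbers and the verdict changes, which is the sense in which the evolutionary analysis itself is neutral between competition and collaboration.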


I just said to analyze what wins, not that competition wins over cooperation. I left that question open on purpose.


The opponents of Social Darwinism will not let you clean it up by attaching a prefix. Social Darwinism evokes an emphasis on competition. If you want to promote a study of cultural evolution, you need to address the deep concern of the opponents of Social Darwinism.

Their deep concern is that the study of evolution in the value realm is not moral, because it leads people to focus on competition instead of collaboration. They have a point. The only way to discuss evolution in this realm without generating that adverse effect is to reject the emphasis on competition and replace it with an emphasis on collaboration.

Of course reality is multi-faceted and cannot be boiled down to a choice between competition or cooperation. I’m not saying that you have to choose one and forever deny the role of the other. It’s just a matter of emphasis, just about what sign you hang on the door. Hang a sign that says you’re looking for ways to collaborate, not one that says (to most people) that you’re looking for ways to murder or rape. More attractive sign on the door, more people coming in. Isn’t that cultural evolution 101?

BTW, have you read Robert Wright’s book NonZero? It posits that the direction of evolution is to find ever more non-zero-sum games and play them for mutual gain. We don’t need to salvage Social Darwinism when we can just get behind contemporary evolutionary analysis.


Natural selection has been the main way in which life has evolved, but with the rise of intelligent beings (us) we see more and more artificial selection; that is, the conscious decisions of us human beings are playing a greater and greater role in determining what sorts of living beings populate the earth. So the natures of future intelligent beings--which determine their decisions, given their environment--will more and more determine what life forms will subsequently exist. The natures prevailing in each generation of these beings will in considerable part be rather directly determined by the natures of the previous generation. Realistically, there is nothing we can do now to guarantee that our beliefs/values/styles/whatever will persist generation after generation, indefinitely; too many decisions by autonomous intelligent beings intervene between us and the far future. It would be irrational not to limit our concern to the next generation or two, where we have a modicum of influence.


Cultural selection was not possible until what you call "artificial" selection appeared. Analyses of cultural evolution include such effects.

You guys keep claiming that the future is impossible to predict, but offer no proof. Evolutionary analysis suggests otherwise, and I've offered a contrary example.


«It would be irrational not to limit our concern to the next generation or two, where we have a modicum of influence.»

Make that 3. Having great-grandchildren is very achievable. And that third generation has potentially a lot more members, if you play your cards with the first and second generations right. If anti-aging tech matures within your lifetime, you can increment that number potentially indefinitely.

«Realistically, there is nothing we can do now to guarantee that our beliefs/values/styles/whatever will persist generation after generation, indefinitely; too many decisions by autonomous intelligent beings intervene between us and the far future.»

Life is decision-making under uncertainty. There will never be any guarantees. But if you manage to create a large gen 3, you will also have exerted a large influence on what those intelligent autonomous beings will be like. If you amass wealth and power in the here and now, you can also shape gen 1 to 3's future environment advantageously.


I think I detect an ambiguity in your statement that “few care much about the future after they are gone . . .,” affecting the word ‘care’. In principle, I “care” a whole lot about the far future, which I expect to outweigh in importance everything that has happened up to now. I also “care” about the past, including the distant past; I wish things had gone altogether better than they did. But in a practical sense, I am almost exclusively concerned with my own affairs in the near--especially the very near--future, because these are what I understand best and what I can strongly influence. My influence over the past is nil; and over the far future it is unknowable, so that I exercise practically no *control*. In short, there is theoretical “caring” vs. practical “caring”—“far caring” vs. “near caring.” Your statement, above, is correct for *near caring*, but not for *far caring*.
