37 Comments
Nov 25, 2023·edited Nov 25, 2023

> in fact the further evidence that we learn later in life about the different views of other cultures are not typically capable of moving us back to a culturally-neutral position of great uncertainty on moral truths.

You're making the very dubious assumption that a rational person well-informed about morality in different cultures would take a position of great uncertainty. Different historical cultures also have had different views on science, but it is not the case that a rational person well-informed about the cultural history of science would tend towards maximal uncertainty about which science is correct. The fact is, modern science is much closer to the truth than ancient Greek or Aztec understandings of the world, and a rational person would accept that modern science is just mostly right and ancient science was just mostly wrong. It's not rational to be uncertain just because you're exposed to many disagreeing perspectives. It depends on the merit of those perspectives.

Some cultures just have worse morality than others, with e.g. slavery, poverty, political oppression, shortsightedness. Exposure to these cultures just affirms that fact to a rational person.

Highly educated people do tend towards certain moral viewpoints that are different from the general public. This is because those viewpoints are better informed and more consistent and defensible. Not all moral viewpoints are created equal.

Also, you're making an implicit claim that morality is about "power" and "coordination" in a local society, and should be judged by these metrics. That is a very specific and extreme moral position for you to take, which can be used to justify all sorts of atrocities.

You're also implicitly assuming that the degree to which moral reasoning is rational/Bayesian is independent of culture. In fact, some cultures are much more rational, allowing free public dialogue and educating their people in critical thinking, and other cultures are much less rational, relying on accepting the word of authorities and suppressing any dissent. The rational cultures (the freethinking subcultures of Western democracies) do have substantial agreement about moral matters, and a general agreement that the repressive cultures are getting it wrong.

Doesn't this presuppose a framing of morality where there is a true underlying "what is moral", and people have different beliefs about what that thing is? The alternative is to think of morals as a subset of people's goals, and to observe that people raised in different environments have different goals.

Perhaps it depends on the meaning of morals "applying well to all times and places":

It could mean that _your behavior_ should follow similar principles across different times and places. For example, if you think people having access to knowledge is universally good, you would support airdropping copies of wikipedia to all countries, even [especially!] those whose governments restrict it, screw what local cultural norms say.

Alternatively it could mean some kind of expectation that all people should have similar moral goals to yourself, and a mentality that people who have different moral goals are broken and need to be re-programmed - the re-programming itself being the goal, and not just getting others to (even if grudgingly) act in ways that align with your own moral goals. This position seems stranger.

But if we take this framing of morals as goals rather than beliefs, Bayesianism doesn't really seem to apply in either case.

It's interesting. I believe that I have spent my life carefully updating my sense of right and wrong into something unrecognizable (and indeed unthinkable) to a child version of myself. I'm also aware that I'm bias-ridden and have likely made a lot of bad updates.

I think there is some evidence that most people are better "intuitive Bayesians" in childhood than in adulthood. Maybe it just so happens that by the time most of us are exposed to extra-cultural "evidence" we're no longer that good at updating (moral) beliefs.

To mind comes Gopnik comparing the process of growing up to simulated annealing: childhood is a "high temperature" regime, where updates are large and wider areas of parameter space are explored; adulthood is a "low temperature" regime, allowing only smaller updates to home in on local maxima after the first phase hopefully got us close-ish to the optimal solution. A great strategy for learning to tie shoelaces, not so great for forming ethical systems.
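Gopnik's annealing analogy can be made concrete with a minimal sketch (a toy objective and cooling schedule of my own choosing, not anything from the comment): early high-temperature steps take big jumps and even accept worse positions, while late low-temperature steps only fine-tune near the current optimum.

```python
import math
import random

def simulated_anneal(f, x0, steps=5000, t_start=5.0, t_end=0.01, seed=0):
    """Minimize f by simulated annealing: high temperature early
    ("childhood": large, exploratory updates), low temperature late
    ("adulthood": small refinements near the current solution)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / (steps - 1))
        # Proposal size shrinks with temperature: big jumps early on.
        cand = x + rng.gauss(0.0, t)
        fc = f(cand)
        # Always accept improvements; accept worsenings with
        # probability exp(-delta/t), which vanishes as t cools.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy multimodal objective with its global minimum near x = 2.
f = lambda x: (x - 2) ** 2 + 0.5 * math.sin(5 * x)
x, fx = simulated_anneal(f, x0=-10.0)
```

If the temperature never started high, the search would likely stall in whichever local dip lies nearest the starting point, which is roughly the commenter's worry about adult belief updating.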

To everything said by others (especially Berder), I have to add that if we demand, for updating to count as Bayesian, that the order of updates not influence the result, then almost _no_ updates most humans make are Bayesian. (See Scott Alexander's "trapped priors" for an extreme example.)
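The order-independence property can be sketched in a few lines (a toy conjugate Beta-Bernoulli model of my own choosing, used purely as illustration): the posterior depends only on the evidence counts, so feeding in the same evidence in a different order gives the same result.

```python
def beta_update(alpha, beta, observations):
    """Conjugate Beta-Bernoulli updating: each success bumps alpha,
    each failure bumps beta. The posterior depends only on the counts
    of successes and failures, never on the order they arrived in."""
    for obs in observations:
        if obs:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

evidence = [1, 0, 1, 1, 0, 1]

# The same evidence in two different orders...
post_fwd = beta_update(2, 2, evidence)
post_rev = beta_update(2, 2, list(reversed(evidence)))

# ...yields identical posteriors: (2+4 successes, 2+2 failures).
assert post_fwd == post_rev == (6, 4)
```

A "trapped prior", by contrast, is precisely a case where early evidence changes how later evidence is interpreted, so the counts alone no longer determine the outcome.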

I'd say those options are not mutually exclusive, because morality consists of many parts, and they are not all equally subject to the same forces. So A, B, C, and possibly some D.

There is another way, which is not a choice but a view that includes discussion of how what-is-under-discussion came about. This may make the choice an unforced error rather than an unavoidable dilemma.

If this is accepted as a possibility and one can avoid evolutionary just-so stories, "the obvious options" may still look obvious but very partial.

The plasticity may look like something else too.

Values are a bit like vowels. Vowels are constrained by the vocal tract; each language works with and changes a subset of the possible continuum, such that the neutral vowel, the schwa, sounds different compared to a truly neutral enunciation, and the same sound can be perceived as different vowels in different language groups/accents. Values are constrained by life events (worlding) and by more meta worries that we can call doctrine or, more particularly, 'grammar' (world-building), and they shift around within certain constraints.

While the plasticity does speak of a learning, adaptive space, in which we inherit priors and off you go choosing your way (or our way, if we chat among ourselves: culture), it also indicates that any vowel, or value, in any language or culture, is an outcome.

This means that there is a selective force to have a moral urge, but not for any particular value (or vowel) within the constraints of survival. In particular, an urge to should on others, i.e. to seek out and meet with others, and learn. If we world well, then there will be good outcomes; if we world-build badly, then paranoid psychopaths will be given free rein to implement empathy-free death cults (the failing Russian empire is the most egregious current example).

There is no selection for neutrality, for to be alive is to be biased/prior-ed. However, to world well we must deal with the variety of life, of accents and of values. We are biased 'to world' as much as we live a body; however, cultural outcomes are not directly selected for, only that we should should.

If we can take this on board then we can update our biases.

https://whyweshould.substack.com/about

Nov 25, 2023·edited Nov 25, 2023

This is only true if you assign essentially the same weight to the experiences that shape your inherited morality growing up as compared to the experiences that modify it as you get older.

In reality, the lessons that are taught to you as a child are more valuable in three ways:

First, they are the morality of the culture in which you live, and will therefore be useful for navigating the local social dynamics in addition to any inherent value.

Second, they are the assembled wisdom of generations, and therefore have already been adapted to many of the unusual circumstances and edge cases that you would have to navigate around if you made significant alterations. (Admittedly, sometimes by simply accepting injustice in those cases as the cost of doing business, but that is in itself an adaptation.)

Third, they are imparted to you by people who have genetic and cultural reasons to wish for your success and few reasons, beyond a desire for you to agree with them, to lie to you. As anyone who has spent sufficient time on the Internet can attest, this becomes much less true once you have to deal with the broader public. Later lessons in morality will be much more heavily tainted by self-interest and political calculation.

B seems correct. It would be a very strange moral epistemology that took our moral beliefs to be justified on the basis of "updating a Bayesian prior on the evidence of our early education".

Similar remarks will, of course, also apply to our non-moral epistemological beliefs -- such as whether or not to endorse a Bayesian epistemology.

In general, Bayesian updating seems a poor model for understanding a priori justification, and foundational philosophical commitments such as favoring induction over counterinduction, taking green and blue rather than bleen and grue to be projectable predicates, etc.

What do you make of the Golden Rule?

I don't see what's so stark about it. (B) is obviously true as a description of my foundational moral intuitions (=unscrutinized moral beliefs), but scrutiny under roughly Bayesian strategies has led me in the direction of several views (e.g. ethical veganism, EA, scalar consequentialism sans obligation or permissibility, strong support for voluntarist eugenics) that differ radically from both mainstream views in my culture, and my own untutored intuitions as shaped by that culture.

This ignores a few points in modernity's favor.

In the past, views of the way the world works were much more flawed, from physics to biology to astronomy. We now have statistics and epistemology, and the philosophy in the water supply has built on the past.

When part of a model is compromised that heavily, it compromises the model in general.

For example, believing in the four humors as opposed to the periodic table of elements may induce one to believe that balancing the humors is an important part of health, and perhaps reveals an important moral truth about the universe as well.

Another advantage is a broader view of things: in the past, awareness of other cultures was much more limited, due to lack of travel, education, literacy, etc., compared to today.

Today there are more people, which means the smartest people are likely smarter than the smartest people of the past, and they can congregate more easily.

Bias is there to be overlived.

What changes about morality if we just switch the word 'moral' (as in describing something as morally correct) with the word 'rule'?

What does 'moral' mean on a planet with no life? How can it mean anything?

I cannot update beyond my intuition that moral reasoning is a set of arbitrary assertions anchored in social rules, and I would appreciate some help on this.

Wouldn’t it be vastly more likely that someone who knows Bayes' rule is acting in a Bayesian way? D seems most likely, given the prevalence of magical thinking in most times by most people.

Ignoring realists who claim to have some grand theory of moral progress, my suspicion is that a common response from realists might be to argue that many of the supposed differences in moral beliefs across cultures are really just due to differences in circumstances, resulting in people choosing the least bad beliefs (while nevertheless having fairly universal intuitions, etc.). Examples might include choosing slavery over genocide.

This seems to have a great number of radical conclusions that I've yet to see any realist endorse, so probably isn't a very workable response.
