Choose: Cultural or Bayesian Morality
One of humanity’s key superpowers is our cultural plasticity: we change our species by each of us, as kids, copying the adults around us. We can remain comfortably aware that humans at other times and places are quite different, as long as we see each such cultural variation as well-suited to its situation; we would each want to act and think as they do in their situation.
But there’s more of a problem when the result of copying the folks around you is to disagree strongly and deeply with all those other humans from all the other human cultures that have existed or ever will. Yet this is what happens when one learns one’s morality from one’s culture, and treats morality not as a local social convention to coordinate local behavior, but instead as an absolute moral truth applicable to everyone in all times and places.
In this case the key question is: how can it be a valid inference for members of each culture to hold their culture’s differing estimate of moral truth, especially when they are aware of the very different estimates of other cultures? One might try to explain the variation across cultures over time as rational updating, except that on this theory such changes should roughly follow a random walk, which they usually don’t; and this also doesn’t address the huge variation across cultures at any given time.
To me, the obvious options here are either to limit our moral views to being within-culture claims, which then do not disagree across cultures, or to accept our human superpower of cultural plasticity as a non-Bayesian feature. In that case, the changes that happen to most of us as kids when we learn our culture, and then later when we learn about other cultures, are not well modeled as updating a Bayesian prior on the evidence of our early and then late education.
After all, while Bayesians should draw the same conclusions regardless of the order in which they learn their evidence, in fact the further evidence we learn later in life about the differing views of other cultures is not typically capable of moving us back to a culturally-neutral position of great uncertainty on moral truths. That earlier evidence in fact counted for far more in our final views.
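The order-invariance claim above can be made concrete with a small sketch. The numbers below are purely hypothetical illustrations, not drawn from any real study: a prior on some moral claim is updated on two independent pieces of evidence, standing in for "early education" and "later exposure to other cultures". A Bayesian ends at the same posterior no matter which piece arrives first, since posterior odds are the prior odds times a product of likelihood ratios, and products commute.

```python
from fractions import Fraction

def update(prior, lik_h, lik_not_h):
    """Posterior P(H|E) from prior P(H) and the likelihoods P(E|H), P(E|not H)."""
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

# Hypothetical numbers: start from culturally-neutral uncertainty.
prior = Fraction(1, 2)
early = (Fraction(9, 10), Fraction(2, 10))  # "early education" evidence
late = (Fraction(3, 10), Fraction(6, 10))   # "other cultures' views" evidence

# Update in both orders: a Bayesian gets the same answer either way.
early_then_late = update(update(prior, *early), *late)
late_then_early = update(update(prior, *late), *early)
print(early_then_late, late_then_early)  # both 9/13
```

The point of the sketch is the contrast: for actual humans, swapping which evidence comes first (which culture raises you) does change the final view, which is exactly what the Bayesian model forbids.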
The options here are stark: (A) reject the typical wide scope of moral claims as applying to all times and places, (B) accept that you were not Bayesian in how you picked your moral beliefs, (C) adopt a position of culturally-neutral substantial uncertainty on moral truth, or (D) try to explain why you are an exception, so that even though almost all humans were not Bayesian in adopting their culture’s views, you and your friends are in fact being Bayesian in adopting your culture’s views on moral truth in great detail.
Added Dec 1: I try to re-express the insight of this post better in this new post.