One of humanity’s key superpowers is our cultural plasticity: we remake our species by each of us, as kids, copying the adults around us. We can remain well aware that humans at other times and places are quite different, so long as we see each such cultural variation as well-suited to its situation; we would each want to act and think as they do in their situation.
But there’s more of a problem when the result of copying the folks around you is to disagree strongly and deeply with all those other humans from all the other human cultures that have existed or ever will. Yet this is what happens when one learns one’s morality from one’s culture, and treats that morality not as a local social convention to coordinate local behavior, but as an absolute moral truth applicable to everyone in all times and places.
In this case the key question is: how can it be a valid inference for members of each culture to hold their culture’s differing estimates of moral truth, especially when they are aware of the very different estimates of other cultures? One might try to explain the variation across cultures over time as rational updating, except that on this theory such changes should roughly follow a random walk, and they usually don’t; nor does this address the huge variation across cultures at any given time.
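To spell out the random-walk claim: a Bayesian’s sequence of credences in any fixed hypothesis forms a martingale, so its changes have no predictable direction. A minimal sketch of this standard derivation, writing $p_t = \Pr(H \mid e_1, \dots, e_t)$ for a believer’s credence in some moral hypothesis $H$ after evidence $e_1, \dots, e_t$:

$$\mathbb{E}\left[\,p_{t+1} \mid e_1, \dots, e_t\,\right] \;=\; \sum_{e} \Pr(e \mid e_1, \dots, e_t)\,\Pr(H \mid e_1, \dots, e_t, e) \;=\; \Pr(H \mid e_1, \dots, e_t) \;=\; p_t.$$

So a steady, predictable drift in moral views over time, rather than a random walk, is just what rational updating rules out.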
To me, the obvious options here are either to limit our moral views to being within-culture claims, which then do not disagree across cultures, or to accept our human superpower of cultural plasticity as a non-Bayesian feature. In the latter case, the changes that happen to most of us as kids when we learn our culture, and then later when we learn about other cultures, are not well modeled as updates of a Bayesian prior on the evidence of our early and then late education.
After all, while Bayesians should draw the same conclusions regardless of the order in which they learn their evidence, in fact the further evidence that we learn later in life about the differing views of other cultures is not typically capable of moving us back to a culturally-neutral position of great uncertainty on moral truths. That earlier evidence in fact counted for far more toward our final views.
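A minimal numerical sketch of that order-independence claim; the likelihood numbers below are made up purely for illustration, and the two pieces of evidence are assumed conditionally independent given the hypothesis:

```python
# Order-independence of Bayesian updating: the posterior on a hypothesis H
# is the same whichever piece of (conditionally independent) evidence
# arrives first. All numbers below are hypothetical.

def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayes update: returns P(H | e) from P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

prior = 0.5                     # a culturally-neutral starting credence
childhood = (0.9, 0.2)          # P(e1 | H), P(e1 | not H): early education
other_cultures = (0.3, 0.7)     # P(e2 | H), P(e2 | not H): later exposure

# Childhood evidence first, then exposure to other cultures...
p_early_first = update(update(prior, *childhood), *other_cultures)
# ...and the reverse order.
p_late_first = update(update(prior, *other_cultures), *childhood)

print(p_early_first, p_late_first)  # both ~0.6585: order doesn't matter
```

A believer whose final views depend heavily on which evidence came first, as ours evidently do, is not updating this way.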
The options here are stark: (A) reject the typical wide scope of moral claims as applying to all times and places, (B) accept that you were not Bayesian in how you picked your moral beliefs, (C) adopt a culturally-neutral position of substantial uncertainty on moral truth, or (D) try to explain why you are an exception, so that even though nearly all humans were not Bayesian in adopting their culture’s views, you and your friends are in fact being Bayesian in adopting your culture’s views on moral truth in great detail.
Added Dec 1: I try to re-express the insight of this post better in this new post.
> in fact the further evidence that we learn later in life about the differing views of other cultures is not typically capable of moving us back to a culturally-neutral position of great uncertainty on moral truths.
You're making the very dubious assumption that a rational person well-informed about morality in different cultures would take a position of great uncertainty. Different historical cultures also have had different views on science, but it is not the case that a rational person well-informed about the cultural history of science would tend towards maximal uncertainty about which science is correct. The fact is, modern science is much closer to the truth than ancient Greek or Aztec understandings of the world, and a rational person would accept that modern science is just mostly right and ancient science was just mostly wrong. It's not rational to be uncertain just because you're exposed to many disagreeing perspectives. It depends on the merit of those perspectives.
Some cultures just have worse morality than others, with e.g. slavery, poverty, political oppression, shortsightedness. Exposure to these cultures just affirms that fact to a rational person.
Highly educated people do tend towards certain moral viewpoints that are different from the general public. This is because those viewpoints are better informed and more consistent and defensible. Not all moral viewpoints are created equal.
Also, you're making an implicit claim that morality is about "power" and "coordination" in a local society, and should be judged by these metrics. That is a very specific and extreme moral position for you to take, which can be used to justify all sorts of atrocities.
You're also implicitly assuming that the degree to which moral reasoning is rational/Bayesian is independent of culture. In fact, some cultures are much more rational, allowing free public dialogue and educating their people in critical thinking, while other cultures are much less rational, relying on accepting the word of authorities and suppressing any dissent. The rational cultures, the freethinking subcultures of Western democracies, do have substantial agreement about moral matters, and a general agreement that the repressive cultures are getting it wrong.
Doesn't this presuppose a framing of morality where there is a true underlying "what is moral", and people have different beliefs about what that thing is? The alternative is to think of morals as a subset of people's goals, and to see that people raised in different environments have different goals.
Perhaps it depends on the meaning of morals "applying well to all times and places":
It could mean that _your behavior_ should follow similar principles across different times and places. For example, if you think people having access to knowledge is universally good, you would support airdropping copies of Wikipedia to all countries, even [especially!] those whose governments restrict it, screw what local cultural norms say.
Alternatively, it could mean some kind of expectation that all people should have moral goals similar to your own, and a mentality that people who have different moral goals are broken and need to be re-programmed, the re-programming itself being the goal, and not just getting others to (even if grudgingly) act in ways that align with your own moral goals. This position seems stranger.
But if we take this framing of morals as goals rather than beliefs, Bayesianism doesn't really seem to apply in either case.