> in fact the further evidence that we learn later in life about the different views of other cultures are not typically capable of moving us back to a culturally-neutral position of great uncertainty on moral truths.
You're making the very dubious assumption that a rational person well-informed about morality in different cultures would take a position of great uncertainty. Different historical cultures also have had different views on science, but it is not the case that a rational person well-informed about the cultural history of science would tend towards maximal uncertainty about which science is correct. The fact is, modern science is much closer to the truth than ancient Greek or Aztec understandings of the world, and a rational person would accept that modern science is just mostly right and ancient science was just mostly wrong. It's not rational to be uncertain just because you're exposed to many disagreeing perspectives. It depends on the merit of those perspectives.
Some cultures just have worse morality than others, with e.g. slavery, poverty, political oppression, shortsightedness. Exposure to these cultures just affirms that fact to a rational person.
Highly educated people do tend towards certain moral viewpoints that differ from those of the general public. This is because those viewpoints are better informed, more consistent, and more defensible. Not all moral viewpoints are created equal.
Also, you're making an implicit claim that morality is about "power" and "coordination" in a local society, and should be judged by these metrics. That is a very specific and extreme moral position for you to take, which can be used to justify all sorts of atrocities.
You're also implicitly assuming that the degree to which moral reasoning is rational/Bayesian is independent of culture. In fact, some cultures are much more rational, allowing free public dialogue and educating their people in critical thinking, and other cultures are much less rational, relying on accepting the word of authorities and suppressing any dissent. The rational cultures (the freethinking subcultures of Western democracies) do have substantial agreement about moral matters, and a general agreement that the repressive cultures are getting it wrong.
Doesn't this pre-assume a framing of morality where there is a true underlying "what is moral", and people have different beliefs about what that thing is? The alternative is to think of morals as being a subset of people's goals, and see that people who are raised in different environments have different goals.
Perhaps it depends on the meaning of morals "applying well to all times and places":
It could mean that _your behavior_ should follow similar principles across different times and places. For example, if you think people having access to knowledge is universally good, you would support airdropping copies of Wikipedia to all countries, even [especially!] those whose governments restrict it, screw what local cultural norms say.
Alternatively it could mean some kind of expectation that all people should have similar moral goals to yourself, and a mentality that people who have different moral goals are broken and need to be re-programmed - the re-programming itself being the goal, and not just getting others to (even if grudgingly) act in ways that align with your own moral goals. This position seems stranger.
But if we take this framing of morals as goals rather than beliefs, Bayesianism doesn't really seem to apply in either case.
If you start out uncertain about your goals, and then you learn more about your goals from your early life education, why wouldn't you continue to learn more about your goals from hearing about how others are educated in other cultures?
Is "learning about your goals" the right frame, or is "being reprogrammed to have other goals" the right frame? Learning is (in standard decision theory) always desirable, being reprogrammed to have different goals is not desirable, unless the reprogramming comes with a capability increase.
If parents and teachers told you that the goals they wanted you to have were "your" goals, why should you just believe them?
Ethics are not culture based. The Bible teaches you the commandments, which is a GREAT help/guidance for humanity; take that away and you probably have an ethical gap. But teaching ethics should be part of the school curriculum, since it seems like the Bible has been talked down especially over the last 3 years.
It's interesting. I believe that I have spent my life carefully updating my sense of right and wrong into something unrecognizable (and indeed unthinkable) to a child version of myself. I'm also aware that I'm bias-ridden and have likely made a lot of bad updates.
I think there is some evidence that most people are better "intuitive Bayesians" in childhood than in adulthood. Maybe it just so happens that by the time most of us are exposed to extra-cultural "evidence" we're no longer that good at updating (moral) beliefs.
To mind comes Gopnik comparing the process of growing up to simulated annealing: childhood is a "high temperature" regime, where updates are large and larger areas of parameter space are explored; adulthood is a "low temperature" regime, allowing only smaller updates to home in on local maxima after the first phase hopefully got us close-ish to the optimal solution. A great strategy for learning to tie shoelaces, not so great for forming ethical systems.
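For concreteness, here is a toy sketch of that annealing picture (my own illustration with an invented objective, not Gopnik's actual model): the "temperature" controls both how far proposals jump and how willing we are to accept worse positions, and both shrink over time.

```python
# Toy simulated annealing for a 1-D maximization problem (illustrative only).
import math
import random

def anneal(objective, x0=0.0, steps=2000, t_start=5.0, t_end=0.01):
    x = best = x0
    for i in range(steps):
        # Geometric cooling: "childhood" (high t) down to "adulthood" (low t).
        t = t_start * (t_end / t_start) ** (i / (steps - 1))
        candidate = x + random.gauss(0, t)        # big moves early, small late
        delta = objective(candidate) - objective(x)
        # Always accept improvements; sometimes accept worse moves while hot.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
        if objective(x) > objective(best):
            best = x
    return best

# A bumpy landscape with many local maxima; early large jumps can escape them.
bumpy = lambda x: math.sin(3 * x) - 0.1 * (x - 2) ** 2
print(anneal(bumpy))
```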
To everything said by others (especially Berder), I have to add that if we demand, as a condition of being Bayesian, that the order of updating not influence the result, then almost _no_ updates most humans make are Bayesian. (See Scott Alexander's "trapped priors" for an extreme example.)
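For what it's worth, the benchmark itself is easy to state: for an ideal Bayesian updating on conditionally independent evidence, each update multiplies the odds by a likelihood ratio, so the order cannot matter. A minimal sketch with invented likelihoods:

```python
# Order-invariance of ideal Bayesian updating (illustrative numbers only).
from functools import reduce

def update(prior, p_e_given_h, p_e_given_not_h):
    """One Bayesian update: returns P(H|E) given P(H) and the two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Each item is (P(E|H), P(E|not-H)) for one piece of independent evidence.
evidence = [(0.9, 0.2), (0.3, 0.6), (0.8, 0.5)]

forward  = reduce(lambda p, e: update(p, *e), evidence, 0.5)
backward = reduce(lambda p, e: update(p, *e), reversed(evidence), 0.5)

print(forward, backward)                 # same posterior either way
assert abs(forward - backward) < 1e-12   # order didn't matter
```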
I'd say those options are not mutually exclusive, because morality consists of many parts and they are not all equally subject to the same forces. So A, B, C, and possibly some D.
There is another way, which is not a choice, but a view which includes discussions about how what-is-under-discussion came about. This may make the choice an unforced error rather than an unavoidable dilemma.
If this is accepted as a possibility and one can avoid evolutionary just-so stories, "the obvious options" may still look obvious but very partial.
The plasticity may look like something else too.
Values are a bit like vowels: constrained by the vocal tract, each language works with and changes a subset of the possible continuum, such that the neutral vowel (the schwa) sounds different from a truly neutral enunciation, and the same sound can be perceived as different vowels in different language groups/accents. Values are constrained by life events (worlding) and by more meta worries that we can call doctrine or, more particularly, 'grammar' (world-building), and they shift around within certain constraints.
While the plasticity does speak of a learning/adaptive space, in which we inherit priors and then go off choosing our way (or our way together, if we chat among ourselves: culture), it also indicates that any vowel, or value, in any language or culture, is an outcome.
This means that there is a selective force to have a moral urge, but not for any particular value (or vowel) within the constraints of survival. In particular, an urge to should on others; i.e. to seek out and meet with others, and learn. If we world well then there will be good outcomes; if we world-build badly then paranoid psychopaths will be given free rein to implement empathy-free death cults (the failing Russian empire is the most egregious current example).
There is no selection for neutrality, for to be alive is to be biased/prior-ed. However, to world well we must deal with the variety of life and accents and values. We are biased 'to world' as much as we live a body; however, cultural outcomes are not directly selected for, only that we should should.
If we can take this on board then we can update our biases.
https://whyweshould.substack.com/about
This is only true if you assign essentially the same weight to the experiences that shape your inherited morality growing up as compared to the experiences that modify it as you get older.
In reality, the lessons that are taught to you as a child are more valuable in three ways:
First, they are the morality of the culture in which you live, and will therefore be useful for navigating the local social dynamics in addition to any inherent value.
Second, they are the assembled wisdom of generations, and therefore have already been adapted to many of the unusual circumstances and edge cases that you would have to navigate around if you made significant alterations. (Admittedly, sometimes by simply accepting injustice in those cases as the cost of doing business, but that is in itself an adaptation.)
Third, they are imparted to you by people who have genetic and cultural reasons to wish for your success and few reasons, beyond a desire for you to agree with them, to lie to you. As anyone who has spent sufficient time on the Internet can attest, this becomes much less true once you have to deal with the broader public. Later lessons in morality will be much more heavily tainted by self-interest and political calculation.
B seems correct. It would be a very strange moral epistemology that took our moral beliefs to be justified on the basis of "updating a Bayesian prior on the evidence of our early education".
Similar remarks will, of course, also apply to our non-moral epistemological beliefs -- such as whether or not to endorse a Bayesian epistemology.
In general, Bayesian updating seems a poor model for understanding a priori justification, and foundational philosophical commitments such as favoring induction over counterinduction, taking green and blue rather than bleen and grue to be projectable predicates, etc.
If we should not accept claims that authorities make later in life merely because they are authorities, why should we treat the claims of early-life authorities differently?
Is there some alternative way for young children to learn about the world?
Many end up with unjustified beliefs. There's no (content-neutral) method that guarantees justified beliefs. All we can do is hope that we are among the lucky ones. (Just think how messed up you would be if raised by Nazis or counter-inductivists!)
See, e.g., Elga's "Lucky to be rational": https://philpapers.org/rec/ELGLTB
I'm not questioning the views of young kids. I'm questioning the views of adults once they have learned about the existence of many different cultures that teach very different moral views.
Many adults, perhaps even most, when they encounter the moral beliefs of a different culture, do change their childhood view of morality. I see this happening on a regular basis with international students from East Asia. They were taught as children, and internalized, the moral goodness of obedience to parents and authority, and the moral necessity of subordinating their own needs to those of their group. They sojourn to North America for university, learn other moral values, and question the validity of what they were taught as children. This is common for immigrants; think of Ayaan Hirsi Ali fleeing Somalia for the Netherlands.
Yeah, I am surprised that they seem to think everyone sticks to (or mostly sticks to, or grants unearned trust in) their childhood beliefs. I live in Utah, where up until this century, pre-internet, it was very insular and Mormon... but exposure to the outside world, through people moving here and through the Internet, is completely destroying LOTS of adult Mormons' worldviews and beliefs; they are leaving in droves. And once the foundation cracks, they absolutely question everything they were taught and have to build up an entirely new set of morals from scratch. This is often a psychologically devastating process that takes years for them to work through.
Agree with you. The assumption of not updating childhood beliefs seems to be a privilege of being born into the dominant / majority culture. Nonetheless, the lasting power of childhood teachings is compelling -- as you say, that is the reason for the psychological difficulty of the updating and revision that Mormons have experienced.
What fraction of significant cultural differences survive the Chesterton's Fence filter? I think I know very well why many cultures are violently opposed to male homosexuality, for example. It has a strong explanation, in terms that don't survive moral scrutiny. And the same is true of my own society's opposition to incest, even in cases where it's guaranteed not to involve abuse or to produce any offspring.
One option here is to say that not only were you not a Bayesian when you learned your moral beliefs as a kid, you are not a Bayesian when it comes to your moral beliefs even today. That is, they are more or less fixed by a mixture of biological and cultural influences, and you are stuck with them as an adult.
If you actually cannot change your beliefs, then of course there is nothing to discuss re whether you should.
I agree, but this hypothesis can help us explain something you mentioned, namely that people do not seem to update their moral beliefs when they learn about the existence of other cultures. I would also add that even if our fundamental moral beliefs are fixed, in the sense that we regard some states of the world as intrinsically better, how to reach those states is something we can have a rational discussion about.
Why would the existence of different cultures give you any reason to change your view, once you have one? The more fundamental issue is that there are multiple *possible* coherent views out there, and no non-question-begging reason to think yours in particular (or even all of humanity's, if all agreed) is correct.
In the end, the best we can do is to have "default trust" in our own starting points and dispositions, even though such trust does not guarantee justification (let alone truth). But it at least gives us a *shot* at both justification and truth, which is more than can be said of the alternative (radical skepticism):
https://www.philosophyetc.net/2009/02/skepticism-rationality-and-default.html
Why should we have "default trust" in the points we were pushed into by our parents and early life cultural authorities? I don't see why we should just pick something at random to believe just so we might get lucky and believe the truth.
Did you read the linked explanation?
It's not so much that you should "pick something at random", but that (i) causal influences will shape your psychology, and (ii) once your psychology has a shape (and associated deep-rooted commitments: pro-induction, anti-suffering, etc.), it's reasonable to maintain default trust in your own psychology and not prefer an amorphous psychology of global skepticism that would have no chance at all of getting anything right.
Think of it this way: suppose you begin life as an amorphous arational blob, incapable of judgment. Your mother is offered a coin-flip: heads you develop to have a rational psychology. Tails you develop to have an irrational psychology. Should she take the offer? Sure! You're not any better-off in the arational state than with an irrational psychology, after all. And at least you have some chance of developing into a proper agent this way.
Later in life, you learn of the above events. Except you never learn the result of the coin-flip. Should you be glad that your mother took the bet? Sure! You (reasonably enough) take yourself to be rational. And even if you can't be absolutely certain of that, you certainly wouldn't be better off as an arational blob incapable of forming any judgments at all.
Now suppose the coinflip is replaced by evolutionary & cultural forces. There are more than just two possible psychologies on offer now. On the other hand, the process is not so "chancy": the forces in question might systematically push in certain directions (towards a certain balance of pro-social and selfish dispositions, perhaps). There's no independent guarantee that those forces would be aligned with objective truth and rationality. But if it turns out that they are so aligned, you needn't regard it as a total coincidence that you (as a product of those forces) turned out to be at least roughly rational (and with roughly true moral and other normative beliefs).
I spell out the full story in this paper (responding to Parfit):
https://philpapers.org/rec/CHAKWM
To be clear, we shouldn't accept the claims of early-life-authorities as authoritative. They provide no evidence. But they (causally) shape our eventual priors. Looking back, from a position of default trust in our own minds, we are forced to think that we were lucky to be shaped as we were. But that would be just as true if our moral beliefs were decided by an explicitly random procedure such as a magic 8-ball, which obviously isn't reliable or justification-conferring. We should think of our early-life authorities in much the same way.
Let's say there are two aspects to morals: evolved norms and reasons. In that case, there would be two types of moral claims, Bayesian ones and non-Bayesian ones.
Hugo Grotius calls the Bayesian ones Natural Law, i.e. very general facts about behaviors that require just a modicum of rational thought and data to see as advantageous or disadvantageous for the people you care about.
Other elements of moral behavior, cultures learn slowly, through many iterations of norms and competition. He calls the convergent evolutions of this process the Laws of Nations. If you play enough "moral rounds" and iterate enough, then more moral facts will be discovered, such as the Rule of Law being superior to special legal privileges for aristocrats.
He doesn't have a word for the random variations that adapt to highly local circumstances.
But does Bayesianism really apply to highly dimensional iterated game theory dilemmas with multiple equilibria? History matters. As does the decision algorithms of other agents. So the order of events matters too!
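As a toy illustration of that last point (my sketch, nothing to do with Grotius): in a pure coordination game, agents who best-respond to observed play lock in whichever convention the arbitrary early rounds happened to favor, so identical payoffs plus different histories yield different stable "norms".

```python
# Path dependence in a pure coordination game (illustrative only).
import random

def converged_convention(rounds=200, seed=None):
    rng = random.Random(seed)
    history = [rng.choice("AB") for _ in range(4)]   # arbitrary early events
    for _ in range(rounds):
        # Best response when payoffs reward matching: copy the majority play.
        majority_a = history.count("A") * 2 > len(history)
        history.append("A" if majority_a else "B")
    return history[-1]

# Same game, same payoffs; only the early history differs across seeds.
print([converged_convention(seed=s) for s in range(8)])  # typically a mix of A and B
```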
and the non-Bayesian ones the Law of Nations.
What do you make of the Golden Rule?
I don't see what's so stark about it. (B) is obviously true as a description of my foundational moral intuitions (=unscrutinized moral beliefs), but scrutiny under roughly Bayesian strategies has led me in the direction of several views (e.g. ethical veganism, EA, scalar consequentialism sans obligation or permissibility, strong support for voluntarist eugenics) that differ radically from both mainstream views in my culture, and my own untutored intuitions as shaped by that culture.
This ignores a few points in modernity's favor.
In the past, views of the way the world works were much more flawed, from physics to biology to astronomy. We now have statistics and epistemology, and the philosophy in the water supply has built on the past.
Having part of the model compromised in this way heavily compromises the model in general.
For example, believing in the 4 humors as opposed to the periodic table of elements may induce one to believe that balancing the humors is an important part of health, and perhaps reveals an important moral truth of the universe as well.
Another advantage is a broader view of things: in the past, awareness of other cultures was much more limited, due to lack of travel, education, literacy, etc., compared to today.
Today there are more people, which means the smartest people are likely smarter than the smartest people in the past, and congregate more easily.
Bias is there to be overlived.
What changes about morality if we just switch the word 'moral' (as in describing something as morally correct) with the word 'rule'?
What does 'moral' mean on a planet with no life? How can it mean anything?
I cannot update beyond my intuition that moral reasoning is arbitrary assertion anchored in social rules, and I would appreciate some help on this.
Wouldn't it be vastly more likely that someone who knows Bayes' rule is acting in a Bayesian way? D seems most likely, given the prevalence of magical thinking in most times by most people.
What, merely being able to write down a math expression for Bayes' Rule ensures that all your beliefs are always updated according to that rule?
Ignoring realists who claim to have some grand theory of moral progress, my suspicion is that a common response from realists might be to argue that many of the supposed differences in moral beliefs across cultures are really just due to differences in circumstances, resulting in people choosing the least bad beliefs while nevertheless having fairly universal intuitions, etc. Examples might include choosing slavery over genocide.
This seems to have a great number of radical conclusions that I've yet to see any realist endorse, so probably isn't a very workable response.