17 Comments

Robin, I'll try and put together a post on that.


Stuart, you seem to be claiming that adopting a position of more uncertain beliefs about morals would be less "coherent" and hence "a vaguer, more random (and probably hence biased) position." You seem to be justifying overconfidence via some other bias that overconfidence avoids - I hope you explain yourself more sometime.


I was predicting a (weak) direction change to moral values - that those moral values dependent on free will assumptions will be weakened once we have specific examples of failing free will, rather than general statements as at present.

I was also predicting that those specific examples will be found.

So why haven't I updated my moral values (much) yet? And, more importantly, why hasn't the world?

I think it's because of the urge to make moral values systematic. I can (claim to) see the direction moral values will take, but I prefer to stay with the current more coherent morality than to move towards a vaguer, more random (and probably hence biased) position.

Once the future is here, and all the brilliant minds have taken the (future) conclusions of science on board and come up with a coherent, strong system, I'll be ready to embrace it (completely or in part).


Dave and Joseph, the trends you point to may indeed help drive changes in common morality.

Stuart, I have been talking about predicting the direction of changes to moral opinion relative to current opinion. Yes of course we can predict new info might produce some changes, and so predict a variance of opinion relative to current opinion.


There are two major ways new info can affect morality: if the new info undermines some major assumption (as "the earth goes round the sun" eventually undermined "we are the center of the universe"), or if the new info causes practical changes that make some marginal morality more viable (trade rather than war as a better path to riches, organic farming once we have the knowledge (and money) to make do without pesticides, etc...).

Let's try two future predictions, one of each type. And I'll try and argue that both of them allow us to predict future moral trends that will depend on information we don't have.

1) Future discoveries about the brain cause us to reassess our understanding of free will and freedom.

2) Better meat substitutes will push us more towards vegetarianism.

The second prediction is the least interesting - it changes our morality only in practical ways, so probably doesn't meet Robin's criteria (though it is an example of how today we try and predict future changes based on future info).

The first case is more challenging. The current philosophy that seems to come out of neuroscience is that we have no free will - a useless position, since it isn't testable. It's also useless philosophically, since just saying "you have no free will" doesn't provide any guidance on how to behave, and is so absolute it's generally ignored.

But we will soon be able to show in precisely which areas we have little or no free will, and a better understanding of what substitutes for free will in those cases (character, impulses, outside manipulation). Since so much of our morality is dependent on the free will assumption, I can confidently predict that this future info will change our morality in these specific areas to a large extent - and since free will is a rather absolutist concept, it will be undermined by these specific examples, and this will result in a general moral change.

Would these examples militate against the claim that "the ability to substantially predict the future of morality would be a strong argument against morality changes being due to info we learn"?


In addition to trends toward greater wealth and longer lives, there is also a probable trend toward greater populations. What effect would that have?

Past increases in population size had an effect. For example, a city state with a few thousand citizens could handle politics by town meetings. A bigger society can't. Central planning tends to break down in bigger societies.

At present, in a world of a few billion humans, there are plausible-sounding calls for the currently most-powerful nation to right every wrong (e.g., "We can stop the Darfur famine.") and plausible-sounding claims that picking and choosing between wrongs to right is hypocritical. I suspect that in a Dyson sphere similar ideas would be regarded as obvious nonsense.


Nick, yes, a few people with unusually useful info always have the potential to forecast better than the average person. Of course it is always suspicious when a large fraction of people think themselves to be in such a situation.


Suppose there is enough info to say with reasonable probability that moral view X is true (e.g. we should treat animals better than we do). But suppose people are to varying degrees biased against X. A few people have overcome their biases sufficiently to accept X. They might then predict that since X is true, more evidence for X will accumulate over time; and also that, as more evidence accumulates, it will grow strong enough to overcome increasingly strong biases against X; so the popularity of X can be predicted to grow over time. In this scenario, the predicted change in morality will be due to new information.


Eliezer, sure if you have special info that few others have you might be able to predict better. But people who know enough history to see a past trend will surely think to project that trend into the future, as Hal and Illya did.


I don't disagree with the overall idea here; but if you think that it can be rational to disagree with the majority on factual issues, you can expect a majority moral shift based on the majority receiving new factual information. E.g. the morality of burying your dead - if you think cryonics will work, then you can rationally expect a hell of a lot of recriminations once people realize it would have worked.

Info has no trend only among those who know that info has no trend. Here we are, looking at the trend of moral shifts and trying to anticipate them - that's why info should have no trend for us. If the majority hasn't gotten that far in their reasoning, they may end up with moral trends.


So if moral trends really do exist and have predictive value, it is an indication that morality is not rational.

This would show that conventionalism - the metaethical view that values are determined by (nothing more than) societal approval or endorsement - implies that morality is not rational (if "rational" picks out the idea that values have a non-conventional basis). This is not the same thing as saying that morality itself is not rational, unless you first assume the truth of conventionalism.


Actually I do agree with Robin's comments, on thinking about them. For morality to be predictable it has to be somewhat arbitrary.

What I was doing was something like "technical analysis", the practice some financial traders have of looking at charts for patterns and extrapolating them forward. Historically, we have had certain trends going on for the past few hundred years, so I was looking at what would happen if they continued. It seemed to me that the article I was responding to implied that trends were more likely to be reversed, which is also possible - the "pendulum theory". But I don't think we have seen a lot of pendulum action on moral questions.

Technical analysis is controversial and there are arguments that it can't work. OTOH it can be a self-fulfilling prophecy if enough people are doing it and looking for the same patterns, so when they see a BUY signal they all buy. This kind of justification can't apply to moral trends, though. So if moral trends really do exist and have predictive value, it is an indication that morality is not rational.


What about morality reversals, such as Prohibition?

I would say a reversal on abortion is far more likely than animal rights. New information would be the early viability of fetuses and revelations of the "realness" and "humanity" of a fetus by imaging technology.


The problem might be that "new info" sometimes leads to changes in moral beliefs and at other times leads to changes in our behavior so that we behave in accordance with our moral beliefs (given, e.g., the discovery that our behavior is having an effect we believe to be morally undesirable). (But in the former case, where some "trendless" info leads us to revise fundamental moral beliefs, we must not have believed them very strongly to begin with...)


Matt, I understand us to be talking about the relative importance of new info for typical changes in common moral beliefs.


I'm losing my grip on whether the question here (and in some of the other recent discussions) is about whether moral progress is possible or whether future moralities always will show "improvement." Couching the question "Do we have any foolish moral beliefs?" in terms of what future people will think isn't obviously helpful, since there's no reason to assume that people of the future will be infallible moral thinkers, either...
