Info Has No Trend

Over at Volokh Conspiracy, Ilya Somin takes to heart Hal Finney’s suggestion that:

If you can make a case for progress even acknowledging that in the future your own practices may be seen as savage and appalling, you are much less likely to be manifesting self-satisfaction bias.

Ilya takes up the challenge:

I see at least three areas where there is a good chance of this happening: Animal Rights. … The Death Penalty. … Forced Labor. … If I am right about these predictions, should I revise any of my current moral views? … I am unmoved in my opposition to forced labor. … its increasing acceptance will say little about its rightness. I am less certain about the death penalty. … the fact that so many others are turning against it despite the lack of a clear self-interested or other biased reason for doing so does give me some pause. … I am least confident [regarding] animal rights. … My position is at least in part the result of a strong self-interested bias of my own: I like to eat meat. 

Hal similarly forecasts:

A more straightforward extrapolation to a future which is even more protective of powerless groups. Animal rights would expand; perhaps keeping pets will be seen as harmful oppression. … Children’s rights are another area of growth;

Commenting at Volokh Conspiracy, Friedrich Foresight quotes GK Chesterton (1904):

The way the prophets of the twentieth century went to work was this. They took something or other that was certainly going on in their time, and then said that it would go on more and more until something extraordinary happened.

Let me repeat my comment to Hal (endorsed by Eliezer):

The ability to substantially predict the future of morality would be a strong argument against morality changes being due to info we learn, just as the ability to predict future stock prices would argue against stock price changes being due to info. So you have to imagine a full range of possible future moralities, in all the imaginable directions, and then ask yourself if you would on average accept the future’s differing judgment, whichever way it went. If not, you don’t really believe that moral changes are mainly due to info.

Ilya and Hal both think they can forecast the same morality trends, but widely known long-term morality trends are not consistent with morality changes being mainly due to new info.  Communication delays might let a minority know about a trend for a short while, but the longer the trend and the more who know, the less this story works. 

This analysis applies to changes in music, clothes, work, mating, language, or pretty much any common behavior or attitude.  Info has no trend, so if you see trends, something besides info is at work.
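The stock-price analogy can be made precise. If beliefs move only on new info, they form a martingale: the current belief already equals the expected future belief, so belief changes have no predictable direction. Here is a minimal Bayesian sketch of that fact (all the numbers and names are illustrative, not from the post): averaged over many possible worlds, the posterior after new evidence sits right at the prior.

```python
import random

random.seed(42)

PRIOR = 0.5       # prior belief that the coin is biased toward heads
P_BIASED = 0.8    # heads probability if the "biased" hypothesis is true
P_FAIR = 0.5      # heads probability if the "fair" hypothesis is true
N_RUNS = 20000
N_FLIPS = 5

def posterior(flips):
    """Posterior probability of 'biased' after seeing a list of flips."""
    like_biased, like_fair = 1.0, 1.0
    for heads in flips:
        like_biased *= P_BIASED if heads else 1 - P_BIASED
        like_fair *= P_FAIR if heads else 1 - P_FAIR
    return PRIOR * like_biased / (PRIOR * like_biased + (1 - PRIOR) * like_fair)

total = 0.0
for _ in range(N_RUNS):
    biased = random.random() < PRIOR                      # nature picks a hypothesis
    p = P_BIASED if biased else P_FAIR
    flips = [random.random() < p for _ in range(N_FLIPS)] # new info arrives
    total += posterior(flips)

avg_posterior = total / N_RUNS
# Averaged over many worlds, the updated belief sits at the prior:
# info-driven belief changes have no predictable trend.
print(avg_posterior)
```

Individual runs swing far from 0.5 in both directions; only the average is pinned to the prior, which is exactly the "no trend" property.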

If not info, what?  Wealth and lifespan have trended up.  Perhaps the morality of the old rich differs from that of the young poor.

  • Dave

    Lifespan and wealth trends probably dominate, but I would add abstractness of our interactions with the physical world (% employed in agriculture/resource-extraction/manufacturing vs. services, urbanization), cost of communications and transport (which have dropped far more quickly than wealth has risen), and specialization of employment as secondary drivers of morality changes.

    I see no reason to believe we have any more (or less) information about morality than our ancestors, which is probably why I lean libertarian. I always figured that conservatives figured our ancestors were more moral than us and liberals figured they were less moral, neither of which made any sense to me. Mostly we’re just more powerful than our ancestors, which changes the means but not the ends.

  • I’m losing my grip on whether the question here (and in some of the other recent discussions) is about whether moral progress is possible or whether future moralities always will show “improvement.” Couching the question “Do we have any foolish moral beliefs?” in terms of what future people will think isn’t obviously helpful, since there’s no reason to assume that people of the future will be infallible moral thinkers, either…

  • Matt, I understand us to be talking about the relative importance of new info for typical changes in common moral beliefs.

  • The problem might be that “new info” sometimes leads to changes in moral beliefs and at other times leads to changes in our behavior so that we behave in accordance with our moral beliefs (given, e.g., the discovery that our behavior is having an effect we believe to be morally undesirable). (But in the former case, where some “trendless” info leads us to revise fundamental moral beliefs, we must not have believed them very strongly to begin with…)

  • Brian

    What about morality reversals, such as Prohibition?

    I would say a reversal on abortion is far more likely than animal rights. New information would be the early viability of fetuses and revelations of the “realness” and “humanity” of a fetus by imaging technology.

  • Actually I do agree with Robin’s comments, on thinking about them. For morality to be predictable it has to be somewhat arbitrary.

    What I was doing was something like “technical analysis”, the practice some financial traders have of looking at charts for patterns and extrapolating them forward. Historically, we have had certain trends going on for the past few hundred years, so I was looking at what would happen if they continued. It seemed to me that the article I was responding to implied that trends were more likely to be reversed, which is also possible – the “pendulum theory”. But I don’t think we have seen a lot of pendulum action on moral questions.

    Technical analysis is controversial and there are arguments that it can’t work. OTOH it can be a self-fulfilling prophecy if enough people are doing it and looking for the same patterns, so when they see a BUY signal they all buy. This kind of justification can’t apply to moral trends, though. So if moral trends really do exist and have predictive value, it is an indication that morality is not rational.
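Hal’s “technical analysis” analogy can be simulated directly. On a price series driven purely by new information, i.e. a random walk, a simple moving-average trend rule earns roughly nothing on average; only if prices (or moral opinions) have some non-info component can trend extrapolation pay. A small illustrative sketch (all parameters are made up for the example):

```python
import random

random.seed(1)

def random_walk(n, start=100.0):
    """A price series driven purely by new info: iid up/down steps."""
    price, path = start, []
    for _ in range(n):
        price += random.choice([-1.0, 1.0])
        path.append(price)
    return path

def trend_rule_profit(path, window=20):
    """Hold for one step whenever price sits above its moving average."""
    profit = 0.0
    for t in range(window, len(path) - 1):
        moving_avg = sum(path[t - window:t]) / window
        if path[t] > moving_avg:              # the 'trend is up' signal
            profit += path[t + 1] - path[t]   # capture the next step
    return profit

n_walks = 2000
avg_profit = sum(trend_rule_profit(random_walk(200)) for _ in range(n_walks)) / n_walks
# On a true random walk the signal has no edge: average profit is ~ 0.
print(avg_profit)
```

Any single walk can show a handsome (or disastrous) result, which is why chart patterns look compelling in hindsight; the average across walks is what exposes the lack of an edge.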

  • So if moral trends really do exist and have predictive value, it is an indication that morality is not rational.

    This would show that conventionalism – the metaethical view that values are determined by (nothing more than) societal approval or endorsement – implies that morality is not rational (if “rational” picks out the idea that values have a non-conventional basis). This is not the same thing as saying that morality itself is not rational, unless you first assume the truth of conventionalism.

  • I don’t disagree with the overall idea here; but if you think that it can be rational to disagree with the majority on factual issues, you can expect a majority moral shift based on the majority receiving new factual information. E.g. the morality of burying your dead – if you think cryonics will work, then you can rationally expect a hell of a lot of recriminations once people realize it would have worked.

    Info has no trend only among those who know that info has no trend. Here we are, looking at the trend of moral shifts and trying to anticipate them – that’s why info should have no trend for us. If the majority hasn’t gotten that far in their reasoning, they may end up with moral trends.

  • Eliezer, sure if you have special info that few others have you might be able to predict better. But people who know enough history to see a past trend will surely think to project that trend into the future, as Hal and Ilya did.

  • Suppose there is enough info to say with reasonable probability that moral view X is true (e.g. we should treat animals better than we do). But suppose people are to varying degrees biased against X. A few people have overcome their biases sufficiently to accept X. They might then predict that since X is true, more evidence for X will accumulate over time; and also that, as more evidence accumulates, it will grow strong enough to overcome increasingly strong biases against X; so the popularity of X can be predicted to grow over time. In this scenario, the predicted change in morality will be due to new information.

  • Nick, yes, a few people with unusually useful info always have the potential to forecast better than the average person. Of course it is always suspicious when a large fraction of people think themselves to be in such a situation.

  • In addition to trends toward greater wealth and longer lives, there is also a probable trend toward greater populations. What effect would that have?

    Past increases in population size had an effect. For example, a city state with a few thousand citizens could handle politics by town meetings. A bigger society can’t. Central planning tends to break down in bigger societies.

    At present, in a world of a few billion humans, there are plausible-sounding calls for the currently most-powerful nation to right every wrong (e.g., “We can stop the Darfur famine.”) and plausible-sounding claims that picking and choosing between wrongs to right is hypocritical. I suspect that in a Dyson sphere similar ideas would be regarded as obvious nonsense.

  • Stuart Armstrong

    There are two major ways new info can affect morality: the new info can undermine some major assumption (as “the earth goes round the sun” eventually undermined “we are the center of the universe”), or it can cause practical changes that make some marginal morality more viable (trade rather than war as a better path to riches, organic farming once we have the knowledge (and money) to make do without pesticides, etc…).

    Let’s try two future predictions, one of each type. And I’ll try and argue that both of them allow us to predict future moral trends that will depend on information we don’t have.
    1) Future discoveries about the brain cause us to reassess our understanding of free will and freedom.
    2) Better meat substitutes push us further towards vegetarianism.

    The second prediction is the less interesting – it changes our morality only in practical ways, so probably doesn’t meet Robin’s criterion (though it is an example of how today we try to predict future changes based on future info).

    The first case is more challenging. The current philosophy that seems to come out of neuroscience is that we have no free will – a useless position, since it isn’t testable. It’s also useless philosophically, since just saying “you have no free will” doesn’t provide any guidance on how to behave, and is so absolute it’s generally ignored.

    But we will soon be able to show in precisely which areas we have little or no free will, and a better understanding of what substitutes for free will in those cases (character, impulses, outside manipulation). Since so much of our morality is dependent on the free will assumption, I can confidently predict that this future info will change our morality in these specific areas to a large extent – and since free will is a rather absolutist concept, it will be undermined by these specific examples, and this will result in a general moral change.

    Would these examples militate against “The ability to substantially predict the future of morality would be a strong argument against morality changes being due to info we learn”?

  • Dave and Joseph, the trends you point to may indeed help drive changes in common morality.

    Stuart, I have been talking about predicting the direction of changes to moral opinion relative to current opinion. Yes of course we can predict new info might produce some changes, and so predict a variance of opinion relative to current opinion.

  • Stuart Armstrong

    I was predicting a (weak) direction change to moral values – that those moral values dependent on free will assumptions will be weakened once we have specific examples of failing free will, rather than general statements as at present.

    I was also predicting that those specific examples will be found.

    So why haven’t I updated my moral values (much) yet? And, more importantly, why hasn’t the world?

    I think it’s because of the urge to make moral values systematic. I can (claim to) see the direction moral values will take, but I prefer to stay with the current more coherent morality than to move towards a vaguer, more random (and probably hence biased) position.

    Once the future is here, and all the brilliant minds have taken the (future) conclusions of science on board and come up with a coherent, strong system, I’ll be ready to embrace it (completely or in part).

  • Stuart, you seem to be claiming that adopting a position of more uncertain beliefs about morals would be less “coherent” and hence “a vaguer, more random (and probably hence biased) position.” You seem to be justifying overconfidence via some other bias that overconfidence avoids – I hope you explain yourself more sometime.

  • Stuart Armstrong

    Robin, I’ll try and put together a post on that.