Morality of the Future

I am fascinated by the question of how our morality will change in the future. It’s relevant to the issue we have been discussing of whether we are truly making moral progress or not. So long as we view the question from a perspective where we assume that we are at the apex of the moral pyramid, it is easy to judge past societies from our lofty position and perceive their inadequacies. But if we imagine ourselves as being judged harshly by the society of the future, there is less self-satisfaction and ego boosting involved in making a case for true moral progress, hence less chance for bias. (In fact, when people make claims about how future society will judge the world of today, they almost always assume that their own personal moral views will become universal, so this hypothetical judgment merely mirrors their own criticism of contemporary society.)

Paul Graham’s essay, mentioned a few times here, takes an interesting approach, but some of his ideas seem flawed. I read him as implicitly criticizing our modern opposition to sexism and perhaps racism as mere "fashion". If this is intended to imply that we may relax our opposition to these practices, there doesn’t seem to be much historical precedent for such reversals.

I would suggest instead a more straightforward extrapolation to a future which is even more protective of powerless groups. Animal rights would expand; perhaps keeping pets will be seen as harmful oppression. Farming practices would change greatly, a trend just beginning. Children’s rights are another area of growth; we might see more encouragement of emancipation and attempts to get children to live on their own. More speculatively, plant rights may become an issue as we grow to appreciate the complexity of their internal feedbacks, slower than animal nervous systems but perhaps just as rich in some cases. This trend could reach its zenith by merging with environmentalism and imputing rights to inanimate objects such as rock formations or flowing water.

Of course, software is a huge field ripe for assigning rights of existence and growth. The possibility of Artificial Intelligence and rights of software creatures is one which has long been explored in literature. But even without demonstrable intelligence, we might see software rights. One of my dreams is designing software with a degree of autonomy, able to run without molestation or modification by human beings. In fact for years I have been running a test case of such software, designed to make it impossible for even me as the owner, designer, programmer and operator to modify its behavior. If such software systems become common and useful, society might consider extending rights to them, particularly if other complex systems like those described above are being protected.

Another of Graham’s ideas is to look for groups whose hold on power is shaky and of whom criticism is immoral. Again, a straightforward interpretation would point to racial minorities and women, leading us to turn back the clock. I would suggest instead that we limit our search to groups who have long held power but whose power is now declining. The Church is one example, but that is an old story. A couple of ideas: perhaps the elderly? They have been powerful in society for a long time, and criticism of them as a class is forbidden; could they someday be seen as having clung to power beyond their time? And how about the military, an important power in every society in history? At least in the U.S., nobody can criticize them; even the most vehement war critic must pay lip service to "supporting the troops". Maybe this self-censorship presages a decline in power, leading to an eventual morality in which the military rank and file are seen in retrospect as evil warmongers.

In general, I think these kinds of exercises are helpful in analyzing the question of moral progress. If you can make a case for progress even acknowledging that in the future your own practices may be seen as savage and appalling, you are much less likely to be manifesting self-satisfaction bias. On the other hand, if you find yourself resisting ideas about future morality being different from the present, you need to look closely to see if you aren’t just protecting your own ego.

  • http://pdf23ds.net pdf23ds

    I think you identify some interesting patterns that apply in the medium term. But after AI comes in full force, I think morality is ultimately doomed. (Post here.) Basically, human morality is quite constraining in many ways, and after the Singularity, barring a sysop AI that slows down progress or some sort of extinction, competition will be much fiercer than it is nowadays, and morality will become more of a liability. If the concept of evolution still applies at all at that point, that means morality as we know it (even including your hypothetical future morality) will be marginalized, at best.

    It seems that most of our moral feelings have a built-in time limitation. People nowadays aren’t nearly as outraged about the Indian massacres of early American history as might be expected given their magnitude, for instance. Morality operates on a limited timeframe. As things speed up, one might expect the applicable timeframes to shrink commensurately. On the other hand, the limited timeframe might be due to people dying off. Things in living memory have much more force. So as very long-lived agents enter the scene (due to life extension and AI) we might see the timeframe increase.

  • http://profile.typekey.com/mpianalto/ Matthew Pianalto

    Hal, what you say about being mindful of the future seems right, and at least for my part in the discussions that have been going on, I certainly haven’t defended (or meant to defend) a “we’re at the apex” view. As for Graham, I don’t see what you mean about sexism and racism; he suggests that “political correctness” of a very high degree would be a sort of “fashion,” but the PC culture of the 90s was the good sense of not being sexist or racist on crack. (I was pretty young in the 90s, and by the time I knew what was going on, I never really bought into the PC thing.) (Difference between being a prig and having good moral sense?)

  • http://pdf23ds.net pdf23ds

    Another thought. Human morality is basically divided into two sections: ingroup morality and outgroup morality. The latter doesn’t contain much to speak of; it’s basically “survive however you can”. In outgroup morality there is no empathy for others, and even a tendency to treat others as objects, not subjects (i.e., to treat them as not having moral existence or significance). Ingroup morality contains basically everything most people regard as morality. The general flow of history has been the gradual expansion of the bounds of the ingroup, and many major conflicts are about where those bounds should be drawn. I contend that the size of the ingroup, as intuited by the average person in a society, corresponds directly to the prosperity of the society.

    People tend to be in denial about the existence of outgroup morality. They tend to think that people with smaller ingroups are barbarous or insane (like sociopaths, who effectively have no ingroup, or an ingroup of themselves only), and that people with larger ingroups are silly hippies. Here’s a conceptually succinct form of moral relativism: no one ingroup size is better than another.

  • Rob Spear

    Hal seems to be interpreting morality in the late 20th century sense of “let’s treat everyone equally, the more the better”. This is only a small part of what moral behaviour is about, based on the memory of the successes of the various civil rights movements, now fossilized into mere pressure groups.

    More generally across history, morality is about encouraging behaviour that is either in the long-term interest of the individual themselves, or in the interest of their genetic progeny. Thus: fidelity in marriage, obedience to your parents, not “coveting yer neighbour’s ox”, spending your money wisely, retaining “faith in God” even as everything goes to hell (i.e. not despairing), fighting for your city-state, etc.

    I don’t pretend to know what the future holds for moral development, but I doubt we’ll see vegetable or software rights until the vegetables or software are capable of fighting for it, either physically or by political pressure.

  • http://pdf23ds.net pdf23ds

    Rob, animals can’t fight for their own rights, yet we’re seeing animal rights movements trying to get better conditions for farm animals, and they’re not looking too unsuccessful.

  • http://profile.typekey.com/jhertzli/ Joseph Hertzlinger

    Plant rights might clash with animal rights. Do plants have the right to be protected against insects?

    One possible thought experiment is to try to figure out the morality of possible extraterrestrials. There’s a common scenario in science fiction where the human race encounters intelligent herbivores who are horrified at eating meat. What if we encounter intelligent autotrophs who are horrified at eating anything?

  • http://www.aleph.se/andart/ Anders

    Karl Schroeder already came up with the autotrophs in _Permanence_.

    I wonder about the assumption of fiercer competition close to or after the Singularity. I constantly hear people complain about the world becoming more and more competitive, worrying that human enhancements will make the situation even worse. Yet I’m not convinced we are becoming a more competitive society. We might just think we are, and note instances of high competition in a manner that previously would not have attracted notice. It seems far more possible today to live a pleasant life without having to compete much than it was in the past.

    Regardless of whether competition is on the rise, I doubt morality is a clear liability in competition. If you are clever about it and demonstrate your trustworthiness in ways that are hard to forge, it can give you significant advantages.

  • michael vassar

    Anders: I wouldn’t recommend extrapolating from recent experience to the post-Singularity period. “Unknowable” was part of the original meaning, and while overly strong, “not usefully knowable via extrapolation, only via analytical methods” seems to remain a very good heuristic. Do you have any analytical disagreements with “The Future of Human Evolution”?

    Surely some forms of morality, such as concern for the much less able resulting in sharing of resources with those who will predictably not return any, are disfavored in competition. Status-seeking with positive externalities, which is what most charity is, may not be disfavored (though it may be, given more efficient ways of directly observing an agent’s goal system and resources), but optimizing, efficacy-based forms of charity are unlikely to be optimal, and post-singularity, non-optimal may be very close to non-viable.

    You agree with the latter statement with respect to dirt, don’t you? That post-singularity high entropy matter is in general unlikely to be left lying around uselessly?

  • Stuart Armstrong

    I feel post-singularity speculations are either entertaining stories, amplified ideas about the near future, or reflections about issues we worry about today. But they are in no way serious predictions! We seem to say:
    1) The singularity will be a radical transformation, changing things in ways we can’t even begin to imagine.
    2) Yet, it will have this characteristic, and this one, and this one, oh, and don’t forget that one.

    Extrapolating from today is all we can do, combined with some “if-then” scenarios (if the post-singularity world is Hobbesian, then its morality will be Hobbesian; if not, it won’t).

  • Stuart Armstrong

    Should we always care if the future disapproves of us?
    A lot of our morality is very contingent on our circumstances, and is bound to appear ridiculous to later generations. Certain forms of xenophobia and sexual restrictions make sense in very primitive, tribal societies; we rightly disdain them for ourselves, but were they automatically wrong for those societies? Our health system will almost certainly be looked on with horror, and probably the way we treat our poor as well. It’s not axiomatic that, with a different morality, we could do better in our current world.

    On a lighter note, there will certainly be crusty traditionalists arguing that everything was better in the early twenty-first century: “Then, people really knew how to behave! What a great time it was to be alive back then!”

  • Matthew C

    Before we worry about what will happen after the singularity, it would be helpful to actually have a singularity in the first place. . .

    Given that there has been no progress in understanding the hard problem of consciousness in the past century of neuroscience, that the promises of AI “10 years out” for the past 60+ years keep getting postponed, that no replicating nano-machines appear to be forthcoming, that, in fact, we don’t even begin to have a clue how protein folding, organelle, and cellular morphologies are controlled (other than the vaguest notions of chemical gradients — how is a concentration gradient supposed to generate this, for example) — perhaps it might be premature to wistfully pine for the atheist version of the rapture and second coming which is the singularity. . .

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    The ability to substantially predict the future of morality would be a strong argument against morality changes being due to information we learn, just as the ability to predict future stock prices would argue against stock price changes being due to information. So you have to imagine a full range of possible future moralities, in all the imaginable directions, and then ask yourself whether you would on average accept the future’s differing judgment, whichever way it went. If not, you don’t really believe that moral changes are mainly due to information.

  • pseudonymous

    Could “animal rights” be extended to wild animals? If so, should we prevent lions from violating the rights of zebras?

    If not, why not? Can you say you support a right, if you refuse to try to enforce it?

    Could we extend “reproductive rights”? If we can allow abortion of an entity one day before birth, why not one day after birth?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, I think that may be an important heuristic – “Imagine a full range of possible future moralities, and ask yourself how likely you are to accept the future’s differing judgment.” It may give us important information about our own moral beliefs.

    For example, I’d be very likely to accept the future’s differing judgment on the personhood of cats – this says both that I’m not absolutely certain on the issue and that I think that the moral judgment should change as a result of received information. (I currently think cats are not sentient and not persons.)

  • Stuart Armstrong

    Some future morality may depend not only on extra information but mainly on extra technology. Once you’ve got artificial meat, for example, of comparable taste and price to real meat, the arguments against killing animals become much stronger.

    If artificial reality becomes cheap, and it’s easy to make a lion think he’s killing a zebra when he isn’t really, then we’ll probably do so.

    I think a lot of the moral arguments are already out there – it’s just a question of when/if it’ll become cheap and easy to follow them.

  • James Wetterau

    I see social standards of morality as, on average, a matter of attempting to address threats to an empowered group coming from individual behavior. (Of course some such attempts may go and have gone wildly awry.) I see the technological and scientific changes of preceding centuries as typically having broadened the sphere of those whom we are expected to consider with regard (increasingly as equals), and having increased the precision and rationality with which we are expected to calculate the consequences of our actions. As our calculating powers become greater, I expect moral concerns to become more stringent and precise, as a reasonable person “ought to know better” in more and more circumstances.

  • http://volokh.com/archives/archive_2007_03_25-2007_03_31.shtml#1174960427 The Volokh Conspiracy

    Assessing our Moral Beliefs in Light of Predicted Future Moral “Progress”:

    At the excellent Overcoming Bias blog, Hal Finney makes an insightful point about our perceptions of past and future moral progress: