17 Comments

Assessing our Moral Beliefs in Light of Predicted Future Moral "Progress":

At the excellent Overcoming Bias blog, Hal Finney makes an insightful point about our perceptions of past and future moral progress:

...


I see social standards of morality as, on average, a matter of attempting to address threats to an empowered group coming from individual behavior. (Of course, some such attempts may go, and have gone, wildly awry.) I see the technological and scientific changes of preceding centuries as typically having broadened both the sphere of those whom we are expected to consider with regard (increasingly as equals) and the precision and rationality with which we are expected to calculate the consequences of our actions. As our calculating powers become greater, I expect moral concerns to become more stringent and precise, since a reasonable person "ought to know better" in more and more circumstances.


Some future morality may depend not just on extra information but mainly on extra technology. Once you've got artificial meat, for example, of comparable taste and price to real meat, the arguments against killing animals become much stronger.

If artificial reality becomes cheap, and it's easy to make a lion think he's killing a zebra when he isn't really, then we'll probably do so.

I think a lot of the moral arguments are already out there - it's just a question of when/if it'll become cheap and easy to follow them.


Robin, I think that may be an important heuristic - "Imagine a full range of possible future moralities, and ask yourself how likely you are to accept the future's differing judgment." It may give us important information about our own moral beliefs.

For example, I'd be very likely to accept the future's differing judgment on the personhood of cats - this says both that I'm not absolutely certain on the issue and that I think that the moral judgment should change as a result of received information. (I currently think cats are not sentient and not persons.)


Could "animal rights" be extended to wild animals? If so, should we prevent lions from violating the rights of zebras?

If not, why not? Can you say you support a right, if you refuse to try to enforce it?

Could we extend "reproductive rights"? If we can allow abortion of an entity one day before birth, why not one day after birth?


The ability to substantially predict the future of morality would be a strong argument against moral changes being due to information we learn, just as the ability to predict future stock prices would argue against stock-price changes being due to information. So you have to imagine a full range of possible future moralities, in all the imaginable directions, and then ask yourself whether you would on average accept the future's differing judgment, whichever way it went. If not, you don't really believe that moral changes are mainly due to information.
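The stock-price analogy can be made concrete. A Bayesian's current credence equals the expected value of their future credence (conservation of expected evidence), so belief changes driven purely by information have no predictable direction. Here is a minimal Python sketch of that martingale property; the prior and likelihood numbers are illustrative assumptions, not anything from the comment:

```python
import random

# A sketch of "conservation of expected evidence": if belief change is
# driven purely by information, today's belief already equals the
# expectation of tomorrow's belief, so its direction cannot be predicted.
# All numbers below (prior, likelihoods) are illustrative assumptions.

PRIOR = 0.6            # current credence P(H) in some moral proposition H
P_E_GIVEN_H = 0.8      # chance of observing evidence E if H is true
P_E_GIVEN_NOT_H = 0.3  # chance of observing E if H is false

def bayes_update(prior, saw_e):
    """Posterior P(H) after observing E (saw_e=True) or not-E (saw_e=False)."""
    if saw_e:
        num = P_E_GIVEN_H * prior
        den = num + P_E_GIVEN_NOT_H * (1 - prior)
    else:
        num = (1 - P_E_GIVEN_H) * prior
        den = num + (1 - P_E_GIVEN_NOT_H) * (1 - prior)
    return num / den

random.seed(0)
trials = 100_000
total = 0.0
for _ in range(trials):
    h = random.random() < PRIOR   # sample a world where H is true or false
    e = random.random() < (P_E_GIVEN_H if h else P_E_GIVEN_NOT_H)
    total += bayes_update(PRIOR, e)  # tomorrow's credence in that world

print(f"prior: {PRIOR:.3f}  mean posterior: {total / trials:.3f}")
# The mean posterior converges to the prior: information moves beliefs,
# but never in a direction you can predict in advance.
```

Running it prints a mean posterior matching the prior (about 0.600), which is the sense in which a confidently predictable direction of moral change would tell against an information-driven account.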


Before we worry about what will happen after the singularity, it would be helpful to actually have a singularity in the first place...

Given that there has been no progress in understanding the hard problem of consciousness in the past century of neuroscience, that the promises of AI "10 years out" have kept getting postponed for 60+ years, that no replicating nano-machines appear to be forthcoming, and that, in fact, we don't even begin to have a clue how protein folding, organelle, and cellular morphologies are controlled (beyond the vaguest notions of chemical gradients; how is a concentration gradient supposed to generate this, for example?), perhaps it is premature to wistfully pine for the singularity, the atheist version of the rapture and second coming...


Should we always care if the future disapproves of us? A lot of our morality is very contingent on our circumstances, and is bound to appear ridiculous to later generations. Certain forms of xenophobia and sexual restrictions make sense in very primitive, tribal societies; we rightly disdain them for ourselves, but were they automatically wrong back then? Our health system will almost certainly be looked on with horror, and probably also the way we treat our poor. It's not axiomatic that, with a different morality, we could do better in our current world.

On a lighter note, there will certainly be crusty traditionalists arguing that everything was better in the early twenty-first century: "Then, people really knew how to behave! What a great time it was to be alive back then!"


I feel post-singularity speculations are either entertaining stories, amplified ideas about the near future, or reflections of issues we worry about today. But they are in no way serious predictions! We seem to say:

1) The singularity will be a radical transformation, changing things in ways we can't even begin to imagine.

2) Yet, it will have this characteristic, and this one, and this one, oh, and don't forget that one.

Extrapolating from today is all we can do, combined with some if-then scenarios (if the post-singularity world is Hobbesian, then its morality will be Hobbesian; if not, it won't).


Anders: I wouldn't recommend extrapolating from recent experience to the post-Singularity period. "Unknowable" was part of the original meaning, and while overly strong, "not usefully knowable via extrapolation, only via analytical methods" seems to remain a very good heuristic. Do you have any analytical disagreements with "The Future of Human Evolution"?

Surely some forms of morality, such as concern for the much less able that results in sharing resources with those who will predictably not return any, are disfavored in competition. Status-seeking with positive externalities, which is what most charity is, may not be disfavored (though it may be, given more efficient ways of directly observing an agent's goal system and resources), but optimizing, efficacy-based forms of charity are unlikely to be optimal, and post-singularity, non-optimal may be very close to non-viable.

You agree with the latter statement with respect to dirt, don't you? That post-singularity, high-entropy matter is in general unlikely to be left lying around uselessly?


Karl Schroeder already came up with the autotrophs in _Permanence_.

I wonder about the assumption of fiercer competition close to or after the Singularity. I constantly hear people complain about the world becoming more and more competitive, worrying that human enhancements will make the situation even worse. Yet I'm not convinced we are becoming a more competitive society. We might just think we are, noticing instances of intense competition that previously would not have attracted notice. It seems far more possible today than in the past to live a pleasant life without having to compete much.

Regardless of whether competition is on the rise, I doubt morality is a clear liability in competition. If you are clever about it and demonstrate your trustworthiness in ways that are hard to forge, it can give you significant advantages.


Plant rights might clash with animal rights. Do plants have the right to be protected against insects?

One possible thought experiment is to try to figure out the morality of possible extraterrestrials. There's a common scenario in science fiction where the human race encounters intelligent herbivores who are horrified at eating meat. What if we encounter intelligent autotrophs who are horrified at eating anything?


Rob, animals can't fight for their own rights, yet we're seeing animal rights movements trying to get better conditions for farm animals, and they're not looking too unsuccessful.


Hal seems to be interpreting morality in the late-20th-century sense of "let's treat everyone equally, the more the better". This is only a small part of what moral behaviour is about, based on the memory of the successes of the various civil rights movements, now fossilized into mere pressure groups.

More generally across history, morality is about encouraging behaviour that is either in the long-term interest of the individual themselves, or in the interest of their genetic progeny. Thus: fidelity in marriage, obedience to your parents, not "coveting yer neighbour's ox", spending your money wisely, retaining "faith in God" even as everything goes to hell (i.e. not despairing), fighting for your city-state, etc., etc.

I don't pretend to know what the future holds for moral development, but I doubt we'll see vegetable or software rights until the vegetables or software are capable of fighting for it, either physically or by political pressure.


Another thought. Human morality is basically divided into two sections: ingroup morality and outgroup morality. The latter doesn't contain much to speak of; it's basically "survive however you can". Under outgroup morality there is no empathy for others, and even a tendency to treat others as objects rather than subjects (i.e., as not having moral existence or significance). Ingroup morality contains essentially everything most people regard as morality. The general flow of history has been the gradual expansion of the bounds of the ingroup, and many major conflicts are about where those bounds should be drawn. I contend that the size of the ingroup, as intuited by the average person in a society, corresponds directly to that society's prosperity.

People tend to be in denial about the existence of outgroup morality. They tend to think that people with smaller ingroups are barbarous or insane (like sociopaths, who effectively have no ingroup, or an ingroup of themselves only), and that people with larger ingroups are silly hippies. Here's a conceptually succinct form of moral relativism: no one ingroup size is better than another.


Hal, what you say about being mindful of the future seems right, and at least for my part in the discussions that have been going on, I certainly haven't defended (or meant to defend) a "we're at the apex" view. As for Graham, I don't see what you mean about sexism and racism; he suggests that "political correctness" of a very high degree would be a sort of "fashion," but the PC culture of the 90s was the good sense of not being sexist or racist, on crack. (I was pretty young in the 90s, so by the time I knew what was going on I never really bought into the PC thing.) (The difference between being a prig and having good moral sense?)
