Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow, and that doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually a realist. If you really believed that there were objective rules we should follow, it would be crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chance of stumbling on the right ones by accident is very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that any rules we ought to follow will be consistent with one another, this is a disaster, and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility that you are mistaken and in fact a different set of rules is correct. Unless you are extremely confident in the rules you consider most likely, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who aren’t alive now do in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
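The arithmetic behind this hedge can be laid out explicitly. The figures below are the post’s own illustrative numbers, not forecasts:

```python
# Expected-value version of the person-affecting argument above.
future_people = 1_000_000_000_000   # a "modest" 1 trillion future (post-)humans
p_they_count = 0.10                 # credence that future people deserve moral consideration
present_people = 7_000_000_000      # people alive at the time of writing

expected_loss = p_they_count * future_people
print(expected_loss)                   # 100 billion future people in expected-value terms
print(expected_loss / present_people)  # ~14x the population alive today
```

Even a small credence in the rival rule dominates the calculation, which is exactly the bet-hedging point being made.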

This demonstrates that a moral realist with some doubt that they have picked the right rules will want to (a) hedge their bets and (b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for any moral realist, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is scarcely a fringe concern, because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chance that we have all the right intuitions is frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options:

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 for anyone who can change my mind on a) whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!

  • manwhoisthursday

    This post mostly seems beside the point.  Most people are extremely confident that they can just _know_ what is right and what is wrong.

    • Robert Wiblin

      But that must be unfounded for most of them. And it’s not hard to work that out.

      • VV

         So what? Most people are religious, even if it’s not hard to work out that religious beliefs are unfounded.

      • http://www.facebook.com/people/Jeffery-Nicholas/100000088482891 Jeffery Nicholas

         Why must realists be foundationalists as well?

    • http://www.facebook.com/peterdjones63 Peter David Jones

       Yes. If you are a moral realist AND a moral intuitionist, you don’t have to “work out” morality.

  • Luke Muehlhauser

    Do I win the prize if I persuade you that the things you already believe can be called “moral realism” about as justifiably as they can be called “moral anti-realism,” by getting you to read Richard Joyce on metaethical pluralism?
    http://www.victoria.ac.nz/staff/richard_joyce/acrobat/joyce_metaethical.pluralism.pdf

    Also see the section on ‘moral realism vs. anti-realism’ in my post Pluralistic Moral Reductionism:
    http://lesswrong.com/lw/5u2/pluralistic_moral_reductionism/

    • Pablo

      Do I win the prize if I persuade you that the things you already believe can be called “moral realism” about as justifiably as they can be called “moral anti-realism,” by getting you to read Richard Joyce on metaethical pluralism?

      Can you restate what, on your understanding of this debate, “moral realists” and “moral anti-realists” are disagreeing about without using those expressions?  If not, the debate, as you are construing it, is probably a merely verbal dispute.

      • Luke Muehlhauser

        Exactly; that’s what is said by the two articles I linked. :)

  • David

    We evolved a general faculty of reason – the ability to think about, say, just-in-time production wasn’t advantageous in the Ancestral Environment, but I do have that ability. Similarly, the moral intuitionist (who is a realist) argues, our faculty of reason allows us to access moral truths, even though they were not advantageous in the AE.

    A similar argument is to ask how we can know mathematical truths – but I think this fails, because mathematical truths do feature in our best scientific theories, whereas moral ones do not.

    • Robert Wiblin

      Analogising morality to mathematics seems a promising line, but I’m not sure I’m convinced. Maths seems to feature in our observations of the world in a way that morality does not.

      • mjgeddes

        Morality would be concerned with observations about intelligences in general (heavily tied in with cognitive science).  But this research is still at an early stage (there is no artificial intelligence yet nor any general theories of mind). 

        Algorithmic information theory and the study of aesthetics is a promising line, as I argued below.  Both algorithmic information theory and aesthetics are linked by the notion of ‘complexity’ and ‘complexity measures’ (see early ideas by Schmidhuber along these lines).  If a definite link could be established between aesthetics and morality,  a universal morality could be derived.

        Read David Deutsch’s ‘The Beginning of Infinity’, Chapter 14, where he presents power-house arguments to the effect that the beauty of a flower to humans has no evolutionary explanation, and argues in favor of universal aesthetics.

      • http://www.facebook.com/jake.witmer Jake Witmer

        Untrue.  There are theories of the mind, put forth by Kurzweil, Hall, Hawkins,  and many others.  Also, there are scientific theories of morality as well, that accurately explain human social reality.

      • Konshtok

         “Analogising morality to mathematics seems a promising line.”

        promising to run into godel

    • http://www.facebook.com/jake.witmer Jake Witmer

      [We evolved a general faculty of reason – the ability to think about, say, just-in-time production wasn’t advantageous in the Ancestral Environment, but I do have that ability. Similarly, the moral intuitionist (who is a realist) argues, our faculty of reason allows us to access moral truths, even though they were not advantageous in the AE.]

      I disagree to some extent.  The precursors of the moral truths, at their current high level of comprehension, were useful in the ancestral environment, or empaths wouldn’t be here now.  Mirror neurons are an advantage, and increase benevolent outcomes, in civilized society.

      [A similar argument is to ask how we can know mathematical truths – but I think this fails, because mathematical truths do feature in our best scientific theories, whereas moral ones do not.]

      Incorrect.  The very best scientific theory, the one that is likely to save your life, is intensely interested in a scientific view of morality.  Look at the wealth generated by the industrial revolution: the reason you are alive (most likely).  Partly a consequence of moral theories, and increasingly more mathematical and more moral (as in Spooner => Ayn Rand => Eliezer Yudkowsky => future philosophers who are more correct).

  • http://www.facebook.com/yudkowsky Eliezer Yudkowsky

    Moral cognitivism, or moral realism?  It’s far more likely that my thoughts about morality have a coherent logical subject matter with truth-values, and even that there’s substantial overlap in this subject matter between two people uttering similar words, than that our conversation is about an external stone tablet somewhere on which morality is written and that every possible rational agent finds the writing on this tablet psychologically compelling.

    • Pablo

      To clarify, moral realism is the view that moral facts are mind-independent (in a specific sense of ‘mind-independent’), whereas moral cognitivism is the view that moral statements express propositions, and so can be true or false.  The view that “every possible rational agent finds [mind-independent moral facts] compelling” is not part of moral realism, and seems instead to be a form of moral internalism: the view that moral beliefs are intrinsically motivating.

  • Margin

    Let’s say there are objective moral rules.

    Then I can still do whatever I want.

    What reason would I have to care about objective moral rules?

    • Robert Wiblin

      If you have no reason to care they are not objective moral rules.

      • Aisaac

        There are objective rules about driving too fast. If you break the rules, and you get caught, you get a fine. If there were a rule against speeding but no enforcement or penalty, there would still be objective speeding rules but no reason to care about them.

        Why should we care about objective moral rules? There has to be some sort of enforcement, or else you can do what you want and not worry about them, if you don’t want to.

      • Margin

        This.

        Unless you define “objective moral rule” as something everybody would want to follow if they had a deep enough understanding of objective reality.

        I cannot imagine anything that would fulfill this condition.

      • Margin

        Another interpretation is to consider the laws of physics as objective moral rules.

        Because they are self-enforcing.

        Whatever physically happens, should happen.

        Anyone who disagrees with the physical universe is objectively wrong.

      • Robert Wiblin

        I’m not sure what such rules would be or how they would be justified, which is why I doubt moral realism.

      • http://www.facebook.com/peterdjones63 Peter David Jones

        “Why should we care about objective moral rules?”

        One answer is that we care about reason and logic, and that morality can be justified in similar ways. This leads to a version of  MR where moral truths are more like the abstract truths of maths and less like something floating in space.

      • http://www.facebook.com/profile.php?id=100002541294703 Ryan Teehan

        You could have a rule based upon the notion of choice, or free will, itself.  To clarify, this rule would be a defining feature of the ability to choose.  With regards to caring about moral rules, all people would have reason to care about that type of rule, since it would be the very basis for having a reason.

      • http://www.facebook.com/jake.witmer Jake Witmer

        Bad example.  You should have chosen “mala in se” (a crime with a victim) to illustrate the concept.  When someone is hurt, mirror neurons fire in the empath’s brain, making them register a lesser but distinct pain and sense of conflict.  This doesn’t happen in the sociopath’s brain.  If everyone’s a sociopath, or even directed by sociopaths, society looks like Nazi Germany: chaos.

        There is an order to society that includes sociopaths.  Optimal societies positively incentivize sociopaths and mitigate their damage, using jury trials as a means of limiting government. 

        Because we no longer do this in the USA (we have jury trials, just not proper, random ones; the randomness is defeated by the sociopath-favoring institution of improper “voir dire”), our system is more and more sociopathic and less protective of property rights and diversity every day.  Evidence: 2.4 million people in prison, with 60% of them there for victimless crimes.  Several wars of aggression.  The loss of the constitutionally-guaranteed right to self-defense.  The arbitrary shortening of life by the FDA, which doesn’t suggest, it commands.

        Universal tax and debt enslavement, the loss of all future wealth, if the fiat currency system is retained.

        Empaths have baseline, reflexive “caring” about moral rules. This is what pre-neuroscience cultures called “conscience.”

      • http://www.facebook.com/jake.witmer Jake Witmer

        The non-aggression principle is a universal moral rule.  It works. Non-sociopaths (empaths and, to a lesser degree, conformists) tend to automatically follow it.  Systems built on this comprehension act in a non-sociopathic manner, prohibiting “mala in se” (theft, assault, murder).  Christians call a crude formation of this rule “the golden rule.”

      • Tim Tyler

        > If you have no reason to care they are not objective moral rules.

        That seems too strong.  There can be a natural morality without all agents being destined to follow it.

      • http://www.facebook.com/jake.witmer Jake Witmer

        Incorrect.  That person might be a sociopath, who ignores, and is evolutionarily wired to ignore, objective moral rules.  You follow these rules because, even if you see a little girl whom you could easily overpower and kill wearing a piece of gold, you don’t kill her and take the gold.  The sociopath does, if he thinks he can get away with it.  Hence the construction of jury trials, to eliminate the influence of sociopaths on society.  99.9% of people agreeing with a law results in a conviction for murder, and still there are serial murderers, but they are held in check, more and more.

        But what happens when you get rid of proper jury trials?  The serial killers go free.  …Just ask John Douglas.  This is why he opposes, from a position of practical knowledge, victimless crime laws (crimes without “injury”+”intent to injure”).  (He also morally opposes them.)

      • http://www.facebook.com/jake.witmer Jake Witmer

        “Another interpretation is to consider the laws of physics as objective moral rules.

        Because they are self-enforcing.

        Whatever physically happens, should happen.

        Anyone who disagrees with the physical universe is objectively wrong.”

        This is stupid, and defeats the entire reason for a moral rule.  A moral rule would be a special case of reasoning and logic that makes the human universe more intelligent than baseline physical existence.  Just like a mathematical recognition of patterns in nature makes nature more comprehensible.

        This would already be a lowering of the conception of “morality” below what everyone intuitively accepts, with stupidity being defined as “unwitting self-destruction.”  It would be unwitting, and ignorant, because moral rules that can be utilized to escape destruction by other men (sociopaths and their directed conformist agents) already exist.

    • Tim Tyler

      > What reason would I have to care about objective moral rules?

      The same reason you care about other things that affect your well being: evolution built you to care – and made it very hard for you to deprogram yourself.

      • http://www.facebook.com/jake.witmer Jake Witmer

        You’re assuming he’s not a sociopath.  Not a safe assumption.

    • http://www.facebook.com/jake.witmer Jake Witmer

      If you’re a smart sociopath, you have no reason to care about or follow moral rules, other than fear of retaliation for having misidentified your capacity to avoid blending in with empaths and conformists (who, together, are in the majority that will imprison or kill you, if they can discover you and isolate you from your power base).  …Likely if you’re Ted Bundy, less likely if you’re Paul Warburg.

      This is why sociopaths seek power offices in government and industry: not only are they unable to care beyond the implications of not caring, they have also experienced the social awkwardness of their not caring.  This is also why governments collapse and the power of offices keeps expanding, hastening the cycle that ends in collapse.  The constitution is instantiated, the sociopaths claim control of education until they can stop the constitution from being repeatedly instantiated, and they ignore or pardon each others’ sociopathic abuse of “the little people.”  Evolution pushes the tribe/village fighters toward power positions, once the tribe/village no longer needs to be defended.  Luckily, as soon as the technologists stop believing the leaders of men are like they are, there’s a technology and sociology solution in sight.

  • mjgeddes

    Morality traces back to sentience (positive qualia).  Qualia traces back to the brain’s representational system (categorization) and narrative creation (internal goal representation).  Good qualia traces back to minimizing the complexity of the process of generating these internal narratives.    Such minimization traces back  to aesthetics.  Ergo, all morality  ultimately traces back to aesthetics.

     Algorithmic information theory points to algorithms in platonic mind space.  Algorithmic information theory captures aesthetics in logical terms.  Algorithms in platonic mind space express universal properties.  Ergo, platonic mind space is the stone tablet in the sky expressing universal values (including morality).

    PS ‘Moral realism’ is simply the far weaker idea that you can assign true/false values to moral statements.  Platonic morality implies moral realism, but the converse is not true.

    Amen.

  • Mitchell Porter

    “If I believed in X, I would do A, B, and C. People who say they believe in X don’t do A, B, and C. Therefore they don’t really believe in X.” This is not a deductively valid argument. You should inquire what are the qualities that you possess, that would lead you to A, B, and C given X, and then ask whether those qualities exist in most other people.

  • OwenCB

    Your argument that the person-affecting view should get swamped by a small credence in theories which value future people runs into some complications. The question is how we compare value under one theory with value under another. While there does seem to be something natural about equating them on the area where they overlap — in this case the value of a human life — this is not obviously correct, and it seems to build some fanaticism into the way you deal with moral uncertainty.

    The main alternative approach that I know of (and would tentatively endorse) is to normalise so that each theory gets an equal say in what happens. Of course then you have to formalise what that means, but this seems to be doable. Happy to chat about this in person/skype some time if you’re interested.
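    Owen’s worry can be made concrete with a toy sketch. The 90%/10% credence split comes from the post; everything else here (the action values and the max-normalisation scheme) is a hypothetical stand-in, not Owen’s actual proposal:

```python
# Two rival theories, with the post's 90%/10% credence split (illustrative).
credence = {"person_affecting": 0.9, "totalist": 0.1}

# Hypothetical value (in lives) each theory assigns to two actions:
#   A = prevent a collapse risk, B = help people alive today directly.
value = {
    "person_affecting": {"A": 7e9, "B": 8e9},
    "totalist":         {"A": 1e12, "B": 8e9},
}

def overlap_score(action):
    # Overlap approach: treat "one life" as the common unit across theories.
    return sum(credence[t] * value[t][action] for t in credence)

def equal_say_score(action):
    # Equal-say approach: rescale each theory by its own best option first,
    # so no theory wins just by quoting astronomically larger numbers.
    return sum(credence[t] * value[t][action] / max(value[t].values())
               for t in credence)

# Under the overlap approach the 10% totalist theory swamps the verdict (A wins);
# under equal-say normalisation the majority theory prevails (B wins).
print(overlap_score("A") > overlap_score("B"))
print(equal_say_score("B") > equal_say_score("A"))
```

    The flip between the two verdicts is the "fanaticism" at issue: the overlap approach lets a low-credence theory with huge stakes dictate the choice.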

    • Robert Wiblin

      Thanks Owen, we should chat about this next time we get a coffee. I agree it’s not as simple as I’ve presented it here.

    • Guest

      Yep, and the overlap approach often leads to inconsistency if you have chains of theories that partially overlap in different areas.

      • Robert Wiblin

        Interesting, is there anything published on this?

      • Guest

        I’ve heard that a guy called Bastian Stern has mentioned this issue in his undergrad thesis. ;)

      • Toby Ord

        I wrote an email response in 2007 to an overlap-style theory due to Andrew Sepielli, and have forwarded it on to you in case it is useful. That said, I’m actually partial towards using an overlap approach, but only when there is an argument that the right overlap has been chosen. In your case I think it is quite plausible that you have chosen it correctly.

      • OwenCB

        That’s right; you can even do this with a chain of length two if they overlap in more than one place with different comparative values.

        There may be some ways of rescuing the overlap approach, but they will have to appeal to more information from the theories than just their account of what is good.

  • Tim Tyler

    Doing the ‘right thing’ is indeed about more than just doing what you want. Morality is also about doing what other people want.  It is a means of manipulating the behaviour of others via peer pressure, threats and guilt.  There’s a reasonable case for many aspects of morality being present in the math of game theory.  For example, the terms “Cooperate” and “Defect” in the prisoner’s dilemma are not chosen idly.
     

  • VV

    If you thought there were just a 10% chance that people who weren’t
    alive now did in fact deserve moral consideration, that would still mean
    collapse prevented the existence of 100 billion future people in
    ‘expected value’ terms. This still dwarfs the importance of the 7
    billion people alive today, and makes the case for focussing on such
    threats many times more compelling than otherwise.

    No discounting? Anyway, it’s quite typical that when doing these expected-value ethical calculations you get counter-intuitive results. I consider this evidence against utilitarianism.

    • Robert Wiblin

      I see no reason to discount. Counter-intuitive results are only evidence if you think your intuitions are a reliable guide, which we probably shouldn’t here.

      • VV

        I see no reason to discount.

        Why not? Humans are empirically known to discount. You could perhaps make the case that exponential discounting (which includes no discounting as a special case) is “more rational” than other discounting schemes, but how do you choose the discounting factor?

        Counter-intuitive results are only evidence if you think your intuitions are a reliable guide which we probably shouldn’t here.

        Of course moral realists think that there is something to morality other than intuitions, but I don’t agree with moral realism.
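        VV’s discounting question can be made concrete with a short sketch; the 1% annual rate and the 10,000-year horizon are arbitrary assumptions chosen only to show how sensitive the post’s figure is to the choice of discount factor:

```python
def discounted(value, years_ahead, annual_rate):
    """Exponential discounting: value / (1 + r)^t; rate 0 recovers no discounting."""
    return value / ((1 + annual_rate) ** years_ahead)

# 100 billion expected future people, centred (say) 10,000 years from now:
print(discounted(100e9, 10_000, 0.00))   # no discounting: the full 100 billion
print(discounted(100e9, 10_000, 0.01))   # 1%/year: vanishingly small
```

        With any appreciable positive rate the far future contributes essentially nothing, which is why the choice of factor does all the work in the disagreement above.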

  • http://profiles.google.com/aretaeblog K Aretae

    Are you suggesting that the epistemological difficulties in moral reasoning are _larger_ than the epistemological difficulties in any other field of human endeavor?  It seems to me that this position is far-fetched.  If you want to argue that _all_ topics of study need to seriously include a component of (moderately skeptic, modular-mind-informed, bayesian) epistemology before we take them seriously, I wouldn’t find that hard to support, but calling out moral reasoning especially seems to me to be largely uncalled for.

    As to moral realism…I am not one, but there’s one specific (epistemologically sound) moral realist argument that appears to provide value:  It takes two steps:

    1.  Haidt’s work on moral foundations.  It appears that there are a relatively limited number of things that human brains are evolved to find morally relevant.  (And of course, human beings are pretty similar).
    2.  It’s not clear that by “moral” one can safely mean _anything_ besides what that subjective, evolved moral experience is.

    If one takes those two propositions seriously, one might well have to abandon one’s (yours and mine, if I read right) attachment to moral theories that we like, and instead accept a descriptivist moral position:  Pushing the fat dude off the tracks is immoral because most folks are morally opposed…because that’s what morality _means_.   

    • Robert Wiblin

      The evidence for moral claims is weaker than the evidence for most claims. Indeed, there is usually none other than reference to these intuitions. I have more evidence for the physical world around me than I do that anything is right or wrong.

      “Pushing the fat dude off the tracks is immoral because most folks are morally opposed…”

      That seems like subjectivism rather than realism: http://en.wikipedia.org/wiki/Ethical_subjectivism

  • A Country Farmer

    I’d love to hear you debate philosophy professor Michael Huemer from the University of Colorado, who wrote the book Ethical Intuitionism.

  • lemmycaution

    I believe in moral progress.  In general, our systems of morality are improving.  Seriously, there is no going back to human sacrifice and slavery.

    This isn’t really a matter of knowledge though.  Even if I did know what the superior ethical system of 1000 years from now is going to be, what the fuck could I do about it?  About the same as if some dude in 1013 was to find out slavery was wrong.

    Difficult and heroic moralities fail.  You don’t see too many puritans anymore.  But how hard is it to avoid human sacrifice and slavery?  Not very hard, as long as society is set up that way.

    It won’t be that hard to uphold the new and improved morality of 3012 either.  Getting society to the point that it has the new and improved morality of 3012 will be a pain in the ass though.

    • http://www.facebook.com/peterdjones63 Peter David Jones

       “Even if I did know what the superior ethical system of 1000 years from now is going to be, what the fuck could I do about it?”

      Take the first step. If you can’t free all your slaves, you can treat them decently, or free them after so many years service.

      • http://www.facebook.com/jake.witmer Jake Witmer

        Read Henry David Thoreau’s “On Civil Disobedience” and get back to us.  Watch Philip Zimbardo’s video “The Psychology of Evil.”  Read about Kosciuszko’s pact with Jefferson.

  • http://www.facebook.com/profile.php?id=5310494 Sam Dangremond

    “If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil.”

    Reminds me of the central theme of Les Mis.

  • Tim Tyler

    It is trivially true that “doing the ‘right thing’ is about more than just doing what you want”. To recap, morality is concerned with manipulation, reputations and the propagation of moral memes (amongst other things). So: it’s about doing what you want, doing what others want, and doing what cultural symbionts want.  This makes the proffered definition of “moral realism” into a position that all should quickly accept.
     

  • http://profiles.google.com/katsaris Aris Katsaris

    “And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them.”

    That’s an arbitrary leap in reasoning, or a confusion of the meaning of the word “ought”. My moral sense, whether it calculates something real, or is utterly subjective, is only a portion of my set of motivations. To decide that my moral sense calculates something real doesn’t automatically have the power to motivate me “more” than if I considered it utterly subjective. Honestly your argument reminds me of those non-determinists who think that if determinists really believed in determinism they would stop caring about their decisions. It doesn’t *work* that way.

    On your question of moral realism, I think that our moral sense is a *subjective attempt at non-subjectivity*. To give a hypothetical example (one I was planning to make into a LessWrong discussion post, but can summarize here): imagine two fires in a small town. One threatens the life of your child, the other threatens twenty children. The fire marshal has to prioritize and opts to save the twenty children over the one.

    To the core of your being you prefer that the fire marshal had chosen to save your own child – you care about your child more than a *million* random other children.

    But your moral sense isn’t outraged at the marshal — why? Because even in your grief you recognize that from a *non*-subjective point of view (if e.g. you hadn’t known which parent you were, or in which location your child was) the fire marshal chose the preferred course of action, chose to save the greater number of children.

A different way I’ve thought of the concept of moral judgment recently is “your preference for other people’s behavior as *your ability to prescribe your personal preferences decreases*” — i.e. if you didn’t know which parent you are, or in which of the two locations your child was, or if you had no personal stake in the matter, the marshall chose the preferred course of action.

    Or perhaps it could be described as “calculation of preferred behaviour in the attempted negation of your personal context’s influence on that calculation”.

So morality can be said to be “real” in that our moral sense is attempting to calculate something *non-subjective* (despite the fact that it inevitably passes through our subjective filters).

    • http://www.facebook.com/peterdjones63 Peter David Jones

       “And if those are the rules that everyone necessarily ought to be
      following, nothing could be worse than failing to follow them.”

      That can be made into a plausible argument: if moral rules are the rules that cannot be avoided by appealing to some more important duty that overrides them, and if doing something bad without an excuse is worse than doing something bad with an excuse, then breaking moral rules is the worse kind of bad.

    • http://www.facebook.com/peterdjones63 Peter David Jones

       “if you didn’t know which parent you are”

Rawls’s Veil of Ignorance its very self!

  • Sebastian h

    “The number of ethicists with a public profile could be counted on one hand.” Is this an important data point for you?

It is wrong. The normal label for an ethicist is “religious leader”, and there are huge numbers of prominent examples all over the world – almost certainly more than the total of all prominent non-religious philosophers together.

    Apart from that, your argument appears to undercut itself. If moral realism is false (there is no objective morality) then there is no particular reason to worry about morality as a concept. Just focus on worrying about what will get you in trouble in your society and what won’t.

    • http://www.facebook.com/jake.witmer Jake Witmer

      [“The number of ethicists with a public profile could be counted on one hand.” Is this an important data point for you?

      It is wrong. The normal label for ethicist is “religious leader” and
      there are huge numbers of prominent examples all over the world–almost
      certainly more than the total of all prominent non religious
philosophers together.] Yes, but those religious “leaders” are not people who took ethics seriously enough to learn anything scientific and significant about it. Also, their beliefs and teachings are generally demonstrably false or irrational.

[Apart from that, your argument appears to undercut itself. If moral realism is false (there is no objective morality) then there is no particular reason to worry about morality as a concept.] This actually isn’t true. The democide figures posted at R. J. Rummel’s website are true. That’s just one reason. Even if you’re a sociopath, there’s reason for concern: people who behave as sociopaths are sometimes held accountable. Moral guidelines are thus exceptionally important to sociopaths.

      [Just focus on worrying about what will get you in trouble in your society and what won’t.] Holding yourself to the standard of a degraded totalitarian state is demonstrably unintelligent. As is holding yourself to the bigotries and conformist blindspots of an otherwise educated society. In the North, before slavery ended, there were factions that turned in fugitive slaves, and factions that didn’t. Sometimes, people got shot over this kind of thing. “I was just following orders” was similarly not deemed to be a valid defense for several once-socially-successful Nazis. If you lack a moral compass, and your entire theory is “go along to get along”, then it makes a lot of sense for the first person to find this out to pre-emptively kill you if situations get tense.

      Do you think things will get tense, with the government having spent the next two generations into debt before they were born? Just curious.

  • Sebastian h

    “Personally, I would like to think I take doing the right thing seriously…”

Under your presentation of morality, this sentence doesn’t make logical sense.

  • bensouthwood

    If you don’t have the unshakeably strong intuition that torturing a child for fun is wrong then I’m not sure what anyone could do.

  • Ray

    “There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite.”
    That’s not necessarily true.  The capacity for morality evolved for some reason, and it’s actually quite reasonable to assume that it has survival value, with the correct morality promoting survival.  At least one moral system – Objectivism – is based on the promotion of survival.  

    • Paul Torek

       Indeed – considering the large number of philosophers who have connected morality and evolution without endorsing anti-realism, we need a better reason than the author’s say-so to suppose there is no such connection.

  • Ray

    To continue my previous comment: one can also argue that survival has improved alongside a substantial moral progress (on issues of slavery, oppression, racism, etc) in the last few centuries – and those two trends may be related.

  • sub guest

You ask this question as if it’s assumed there is one “me.” That may be the case, but I don’t think it’s a given. It seems likely there are a lot of actors underneath the ego pulling our strings, and we don’t control them very well; rather, our ego manages to reconcile all the differences as best it can. All IMHO.

  • http://twitter.com/nerdbound Mike Wernecke

    If utilitarianism is true and moral realism is true, then what you say here is accurate, because utilitarians believe that (nearly?) every action is morally evaluable. But if, say, a loose deontological or virtue theory is true and moral realism is true, then what you say above does not follow, because those theories hold that most actions have no moral content.

As a simple straw man theory, imagine that there is a true moral theory (moral realism) and its only content is that murder is wrong. If that were true, then people would discover the one moral fact early in childhood development, despair (rightly, by assumption) of finding any other moral rules, and then stop studying ethics. That’s a halt to studying ethics which is totally consistent with moral realism.

    I see the above as establishing two things: 1) that you’re not merely making a metaethical argument but are also smuggling in some normative premises too (e.g. that the right normative theory is relatively demanding), and 2) that the vast majority of people are not demonstrated to be inconsistent by the argument above. For another example, it’s totally consistent to be a moral realist (about a few things) and a moral relativist (about the vast majority of ethical questions). My vague impression is that that’s what a lot of people think: if you really push them, they’ll say that there are a lot of things that they consider wrong for themselves or people like them, but a smaller list of things that are totally wrong, regardless of culture, upbringing, etc.

    To be slightly more accurate, I think a minimalist realist ethical theory + a different resolution to problems of moral uncertainty than those you sketch in your future generations example (more blather about that in a few paragraphs) is consistent with the way that the majority of people and philosophers (fail to) think about ethics.

    I am personally in some philosophical quadrant more influenced by utilitarianism, and think that the right moral theory will be demanding, so your argument does establish that I personally should worry quite a bit about ethics. But then again, I do worry quite a bit about ethics (and so do many people influenced by utilitarianism, it seems to me…). So I think your argument does establish something, for people who roughly agree with your normative premises even if I think that many ordinary people (and many philosophers) successfully escape it and thus escape charges of hypocrisy.

    So I actually agree with many of your conclusions (research into moral uncertainty is important! Ethical questions are important!) but I think the argument for them here is somewhat sloppy.

Re: your more specific moral uncertainty argument about future generations, again, many of your arguments here are couched in sort of globally utilitarian terms. To keep having fun with really simple philosophical theories, imagine a simple deontological schema where doing something right has a value 1 and doing something wrong has a value -1. Now you tell such a deontologist: no, no, utilitarianism is true, and this action over here is really really bad, with a value of -100,000. One response would be that even if utilitarianism has a 1/1000 chance of being true, you still basically have to become a utilitarian and seriously avoid that action, regardless of whether the deontological theory you think is true gives it a 1 or a -1 or whatever. Because utilitarianism has some chance of being true, and is particularly demanding, it just gets to win no matter what! And now we imagine some theory, crazytarianism, which says that raising your right hand has value -1 googol… True, the theory only has a 1 in a million chance of being true… But now I’m just having fun (in ways that many other commenters have already implied that fun can be had, such as OwenCB — these sorts of paradoxes are well-known). I’m not convinced that your argument about future generations goes through, for reasons analogous to those expressed here, but it would take a lot more work to make the point rigorous.
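The toy comparison in this paragraph can be written out directly. A minimal sketch of the naive “expected moral value” rule being parodied, using only the hypothetical figures above (a deontologist’s ±1, a 1/1000 credence in a utilitarian theory assigning -100,000, and the googol-valued “crazytarianism”):

```python
# Naive expected moral value: weight each theory's verdict on an act by
# your credence in that theory. All figures are the hypothetical ones
# from the comment above, not anyone's actual view.

def expected_moral_value(assessments):
    """assessments: list of (credence_in_theory, value_assigned_by_theory)."""
    return sum(credence * value for credence, value in assessments)

# Deontology (credence 0.999) says the act is worth -1; utilitarianism
# (credence 0.001) says -100,000. The tiny credence dominates the verdict:
act = [(0.999, -1), (0.001, -100_000)]
print(expected_moral_value(act))  # approximately -100.999

# "Crazytarianism": a one-in-a-million credence that raising your right
# hand is worth -1 googol swamps every other consideration.
hand_raise = [(1 - 1e-6, 0), (1e-6, -1e100)]
print(expected_moral_value(hand_raise))  # astronomically negative
```

The point the sketch makes concrete: under this simple rule, whichever theory posits the largest stakes wins regardless of how little credence it is given, which is exactly the well-known paradox the commenter gestures at.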

I guess I’ll just close by noting that most people’s and most philosophers’ views don’t seem to be anywhere close to Aumann’s agreement theorem, for better or for worse. The skeptical argument from moral uncertainty is hard to deploy, and most people just don’t buy it. Since there is a consistent position of minimalist ethical realism + prioritizing one’s own ethical views (due to worries about how to apply uncertainty or worries about whether one should apply uncertainty or…), we should tend to assume that that’s the view that people and philosophers hold, rather than declaring them inconsistent and hypocritical.

  • Hedonic Treader

    Moral realism is obviously false. There is a wide diversity in what people consider moral or immoral, and it corresponds to neurodiversity and even genetic diversity (and obviously memetic diversity).

    Hedonistic utilitarianism has the advantage that it has a very natural value source, namely good and bad qualia (if you want to use that label). Pleasure and pain clearly exist and are (feel) good and bad. So that’s a natural source of value.

But of course, there is no universal compulsion that you have to maximize good feelings or minimize bad feelings. It is a non sequitur to jump to that conclusion. So moral realism is false.

    • http://www.facebook.com/peterdjones63 Peter David Jones

“Moral realism is obviously false. There is a wide diversity in what people consider moral or immoral”

That doesn’t follow. The claim that there are objective moral truths is compatible with the claim that few people, or nobody, knows what they are. Likewise, the claim that there are physical truths is compatible with there being multiple physical theories.

      • Hedonic Treader

        You’re correct.

  • Adriano Mannino

    Rob, you write:

    “The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who weren’t alive now did in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise.”

    Hardly anyone believes that “future people do not count” tout court. The phrase is ambiguous. It could mean

    (1) the well-being of future people who will exist independently of my decisions counts, or
    (2) I have to make sure that happy people come into existence in the future.

    I think (1) is obvious, but (2) much less so – we have a bit of a Dennettian Deepity here (http://www.youtube.com/watch?v=Rg-4fmbpZ-M)! I, for one, would not (could not!) mind if I hadn’t come into existence; and I also don’t intrinsically care about not being turned off instantly and painlessly (although my utilitarianism leads me to instrumentally care about the minimization of individual X-risks). 

    Also, and as others have already pointed out, the argument from “moral uncertainty” to the prioritization of bringing future happy people into existence seems incomplete. For instance, the conclusion depends on *how important* bringing new happiness into existence is (*if* it is important) relative to eliminating existing suffering. 
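The quoted expected-value calculation, and the further parameter this paragraph says it leaves out, can be made concrete in a short sketch (the 10% credence comes from the quoted passage; the relative weight is purely hypothetical):

```python
# The quoted argument: discount 1 trillion expected future people by a
# 10% credence that people not yet alive count morally at all.
future_people = 1_000_000_000_000
credence_future_people_count = 0.10
expected_future_people = future_people * credence_future_people_count
print(expected_future_people)  # ~100 billion, vs ~7 billion alive today

# The objection: the conclusion also depends on how important creating
# new happy people is relative to helping people who will exist anyway.
# The 0.01 weight here is purely hypothetical, to show the sensitivity.
weight_creation_vs_relief = 0.01
adjusted = expected_future_people * weight_creation_vs_relief
print(adjusted)  # ~1 billion: the dominance of the future is no longer obvious
```

Even a modest discount on the value of creating (rather than helping) people changes the headline number by orders of magnitude, which is why the argument is incomplete without that parameter.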

    • Eli

      The failure to seek and understand is a basic universal moral failure. It is a failure to seek and understand even morality itself. It looks at morality in the face and says, I do not want you.

  • Adriano Mannino

    Last but not least, there’s the empirical question of whether we might *also* be causing expected harms by creating future happy people and how large those harms are. It seems to me that X-riskers are often prone to wishful thinking and overconfidence here.

  • Ed

    Many of us are moral realists because we are no longer realists about reality in the sense you want.  To take LW’s example, if you are willing to accept that we can’t define many of our real-world notions, such as “game”, and yet it is objectively true that some people are playing games and that some people aren’t (as well as some fuzzy cases in between) you may be less perturbed by apparent inconsistency in moral intuitions, even if you are a realist.   

    • http://www.facebook.com/peterdjones63 Peter David Jones

       Sort of. I think the position you and others are edging towards should be called something else, though, like Constrained Constructivism or Functional Role Metaethics.

  • Ed

One more minor objection to the argument about moral uncertainty. To apply expected value reasoning to the moral value of actions intended to increase humanity’s chance of survival, you would need to apply notions of probability to one-off events like humanity going extinct in the next 1000 years. I think we are frequentists at heart in all kinds of subtle ways, and I would be more convinced by an example where frequentist notions of probability can be applied. Does anyone have one?

  • http://www.facebook.com/peterdjones63 Peter David Jones

    “But why should natural selection provide us with instinctive knowledge
    of objective moral rules? There is no necessary reason for such
    knowledge to help a creature survive – indeed, most popular moral
    theories are likely to do the opposite. For this reason our intuitions,
    even where they agree, are probably uninformative.”

    If you see morality as about regulating interactions amongst people, then evolution could give people moral sense for group survival. That may or may not be what you mean by *objective* morality. Moral rules that are built into the very fabric of the universe are over-engineered for regulating interactions between humans. Moral subjectivism is under-engineered, since anyone can choose their own morality and no regulation ensues. A happy medium is needed.

    ETA:
I think this shows that most people who profess moral realism are in fact not realists.

    If there is a genuine third option, then it is important not to confuse rejection of subjectivism with embrace of realism.

  • http://www.facebook.com/troy.camplin Troy Camplin
  • http://www.facebook.com/people/Jeffery-Nicholas/100000088482891 Jeffery Nicholas

    This post raises a variety of interesting academic questions. I say academic, because it does not reflect at all the way people in fact think morally even though the post is an attempt to explain why people are not really realists.

The reason that people can be realists without doing all the things listed in the post is that they are also certain about what the moral rules are. Most people — especially in the US — are religious. It doesn’t matter which religion; it gives them the rules of which they are certain and which are real for them. Many non-religious people, I imagine, still believe in something fairly “realistic” sounding, like “do no harm.” In other words, most people can be realists without working out the moral rules because those rules are given.

    But this post hinges on a perspective that morality is something objective like physics. Moral rules are the objects which we uncover. But there are other approaches to morality which do not treat morality as an objective science in the manner of physics. Aristotelian virtue theory is realist without looking for objective objects. Aristotle’s whole point is that virtues are not things out there unchanging. They are the characteristics by which human beings are able to lead flourishing lives.

  • http://www.facebook.com/profile.php?id=100002541294703 Ryan Teehan

It seems, at least to me, like you conflate moral realism with moral objectivism. When you ask for someone to convince you that moral realism is correct, are you asking for a proof of moral facts that exist “out in the world”, for lack of a better term, or would a constitutivist account, similar to that forwarded by Christine Korsgaard, suffice?

  • Pingback: Moral Realism « Selfish Meme

  • http://www.selfishmeme.com/ The Watchmaker

    Robert, if this prize were a scheme to generate posts for you to reply to, then you have won. I try to refute you here:

    http://www.selfishmeme.com/246/moral-realism/

    Cheers.

  • Pingback: Money for Morals - Unofficial Network

  • http://www.facebook.com/jake.witmer Jake Witmer

I have thought about this a lot. A roomful of people watch “Schindler’s List” or “Uprising” or “Escape From Sobibor.” They gasp at the Nazis shooting innocent people in the head as they kneel before open graves. They cry when they see children murdered on the screen. They spend a total of maybe an hour or so trying to figure out what policies allow mass murder by government, less time than they spent watching those movies. They come to demonstrably stupid conclusions. They learn no history; they come to no rational conclusion about what kinds of systems are incompatible with democide.

But there’s a worse problem than ignorance and apathy. Most people know that the evil of totalitarianism tends to manipulate the public. There is strong indication that the Germans who followed Hitler and the Soviets who followed Stalin were no better genetically than the people of the good ol’ USA!

    So, yes, most people (certainly, a majority) go through life as unwitting accomplices to evil.

    But how then, can markets produce such good results?  Shouldn’t all markets decay into theft and murder? 

No, say some libertarians. Emergence is benevolent order! Emergence is a consequence of voluntary interaction. Those murderous systems are a result of centralized interaction. (But what of when the systems have collapsed, and you have an otherwise industrious and productive weaver like Franz Stangl telling Gitta Sereny “I wasn’t anti-semitic, it was all just about theft. … Nobody had nothing.” as well as saying “My knees grew weak when I was shoving women and children into the ovens.”)

    He wasn’t 100% a sociopath.  He was on the sociopathic spectrum.  He was a conformist. 

Most people are approximately as good as the system they are governed by. Because most people are now governed by bad systems, they behave as bad people. Neutral conformist people vote “guilty” in nonviolent drug cases, even though they don’t have to; they vote for evil. Neutral conformist people torture the prisoners in Abu Ghraib, even though they don’t have to; they vote for evil.

    A few people are preoccupied with doing what’s right.  They are a tiny minority.  We call these people “heroes.”  They behave heroically, though the system threatens them, and their friends and neighbors laugh at them. 

    Here are some such people:

Marcy Brooks: threatened by a judge and told to convict a tax protestor who clearly had no “intent to injure” and where no “injured party” could be produced by his noncompliance. Yet she stood up against her fellow jurors and the judge, and was a holdout “not guilty” vote. She was trained her whole life to conform, but she didn’t conform. She was a schoolteacher. HEROIC.

Julian Heicklen: a successful college chemistry professor who risked his freedom and was imprisoned for handing out jury rights pamphlets, protesting the fact that the USA now imprisons over 2.4 million people, 60% of whom are imprisoned for first-time victimless crimes. He was arrested and pumped full of dangerous psychiatric drugs because “He must be crazy for protesting the system.” (just like “One Flew Over The Cuckoo’s Nest” – so much for the free society envisioned by dreamers like Thomas Szasz and Billy Corgan!) Held without habeas corpus, against his will, to intimidate him, in a concrete cage. Repeatedly arrested, never deterred. HEROIC.

A soldier saw the torture at Abu Ghraib, decided it was intolerable, and blew the whistle on it. He turned in his fellow soldiers for violating the inalienable rights of prisoners, against Geneva. He turned them in for committing the crime of torture, and lowering the standards of America, in spite of severe social repercussions and retaliation from his chain of command.
    HEROIC.

Bradley Manning showed the American public that American long-range guns shot a group of journalists to death. They were targeted and murdered specifically. He showed us what a war incentivized and chosen by the aggressor looks like. He’s in prison now, but he’s still a HERO, because he followed his own standards and his own conscience.

    What makes these and hundreds of other examples happen?

    Education. Nonconformity. More Education.  A willingness to throw away false values, false authority, false “truths,” and bad standards.

Philip Zimbardo talks more about this in his famous YouTube speech “The Psychology of Evil.” His solution is that we train ourselves to act against the crowd when our moral compass dictates that we do so. That we practice acting against the crowd’s expectations when our minds and consciences (our mirror neurons) tell us to. When we see someone being hurt for no good reason, and there is an authority figure telling us “It’s OK, they’re a silly pothead, they deserve it.” (Our conscience bothers us the same way we’d react if we were told “It’s OK, they’re a Jew.”)

    Another part of the answer is to institutionalize a place where we cannot be punished for being heroes.  In Western Civilization, that place is the jury box.  Now, there are no longer free and random jury trials.  But jury verdicts still stand on their own.  Don’t try to get eliminated from jury duty.  If you believe in a moral duty of any kind, you have a duty to get seated on the jury, and, if it’s a case involving a “malum prohibitum,” vote “NOT GUILTY.”

And you can combine the jury box with the ballot box and the soap box. You can vote Libertarian (the only political party that recognizes and respects the historical and moral right of juries to nullify bad laws based on conscientious objection). You can inform your family and the public about jury nullification of law, and “voir dire.” Do everything you can to prevent the dire escalation to the cartridge box, but be prepared for it, if and when heroism has no other option. Don’t listen to the people who want to shame you for being skilled and prepared for the worst. (When the Nazis were invading the surrounding countries, they avoided Switzerland, because Switzerland had a heroic policy: individuals don’t need to wait for orders to shoot invading soldiers, and all Swiss must be within arm’s reach of a battle rifle. The Swiss didn’t care what policies less civilized countries adopted: they did things their way, according to their own consciences, respecting the individual above the group.)

This post is already long enough, but suffice it to say that learning about proper jury trials is only as far away as the “Fully Informed Jury Association” and “International Society for Individual Liberty” and Google. Also excellent are Lysander Spooner’s works “An Essay on the Trial By Jury” and “A Defense for Fugitive Slaves” (free online), as are the book “Send in The Waco Killers” by Vin Suprynowicz, and “Let’s Get Free: A Hip-hop Theory of Justice” by Paul Butler.

    Don’t back down.  Be an American.  America is an idea, not a geography.
    Be a leveller, not King Charles I.  Be a part of the underground railroad, not a snitch.  Be true to yourself, and you’ll never be wrong.  Pick the right side, and listen to your conscience.

    And if you’re a biological sociopath, understand that force and fraud are prohibited among decent people.  Do your best to comprehend that rule, and adhere to that comprehension. 

  • http://www.facebook.com/jake.witmer Jake Witmer

Gauging people’s beliefs based on the vast majority of their actions (not their words) indicates a universal moral standard in societies capable of producing technology. Certain behavior is considered sociopathic; extend that analysis to all situations, even if the person is wearing a special hat or a badge.

Non-sociopaths don’t decide, while walking down the street, to punch someone who appears weaker than themselves in the face and take their wallet. Generally, it’s a smaller set of people in society who do this sort of thing. With most people, even if you’re fairly defenseless, you’re safe.

    But sociopaths have a way of getting such conformists to call sociopaths to judge any demographic that can be clearly separated from the majority.

I.e., conformists can be easily tricked into supporting systems that target people unlike themselves. (Of course, everyone is diverse, and the “unlike themselves” category keeps expanding, until it includes Pastor Martin Niemoller in a category previously reserved for Jews.)

We brazen nonconformists can see and comprehend this, and it also fits with a true picture of reality: there is a baseline “morality” of the empath, produced by the guidance of mirror neurons. Without this pain, there wouldn’t have been that slight pressure over millions of years toward progress. There would never have been the institution of the jury.

    Even so, the jury keeps getting taken away.  The sociopaths are good at it.  They’re good at forming networks that forgive the abuse of power. 

    We can’t let them keep getting away with it.  But we also shouldn’t kill them (unless it’s clear self-defense, and 12 other possible empaths and sociopaths held to empath standards would agree), because then we become so similar to them that we lose the benefits of being different from them, and they blend in with us.

This is why the great defender of jury trials, John Lilburne, pleaded for the Levellers to spare King Charles’ life. They didn’t listen to him, and beheaded the king. Lilburne was a true empath, a true intellectual. He knew that a disgraced sociopathic ruler who no longer had power was a better example than a dead king: after all, wouldn’t future tyrants do whatever they could to escape judgment if they had screwed up?

    Ultimately, the urge to punish must be checked.  Getting non-office-seekers involved is a rational way to accomplish that.  It works.  It works for a reason that can be logically-examined and tested.

    Let’s work to restore jury trials to the USA.  That would be rational.  I’m not going to show you the math here.  I expect if you love math, you’ll do it yourself.

Suffice it to say, there will be people who have nonintuitive misconceptions of the math here. Suffice it to say, there will be sociopaths who don’t like the idea of justice here. There’s a great body of work out there; read it. Social order and market predictability depend on it.

  • http://juridicalcoherence.blogspot.com/ srdiamond

    Robert Wiblin,

    Here’s my submission on your offer–

    “Utilitarianism twice fails” ( http://tinyurl.com/bfcm89e )

    It should convince you that you are practicing moral realism despite denying it.

  • Pingback: Reason eating itself | Rival Voices