Morality or Manipulation?

Suppose you are a great moral philosopher and you’ve figured out perfectly how to tell right from wrong.  You have some time on your hands, and you want to use it to do good in the world.  One good thing you might do would be to try to make people more moral by teaching them to be moral philosophers like you.  Another good thing would be to combat one of the specific moral evils you’ve identified in your philosophizing, say drunk driving.  You could achieve this by embarking on a campaign of persuasion in which you portray drunk driving as something that stupid losers do, as groups like SADD and MADD have done with what seems to be great success (it’s remarkable how fast drunk driving has gone from being cool to being powerfully uncool).

The socially optimal division of your time between moral education and manipulative persuasion will depend on a lot of things: how good you are at each activity, how many other people are doing each of them, how effective each of them is, and so on.  But you may have private incentives to engage in too little moral education.  The persuasion campaign is likely to have observable results, whereas you won’t easily be able to see the good effects of having more moral philosophers running around.  Also, the benefits of persuasion are likely to be more immediate, whereas a lot of the benefit of moral education may not be realized until you are gone from the scene. 

What brought all this on is the observation that there seems to be almost none of what could be called moral education.  No one buys airtime on TV and uses it to encourage people to universalize their maxims; even philosophically sophisticated advocates of good causes almost invariably go with some version of the SADD/MADD persuasion approach.  It may be that the socially optimal amount of moral education is just very low, but I have a hard time believing that.  I am inclined to believe that under-investment is a serious problem.  If I’m right about this, then it may be a big source of bias: people have too little skill at purging bias from their moral judgments because they’ve gotten too little moral education in the first place; there aren’t that many philosophers out there, and even the ones there are don’t spend their time teaching philosophy.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Is there any evidence that people who better understand moral philosophy are actually more moral? I’ve asked moral philosophers this question and haven’t gotten very encouraging answers. Also, ethics books are more often stolen from libraries.

  • Stuart Armstrong

    And maybe: people just won’t listen. They have so many different moral codes, and we have so little respect for top-down, prescriptive moralising, that maybe the best you can do is find a cause many people’s values congeal around, and SADD it up.

    (One could also make the argument that those who always know perfectly how to tell right from wrong are very far from being great moral philosophers :-)

  • David J. Balan

    Robin, The moral philosopher in my example is one who knows right from wrong, wants to avoid wrong (some would say that this necessarily follows from knowing, but I disagree), and also feels inclined to go do some kind of social action. Morally sophisticated people who are evil or indifferent don’t count. The point is that even restricting the universe in this way, they all seem to choose persuasion over moral education.

    Stuart, Your comment suggests that the moral philosopher is smart to go with the SADD/MADD approach because it is always more effective. This may be right, but I still suspect that there is significant under-investment in moral education.

  • rcriii

    David, I think that you are missing a huge source of moral education – churches. They think that they are in fact doing just what you suggest.

    In fact there are any number of organizations with fairly broad moral missions – Boy Scouts, Rotary, etc.

    Maybe the problem is not that no one is willing to embark on moral teaching, but that few can agree on what a broad program would be.

  • http://neighbors.webcrossing.com/tlundeen Tim Lundeen

    I would recommend a recent book, Made to Stick, that talks about this kind of persuasion. Excellent book, btw, the best I’ve read on how to communicate effectively.

    They make the point that people want to consider themselves part of their group, and that the most effective way to change behavior is to change the norms for belonging to a group. The “friends don’t let friends drive drunk” campaign, as you mention, is a good example. Another one is “for the love of the game” (e.g., we should all have good sportsmanship).

    The book also makes the point that the best teachers use stories to make their points.

  • http://www.pellucid.org Bob Knaus

    rcriii makes a good point. Most religious and many civic organizations have “moral improvement” high on their agenda.

    I’ll bet the average reader of this blog, attending a church service or AA meeting or boy scout campout, would think “These people aren’t being educated, they’re being manipulated.” Maybe, but neither the attendees nor the facilitators feel that way. If they did, they wouldn’t be at the gatherings.

    What a sophisticated person sees as manipulation may well be genuinely educational for someone simpler. I see it as the “market force” behind all the different brands of religious and civic institutions. Some will appeal to broad audiences, others are more narrow. Thus we have more Baptists than Quakers.

  • Doug S.

    There’s a problem with this assumption. Objective morality has about as much substance as objective aesthetics. Humans evolved a sense of morality because individuals in a society in which most people followed a useful moral code were more likely to survive than individuals in a society in which people did not. (Consider the “Tit-For-Tat” strategy in the iterated Prisoner’s Dilemma.) However, moral emotions (guilt, self-righteousness, outrage, etc.) are not necessarily tied to anyone’s idea of correct abstract moral principles. One can certainly form a moral philosophy that contradicts other moral philosophies and successfully persuade others to follow it, but the objective truth of a moral philosophy can’t be proven any more than one can prove the objective truth of a mathematical axiom or the objective beauty of one’s favorite piece of music.
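    The Tit-For-Tat strategy mentioned above is simple enough to sketch in a few lines. The following is an illustrative Python sketch (the payoff numbers are the standard textbook values, not anything from this discussion): cooperation is individually exploitable in one round, yet reciprocity does well over repeated play.

```python
# Standard Prisoner's Dilemma payoffs: (my move, their move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each strategy sees only the opponent's past moves."""
    hist_a, hist_b = [], []  # what each player has seen the *other* do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then retaliates: (9, 14)
```

    Against itself Tit-For-Tat sustains full cooperation; against a pure defector it loses only the first round, which is the usual illustration of how reciprocal "moral" behavior can be evolutionarily stable.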

  • josh

    Robin,
    Perhaps we should be more likely to believe that stealing is ethical.

  • http://en.wikipedia.org/wiki/Kohlberg%27s_stages_of_moral_development Anna Obraztsova

    “Moral education” is essentially what at least one group of citizens receives almost mandatorily: juvenile delinquents, or those identified by the school system as “difficult” and referred for psychological intervention. In fact, a friend runs group “moral” therapy seminars for 17-year-olds, during which traditional Kohlberg dilemmas are discussed, with the goal of advancing moral reasoning to or past the Conventional level (for example, the “Heinz Dilemma”: should a husband who does not have enough money to buy the medicine that will save his dying wife break into a pharmacy at night?). Although this does not ensure a morally sophisticated humanity in general, it attempts to target, and improve, those who have demonstrated lapses. Of course there is no guarantee that the Decider is the moral wizard you described.

  • Stuart Armstrong

    But the question is, why do some moral agents spread the word (churches, Richard Dawkins, communists, etc…) while others don’t?

    The ones who go out and try to moralise others seem to be the ones with a whole coherent program. Once you’ve accepted the core tenets of the program (God exists and that’s important; no he doesn’t and that’s important; the communist utopia is inevitable…) then it’s easy for you to swallow the rest.

    But for most people with a moral system, you’d have to convince others of the morality of each and every idea in it, nearly independently (and most moral systems are not very coherent to outsiders). So it makes sense to SADD one aspect at a time rather than to attempt general persuasion.

  • TGGP

    I’m with Doug S on this one. You can’t derive ought from is, so reality has no input. What ought to be doesn’t affect what is (unless you believe that God knows what ought to be and causes the world to be that way), so we have something like Dennett’s epiphenomenal gremlins. What we know of the world can be explained without the existence of objective morals, and postulating their existence adds nothing to our understanding, so by Occam’s Razor it is sensible not to believe they exist.

  • Matthew

    I agree with several of the other commenters. Morality is a human social construct, and there is no objective scale of right and wrong. Nonetheless, there are some moralities which are vastly preferable to others for most of us. . .

  • Rue Des Quatre Vents

    This post is useless. And it brings me to Robin’s point and one that I want to generalize. Why is there a tradition in academic philosophy that views Ad Hominem arguments as infra dig? Particularly in moral philosophy, I would think these arguments ought to carry the most weight. Sadly, most moral philosophers are crusty academics who live such impoverished lives. I know. I almost was one.

    Assume, if you will my good philosopher, that there are moral truths. What would lead you to believe that the institution of academic moral philosophy is aligned so as to find them? What incentives exist in this branch of philosophy that make it so unique a moral enterprise? As opposed to, say, working at Google? Or even at a Dunkin’ Donuts? I see too much membership signalling going on in academic moral philosophy for anyone to want to get at the truth. How does getting a paper published in Ethics or Mind–wow, that Kolodny piece on Rationality really solved THAT problem–get you closer to the moral truth? Unfortunately, it doesn’t.

    The suspense surrounding Derek Parfit’s new book is astounding. All the priests, high school guidance counselors, policy makers, and mothers against drunk driving are eagerly awaiting its arrival, knowing full well that the moral instruction inside will lead them closer to the truth.

  • David J. Balan

    rcriii and Bob, You are right that a big part of what churches do is something other than the kind of manipulative persuasion I referred to in the post. At least some of them try to ground their hearers in a faith that will allow them to get to the right (by the lights of the church) answer on their own. The problem, of course, is that this is only a good thing if you think that faith is a good way to get to moral truth, which I don’t. But it is both noteworthy and unfortunate that religious types at least do something like this and Enlightenment types generally don’t.

    Tim, There are some papers by Akerlof and Kranton along the same lines.

    Doug, TGGP, and Matthew, Certainly moral philosophy is not necessary for other-regarding behavior; it explains none of it in animals and probably little of it in humans. It evolved somehow (evolutionary psychologists are making progress in figuring out exactly how), and there it is. But this does not mean that there is no such thing as objective moral philosophy. People can and do do good, or refrain from doing bad, even when their inclination would be to do otherwise, because they have decided that a moral principle compels it. We now know that such moral principles cannot be ultimately grounded in pure reason; they need some axioms to get the whole project off the ground. But the axioms can be pretty modest, and the philosophizing from there can be very objective. BTW, I’ve heard Dennett say that morality may be in some sense universal just like arithmetic is. See http://meaningoflife.tv/.

    Anna, Welcome! And interesting point about what goes on in juvie.

    Stuart, This might be another reason why the (constrained) optimal amount of moral education is low. I still think the actual amount is even lower than that.

  • Matthew

    David,

    I tend to believe that man is the rationalizing animal rather than the rational animal. We are so often fooled into believing that our particular beliefs and community of co-believers are right, and that everyone else is wrong, whether we call them “evil” or “irrational”.

    I am very dubious of the prospects of a mind-made moral framework based on supposed “rational” grounds being markedly superior to mind-made moral frameworks based on any other grounds.

    Then again, I actually believe there are things that are more important than morality. . .

  • TGGP

    Mathematics is a useful construct involving manipulating non-existent things that we use to better understand things that do exist. If you aren’t going to be engaging in certain sorts of behavior (say, if you’re a Piraha) then mathematics isn’t much more useful than knowing Sanskrit. A system of morality is only useful given that we want to be moral, which is itself a moral assumption. Moral philosophers haven’t been able to agree on the use of a common moral system as they have with mathematics and it is extremely doubtful that they will ever do so, and without making some moral assumptions it cannot be said that it would be good or bad if they were to do so.

  • rcriii

    David, it sounds like your complaint is that there is too little moral education that you approve of. But if that is the only sort you are willing to countenance, how do you expect to learn anything?

  • TGGP

    One rule of thumb to help distinguish whether or not something is “objective” is to see whether or not you could design a machine that would tell you. Under this standard, we could say that our sense data may be an accurate source of information but our “moral intuitions” are not (painting “lying is unethical” on a rock would not qualify since you are just hard-coding a conclusion you already came to). A chemist could make a machine that tells you the composition of chocolate vs vanilla ice cream, but it can’t determine what tastes better.

    We’ve already got machines able to do more mathematical computations than the average human being (though they can’t tell you whether Euclidean or non-Euclidean geometries are correct). What kind of moral calculations could a machine make? If you assigned weights of utility to different things, it could do some summation and rank different outcomes, but it can’t by itself say what utilities exist, whether total or average utilitarianism is better, or, of course, whether utilitarianism is better or worse than deontology. While in the future machines may be able to do more math, I cannot see how their ability to make moral calculations would become greater in the future than it feasibly could be now.

    Under this standard, morality may be even less objective than aesthetics. I presume some of you have already heard of this program ( http://www.israel21c.org/bin/en.jsp?enDispWho=Articles%5El1543&enPage=BlankPage&enDisplay=view&enDispWhat=object ) that takes pictures of faces and makes them, in the opinion of many, more pleasing to the eye. Since (I presume) it cannot be applied to a picture over and over, it would regard that final state as maximally attractive, and the “distance” between an original picture and its altered version could be a sort of measure of unattractiveness. Parents will still be likely to insist their newborns are the most beautiful things in the world, though. Could any similar persuasive but not final judgments about morality be determined by machines? I doubt it.
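    The limited "moral calculator" TGGP describes, which can sum and rank outcomes only after a human supplies the utility weights, can be made concrete with a short illustrative Python sketch (the outcomes, features, and weights below are invented examples, not anything from the discussion):

```python
# A "moral calculator" of the limited kind described above: given utility
# weights that a human must supply, it can rank outcomes by total utility,
# but it cannot say where the weights come from, or whether summing them
# is even the right aggregation rule.
def rank_outcomes(outcomes, weights):
    """outcomes: {name: {feature: amount}}; weights: {feature: utility per unit}.
    Returns outcome names sorted from highest to lowest total weighted utility."""
    def total(features):
        return sum(weights.get(f, 0) * amount for f, amount in features.items())
    return sorted(outcomes, key=lambda name: total(outcomes[name]), reverse=True)

# Hypothetical policy outcomes described by their features.
outcomes = {
    "policy_a": {"lives_saved": 2, "dollars_spent": -1000},
    "policy_b": {"lives_saved": 1, "dollars_spent": -100},
}
# The contentious step is choosing these numbers, not doing the arithmetic.
weights = {"lives_saved": 1_000_000, "dollars_spent": 1}

print(rank_outcomes(outcomes, weights))  # ['policy_a', 'policy_b']
```

    All the moral content lives in the `weights` dictionary that the user hands in; the machine itself only does the bookkeeping, which is exactly the limitation the comment points to.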

  • David J. Balan

    This kind of skepticism about objective morality is one that almost nobody takes seriously in practice. If a bully took your lunch money, you would think he was wrong for having done so. Not just that you would have preferred if he hadn’t, but that he shouldn’t have on some sort of moral grounds which, even though not derived from first principles, are nevertheless real. The essence of moral philosophy, as I see it, is nothing more than the recognition that the same rules that apply to him apply to you, and then working through the implications of that.

  • Matthew

    If a bully took your lunch money, you would think he was wrong for having done so. Not just that you would have preferred if he hadn’t, but that he shouldn’t have on some sort of moral grounds which, even though not derived from first principles, are nevertheless real.

    No, at this stage of my life my primary reaction to bullies that show up in my life is to view them as interesting specimens of human diversity and challenging interpersonal problems to solve. Of course there are also Matthew’s conditioned reactions to being bullied, but that’s also something interesting to observe. That doesn’t mean that I don’t stand up for myself, or avail myself of the available remedies, but I try not to take bullying personally.

    It’s not about morals, it’s about cleaning the scales of emotional reactivity from your eyes so you can see the amazingness of the universe, especially the human social interaction aspects of the universe.

  • TGGP

    No, David, when people do things to me that I strongly wish they had not, I do not consider it objectively wrong, just as I don’t consider people who tell me that Citizen Kane, Gone with the Wind or Lawrence of Arabia are good movies to be objectively wrong. I still have the same instincts that most people do, that because I dislike something it must be really bad, but just as I can reject the folk zoology that tells me animal species are platonic and unchanging, the folk physics that tells me relativity and quantum mechanics are nonsense, and the folk psychology that tells me we have free will, I can discard the folk morality that my displeasure is somehow a reflection of the violation of a rule written in the heavens, or of a reduction in the supply of “utils”, rather than the product of a mind created by evolution to ensure the propagation of its genes.

  • David J. Balan

    Matthew, I don’t find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them, just substitute ax-murderers.

    TGGP, I take your point that my intuition that the bully is immoral doesn’t prove anything. Let me try something else. A long time ago, someone told me about an effort by some philosopher to lay out the axioms that would be necessary to derive some general version of liberal Enlightenment morality. I don’t recall what they were, but one of them would have to be that the welfare of the other guy is in some sense your concern; you’re not allowed to gouge his eyes out if doing so would benefit you a penny’s worth. That’s an axiom, not a result derived from first principles. So if you run across a guy who doesn’t buy into that, and thinks that it’s OK for him to gouge out your eyes for a penny, he is not strictly speaking being immoral, because he has refused to be part of the game. He’s the enemy of humanity and probably a psychopath, to be dealt with one way or another (by law, by psychiatry, or by being cajoled somehow into accepting the axioms), but not technically immoral. But it seems to me that this is not what matters in the real world. There aren’t too many people, at least in successful societies, who explicitly reject the basic axioms. There are people who accept the basic axioms but are weak or inconsistent in implementing them. Moral education is about helping people be better at the implementation.

  • TGGP

    David, I think most people agree to “be moral” or “abide by the rules of the game”, but they don’t actually all agree on what the rules are. To quote Bob Black, they have merely agreed to call the thing on which they are in agreement by a certain name: “good” or “moral” or “ethical”. Robert LeFevre would agree unequivocally with your eye-gouging example (although at least, unlike Kant, he wouldn’t prohibit you from lying to the man if he is a murderer looking for his prospective victim). But someone who deontologically believed in self-defense would say that it is okay to do it if he attacks you and you have no better method of resisting him; a utilitarian is unable to know whether or not I am a “utility monster” for whom the smallest slight causes immense anguish that can only be assuaged by gouging out eyes, and might be okay with it if by gouging out the eyes I cause a penny of benefit for a billion people; a Rawlsian might (I haven’t actually read Rawls, so I’m not sure) condone it if the person whose eyes I gouge is the happiest man on the planet and will remain so after I attack him, while I am the saddest man and will become happier by gouging; Vox Day would if God told him to, arguing that it would be the moral equivalent of a computer programmer deleting some files; a communist might if the man were a reactionary counter-revolutionary enemy of the people; and the Yanomamo might just because killing people is very good for your reproductive fitness in their society and maybe this guy was from another village. All of them would consider themselves morally upright people. What would a moral machine of the kind I described before say? Probably not the Yanomamo conclusion, since they don’t invent much. David, if you were both a great inventor and a great moral philosopher, how would your moral machine work? 
    If someone came to me and said they had accepted some basic axioms but needed help applying them, I wouldn’t know what kind of machine could do the job. It would probably just try to match each query with an axiom that seemed relevant, which wouldn’t be much help if the number of axioms is small, and would often seem faulty to the user.

    How society deals with people with different conceptions of morality is another story. You could say that you know best and nuts to those who dissent, but that can be hard to implement. Having the members of society make a contractual agreement (a real one, not the made up “social contract” that was never actually created) would seem a more workable solution, but that still isn’t an “objective morality” and different groups of people would likely create different contracts (Kevin Carson and Keith Preston refer to this as “panarchy”). That would run into a problem with people born into the society (perhaps like the Amish they could be sent outside to see if they want to return) and others unable to make contractual decisions, but as the hubbub over the discount rate in the Stern Report shows, moral philosophy hasn’t created a consensus on how we should take into account future generations.

    I just felt like adding that despite my name-dropping in this post, I’m not an anarchist. Anarchy was the default (everything that exists at one time didn’t, including government) and now states are everywhere, so it seems to be a losing strategy.

  • Matthew

    Matthew, I don’t find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them, just substitute ax-murderers.

    I find all the variations of human behavior interesting. It does not mean that I appreciate being bullied, or do not want the axe murderers locked up. I guess I simply don’t find it helpful to take personal affront at reality. What is, is, and I find clear seeing more useful than judgmentalism.

  • Matthew

    Sorry to keep beating the same horse David, but this one thing you said really bothers me:

    Matthew, I don’t find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them. . .

    On the one hand, you seem to have a deep concern for morality, and for propagating moral behavior. On the other hand, you have no interest in understanding why some people are bullies (I’ll ignore the “you can get a kick out of them” comment).

    I would suggest that the lack of curiosity about human behavior in its more objectionable forms is quite likely to lead to a lack of effectiveness in your goal of reducing immoral behavior.

  • David J. Balan

    Matthew, There is nothing wrong with being curious about people, it can be both fun and useful. The ax-murderer point wasn’t meant as an insult, I just meant that at a certain level of misbehavior interestedness is not likely to be your or anyone else’s primary reaction. Nor, in my view, would it be a virtue if it were.

    TGGP, The main point of your comment, as I see it, is that philosophy is hard. Even if you bought into the results of the dimly recalled philosopher I mentioned above, it certainly wouldn’t equip you to answer every moral question. The whole project may eventually run out of rope. So there may be more than one thing that counts as moral, but that doesn’t mean that everything does.

    As far as your machine example is concerned, here’s my best shot. Whenever you sincerely ask yourself “what should I do?” you are a morality machine. The very fact that you’ve asked yourself the question means that you think that thinking about it will lead to an answer that’s more right than the alternatives. What else is it if not that? So I guess my best answer is that the machine would do what you at least aspire to do, but hopefully better: it would try to get to a conclusion that really does follow from the axioms and the evidence. The computer may not identify a single answer, either because there is residual uncertainty (which, if resolved, would point to a single answer), or because there really is more than one choice that follows from the axioms. But that’s still a whole lot better than nothing. I think I would be happy to live in a world where everyone had bought into the axioms, exhausted what moral philosophy could teach them (eliminating the objectively immoral options), and then chosen among the remaining (moral) options according to taste or custom or whatever.

  • TGGP

    I never thought of moral philosophy as “hard” before, but it would be placed on that end of the continuum in terms of Jared Diamond’s “difficult/soft science” vs “easy/hard science”. I would place it much farther along than sociology, for example, and nearer to palm-reading or dowsing (though those at least entail falsifiability, however little effect that has had on the field). It is very hard to successfully do palm-reading or dowsing, so many people concentrate their efforts elsewhere. A better example might be theology, which has often been intertwined with moral philosophy. If I told someone I had created a machine to assist people with theological calculations, I would be laughed at. I don’t know what it would mean to “operationalize” a theological concept. There is never going to be a theology machine, and I am similarly confident that there will never be one for moral philosophy. That would be a great loss for those who are less adept at moral philosophy if there were some way to demonstrate that some people are better at it than others, which I also do not believe will ever happen. Just as they currently have nothing to rely on but their own subjective impressions when deciding what the best name is for their cutest-newborn-in-the-world, they will have to decide for themselves how to “do the right thing” rather than relying on the latest findings in the science of moral philosophy. If I am wrong and such a device is created, I declare myself in advance to be eating crow. I’d like to hear a time by which you think one will have been created.