Suppose you are a great moral philosopher and you’ve figured out perfectly how to tell right from wrong. You have some time on your hands, and you want to use it to do good in the world. One good thing you might do would be to try to make people more moral by teaching them to be moral philosophers like you. Another good thing would be to combat one of the specific moral evils you’ve identified in your philosophizing, say drunk driving. You could achieve this by embarking on a campaign of persuasion in which you portray drunk driving as something that stupid losers do, as groups like SADD and MADD have done with what seems to be great success (it’s remarkable how fast drunk driving has gone from being cool to being powerfully uncool).
I never thought of moral philosophy as "hard" before, but on Jared Diamond's continuum of "difficult/soft science" versus "easy/hard science" it belongs at the difficult end. I would place it much farther along than sociology, for example, and nearer to palm-reading or dowsing (though those at least admit of falsifiability, even if that has had little effect on either field). It is very hard to do palm-reading or dowsing successfully, so many people concentrate their efforts elsewhere. A better example might be theology, which has often been intertwined with moral philosophy. If I told someone I had created a machine to assist people with theological calculations, I would be laughed at; I don't know what it would even mean to "operationalize" a theological concept. There is never going to be a theology machine, and I am similarly confident that there will never be one for moral philosophy. That would be a great loss for those less adept at moral philosophy if there were some way to demonstrate that some people are better at it than others, which I also do not believe will ever happen. Just as they currently have nothing to rely on but their own subjective impressions when deciding on the best name for their cutest-newborn-in-the-world, they will have to decide for themselves how to "do the right thing" rather than relying on the latest findings in the science of moral philosophy. If I am wrong and such a device is created, I declare myself in advance to be eating crow. I'd like to hear a time by which you think one will have been created.
Matthew, There is nothing wrong with being curious about people; it can be both fun and useful. The ax-murderer point wasn't meant as an insult; I just meant that at a certain level of misbehavior, curiosity is not likely to be your or anyone else's primary reaction. Nor, in my view, would it be a virtue if it were.
TGGP, The main point of your comment, as I see it, is that philosophy is hard. Even if you bought into the results of the dimly recalled philosopher I mentioned above, it certainly wouldn't equip you to answer every moral question. The whole project may eventually run out of rope. So there may be more than one thing that counts as moral, but that doesn't mean that everything does.
As far as your machine example is concerned, here's my best shot. Whenever you sincerely ask yourself "what should I do?" you are a morality machine. The very fact that you've asked yourself the question means that you think that thinking about it will lead to an answer that's more right than the alternatives. What else is it if not that? So I guess my best answer is that the machine would do what you at least aspire to do, but hopefully better: it would try to get to a conclusion that really does follow from the axioms and the evidence. The computer may not identify a single answer, either because there is residual uncertainty (which, if resolved, would point to a single answer), or because there really is more than one choice that follows from the axioms. But that's still a whole lot better than nothing. I think I would be happy to live in a world where everyone had bought into the axioms, exhausted what moral philosophy could teach them (eliminating the objectively immoral options), and then chosen among the remaining (moral) options according to taste or custom or whatever.
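A minimal sketch of that last idea, assuming Python and a single made-up axiom (the options and axiom below are invented purely for illustration, not anything proposed in the discussion): a machine that only eliminates the options inconsistent with the agreed axioms and leaves the survivors to taste or custom.

```python
# A toy "morality machine": each axiom is a predicate over candidate actions.
# The machine only eliminates options that violate an axiom; it does not
# pretend to pick a unique winner when several options survive.

def surviving_options(options, axioms):
    """Return the options consistent with every axiom."""
    return [o for o in options if all(axiom(o) for axiom in axioms)]

# Hypothetical inputs, purely for illustration.
options = [
    {"action": "gouge eyes for a penny", "harms_other": True,  "benefit_to_me": 0.01},
    {"action": "keep walking",           "harms_other": False, "benefit_to_me": 0.0},
    {"action": "donate the penny",       "harms_other": False, "benefit_to_me": -0.01},
]

axioms = [
    lambda o: not o["harms_other"],  # "the welfare of the other guy is your concern"
]

if __name__ == "__main__":
    # Several options may remain; choosing among them is left to taste or custom.
    for o in surviving_options(options, axioms):
        print(o["action"])
```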
Sorry to keep beating the same horse, David, but this one thing you said really bothers me:
Matthew, I don't find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them. . .
On the one hand, you seem to have a deep concern for morality, and for propagating moral behavior. On the other hand, you have no interest in understanding why some people are bullies (I'll ignore the "you can get a kick out of them" comment).
I would suggest that the lack of curiosity about human behavior in its more objectionable forms is quite likely to lead to a lack of effectiveness in your goal of reducing immoral behavior.
Matthew, I don't find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them, just substitute ax-murderers.
I find all the variations of human behavior interesting. It does not mean that I appreciate being bullied, or do not want the axe murderers locked up. I guess I simply don't find it helpful to take personal affront to reality. What is, is, and I find clear seeing more useful than judgmentalism.
David, I think most people agree to "be moral" or "abide by the rules of the game", but they don't actually all agree on what the rules are. To quote Bob Black, they have merely agreed to call the thing on which they are in agreement by a certain name: "good" or "moral" or "ethical". Robert LeFevre would agree unequivocally with your eye-gouging example (although, unlike Kant, he at least wouldn't prohibit you from lying to the man if he were a murderer looking for his prospective victim). But someone who believed deontologically in self-defense would say it is okay to do it if he attacks you and you have no better method of resisting him. A utilitarian has no way of knowing whether or not I am a "utility monster" for whom the smallest slight causes immense anguish that can only be assuaged by gouging out eyes, and might be okay with it if by gouging out the eyes I produce a penny of benefit for a billion people. A Rawlsian might (I haven't actually read Rawls, so I'm not sure) condone it if the person whose eyes I gouge is the happiest man on the planet and will remain so after I attack him, while I am the saddest man and will become happier by gouging. Vox Day would condone it if God told him to, arguing that it would be the moral equivalent of a computer programmer deleting some files. A communist might if the man were a reactionary counter-revolutionary enemy of the people, and the Yanomamo might just because killing people is very good for your reproductive fitness in their society and maybe this guy was from another village. All of them would consider themselves morally upright people. What would a moral machine of the kind I described before say? Probably not the Yanomamo conclusion, since they don't invent much. David, if you were both a great inventor and a great moral philosopher, how would your moral machine work? If someone came to me and said they had accepted some basic axioms but needed help applying them, I wouldn't know what kind of machine could do the job. It would probably just try to match each query with an axiom that seemed relevant, which wouldn't be much help if the number of axioms is small, and would often seem faulty to the user.
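For what it's worth, that naive matcher can be sketched in a few lines of Python (the axioms and query below are invented for illustration); its crudeness is exactly why it would often seem faulty to the user.

```python
# A toy version of the "match each query to a relevant-seeming axiom" machine:
# crude keyword overlap, nothing more.

def match_axiom(query, axioms):
    """Return the axiom sharing the most words with the query (None if no overlap)."""
    query_words = set(query.lower().split())
    best, best_overlap = None, 0
    for axiom in axioms:
        overlap = len(query_words & set(axiom.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = axiom, overlap
    return best

# Hypothetical axioms and query, purely for illustration.
axioms = [
    "the welfare of the other guy is in some sense your concern",
    "the same rules that apply to him apply to you",
]

if __name__ == "__main__":
    print(match_axiom("is it okay to gouge out his eyes for a penny of benefit?", axioms))
```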
How society deals with people with different conceptions of morality is another story. You could say that you know best and nuts to those who dissent, but that can be hard to implement. Having the members of society make a contractual agreement (a real one, not the made-up "social contract" that was never actually created) would seem a more workable solution, but that still isn't an "objective morality", and different groups of people would likely create different contracts (Kevin Carson and Keith Preston refer to this as "panarchy"). That would run into a problem with people born into the society (perhaps, like the Amish, they could be sent outside to see if they want to return) and others unable to make contractual decisions, but as the hubbub over the discount rate in the Stern Review shows, moral philosophy hasn't created a consensus on how we should take future generations into account.
I just felt like adding that despite my name-dropping in this post, I'm not an anarchist. Anarchy was the default (everything that exists at one time didn't, including government) and now states are everywhere, so it seems to be a losing strategy.
Matthew, I don't find bullies interesting at all. And if you think bullies are benign enough that you can get a kick out of them, just substitute ax-murderers.
TGGP, I take your point that your intuition that the bully is immoral doesn't prove anything. Let me try something else. A long time ago, someone told me about an effort by some philosopher to lay out the axioms that would be necessary to derive some general version of liberal Enlightenment morality. I don't recall what they were, but one of them would have to be that the welfare of the other guy is in some sense your concern; you're not allowed to gouge his eyes out if doing so would benefit you a penny's worth. That's an axiom, not a result derived from first principles. So if you run across a guy who doesn't buy into that, and thinks that it's OK for him to gouge out your eyes for a penny, he is not strictly speaking being immoral, because he has refused to be part of the game. He's the enemy of humanity and probably a psychopath, to be dealt with one way or another (by law, by psychiatry, or by being cajoled somehow into accepting the axioms), but not technically immoral. But it seems to me that this is not what matters in the real world. There aren't too many people, at least in successful societies, who explicitly reject the basic axioms. The people who matter are those who accept the basic axioms but are weak or inconsistent in implementing them. Moral education is about helping people be better at the implementation.
No, David, when people do things to me that I strongly wish they had not, I do not consider it objectively wrong, just as I don't consider people who tell me that Citizen Kane, Gone with the Wind or Lawrence of Arabia are good movies to be objectively wrong. I still have the same instincts that most people do, that because I dislike something it must be really bad, but just as I can reject the folk zoology that tells me animal species are platonic and unchanging, the folk physics that tells me relativity and quantum mechanics are nonsense, and the folk psychology that tells me we have free will, I can discard the folk morality that tells me my displeasure is somehow a reflection of the violation of a rule written in the heavens, or a reduction in the supply of "utils", rather than the product of a mind created by evolution to ensure the propagation of its genes.
If a bully took your lunch money, you would think he was wrong for having done so. Not just that you would have preferred if he hadn't, but that he shouldn't have on some sort of moral grounds which, even though not derived from first principles, are nevertheless real.
No, at this stage of my life my primary reaction to bullies that show up in my life is to view them as interesting specimens of human diversity and challenging interpersonal problems to solve. Of course there are also Matthew's conditioned reactions to being bullied, but those are interesting to observe as well. That doesn't mean that I don't stand up for myself, or avail myself of the available remedies, but I try not to take bullying personally.
It's not about morals, it's about cleaning the scales of emotional reactivity from your eyes so you can see the amazingness of the universe, especially the human social interaction aspects of the universe.
This kind of skepticism about objective morality is one that almost nobody takes seriously in practice. If a bully took your lunch money, you would think he was wrong for having done so. Not just that you would have preferred if he hadn't, but that he shouldn't have on some sort of moral grounds which, even though not derived from first principles, are nevertheless real. The essence of moral philosophy, as I see it, is nothing more than the recognition that the same rules that apply to him apply to you, and then working through the implications of that.
One rule of thumb to help distinguish whether or not something is "objective" is to see whether or not you could design a machine that would tell you. Under this standard, we could say that our sense data may be an accurate source of information but our "moral intuitions" are not (painting "lying is unethical" on a rock would not qualify since you are just hard-coding a conclusion you already came to). A chemist could make a machine that tells you the composition of chocolate vs vanilla ice cream, but it can't determine what tastes better.
We've already got machines able to do more mathematical computations than the average human being (though they can't tell you whether Euclidean or non-Euclidean geometries are correct). What kind of moral calculations could a machine make? If you assigned weights of utility to different things it could do some summation and rank different outcomes, but it can't by itself say what utilities exist, whether total or average utilitarianism is better, or of course whether utilitarianism is better or worse than deontology. Machines may be able to do more math in the future, but I cannot see how their ability to make moral calculations could become any greater then than it feasibly could be now.
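A minimal sketch of the kind of calculation being conceded here, assuming Python and invented weights and outcomes: the human supplies the utilities, and the machine contributes nothing but the arithmetic and the sort.

```python
# Toy utility ranking: the weights are supplied by a human; the machine only
# sums and sorts. It cannot tell you whether these weights, or utilitarianism
# itself, are the right way to value anything.

def rank_outcomes(outcomes, weights):
    """Score each outcome as a weighted sum of its features, highest first."""
    scored = {
        name: sum(weights.get(feature, 0.0) * value
                  for feature, value in features.items())
        for name, features in outcomes.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical inputs, purely for illustration.
weights = {"lives_saved": 10.0, "money_spent": -1.0}
outcomes = {
    "build guardrail": {"lives_saved": 2.0, "money_spent": 5.0},
    "do nothing":      {"lives_saved": 0.0, "money_spent": 0.0},
}

if __name__ == "__main__":
    for name, score in rank_outcomes(outcomes, weights):
        print(f"{name}: {score:.1f}")
```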
Under this standard, morality may be even less objective than aesthetics. I presume some of you have already heard of this program ( http://www.israel21c.org/bi... ) that takes pictures of faces and makes them, in the opinion of many, more pleasing to the eye. Since (I presume) it cannot be applied to a picture over and over indefinitely, it would regard that final state as maximally attractive, and the "distance" between an original picture and its altered version could be a sort of measure of unattractiveness. Parents will still be likely to insist their newborns are the most beautiful things in the world, though. Could any similar persuasive but not final judgments about morality be determined by machines? I doubt it.
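A sketch of that distance-as-unattractiveness idea, with the beautifier treated as a hypothetical black box (the `beautify` stub below is a placeholder, not the actual program at the link):

```python
# Hypothetical illustration: score a face by how far it sits from its
# "beautified" fixed point. `beautify` is a stand-in for the real program,
# which is not available here (identity function, so every score is 0.0).

import numpy as np

def beautify(image):
    return image  # placeholder for the face-beautification program

def unattractiveness(image, max_iters=10):
    """Distance between the original and its converged beautified version."""
    current = np.asarray(image, dtype=float)
    original = current.copy()
    for _ in range(max_iters):
        improved = np.asarray(beautify(current), dtype=float)
        if np.allclose(improved, current):
            break
        current = improved
    return float(np.linalg.norm(original - current))

if __name__ == "__main__":
    face = np.random.rand(64, 64)   # stand-in for a photo
    print(unattractiveness(face))   # 0.0 with the identity placeholder
```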
David, it sounds like your complaint is that there is too little moral education that you approve of. But if that is the only sort you are willing to countenance, how do you expect to learn anything?
Mathematics is a useful construct involving manipulating non-existent things that we use to better understand things that do exist. If you aren't going to be engaging in certain sorts of behavior (say, if you're a Piraha) then mathematics isn't much more useful than knowing Sanskrit. A system of morality is only useful given that we want to be moral, which is itself a moral assumption. Moral philosophers haven't been able to agree on the use of a common moral system as they have with mathematics and it is extremely doubtful that they will ever do so, and without making some moral assumptions it cannot be said that it would be good or bad if they were to do so.
David, I tend to believe that man is the rationalizing animal rather than the rational animal. We are so often fooled into believing that our particular beliefs and community of co-believers are right, and that everyone else is wrong, whether we call them "evil" or "irrational".
I am very dubious of the prospects of a mind-made moral framework based on supposed "rational" grounds being markedly superior to mind-made moral frameworks based on any other grounds.
Then again, I actually believe there are things that are more important than morality. . .
rcriii and Bob, You are right that a big part of what churches do is something other than the kind of manipulative persuasion I referred to in the post. At least some of them try to ground their hearers in a faith that will allow them to get to the right (by the lights of the church) answer on their own. The problem, of course, is that this is only a good thing if you think that faith is a good way to get to moral truth, which I don't. But it is both noteworthy and unfortunate that religious types at least do something like this and Enlightenment types generally don't.
Tim, There are some papers by Akerlof and Kranton along the same lines.
Doug, TGGP, and Matthew, Certainly moral philosophy is not necessary for other-regarding behavior; it explains none of it in animals and probably little of it in humans. It evolved somehow (evolutionary psychologists are making progress in figuring out exactly how), and there it is. But this does not mean that there is no such thing as objective moral philosophy. People can and do do good, or refrain from doing bad, even when their inclination would be to do otherwise, because they have decided that a moral principle compels it. We now know that such moral principles cannot be ultimately grounded in pure reason; they need some axioms to get the whole project off the ground. But the axioms can be pretty modest and the philosophizing from there can be very objective. BTW, I've heard Dennett say that morality may be in some sense universal just like arithmetic is. See http://meaningoflife.tv/.
Anna, Welcome! And interesting point about what goes on in juvie.
Stuart, This might be another reason why the (constrained) optimal amount of moral education is low. I still think the actual amount is even lower than that.
This post is useless. And it brings me to Robin's point and one that I want to generalize. Why is there a tradition in academic philosophy that views Ad Hominem arguments as infra dig? Particularly in moral philosophy, I would think these arguments ought to carry the most weight. Sadly, most moral philosophers are crusty academics who live such impoverished lives. I know. I almost was one.
Assume, if you will, my good philosopher, that there are moral truths. What would lead you to believe that the institution of academic moral philosophy is aligned so as to find them? What incentives exist in this branch of philosophy that make it so unique a moral enterprise? As opposed to, say, working at Google? Or even at a Dunkin Donuts? I see too much membership signalling going on in academic moral philosophy for anyone to want to get at the truth. How does getting a paper published in Ethics or Mind--wow, that Kolodny piece on Rationality really solved THAT problem--get you closer to the moral truth? Unfortunately, it doesn't.
The suspense surrounding Derek Parfit's new book is astounding. All the priests, high school guidance counselors, policy makers, and mothers against drunk driving are eagerly awaiting its arrival, knowing full well that the moral instruction inside will lead them closer to the truth.
I agree with several of the other commenters. Morality is a human social construct, and there is no objective scale of right and wrong. Nonetheless, there are some moralities which are vastly preferable to others for most of us. . .