Discussion about this post

Overcoming Bias Commenter:

I never thought of moral philosophy as "hard" before, but it would fall at that end of the continuum in Jared Diamond's sense of "difficult/soft science" vs "easy/hard science". I would place it much farther out than sociology, for example, and nearer to palm-reading or dowsing (though those at least entail falsifiability, which has had little effect on either field). It is very hard to successfully do palm-reading or dowsing, so many people concentrate their efforts elsewhere.

A better example might be theology, which has often been intertwined with moral philosophy. If I told someone I had created a machine to assist people with theological calculations, I would be laughed at. I don't know what it would even mean to "operationalize" a theological concept. There is never going to be a theology machine, and I am similarly confident that there will never be one for moral philosophy.

That is a loss for those who are less adept at moral philosophy, who could benefit if there were some way to demonstrate that some people are better at it than others, which I also do not believe will ever happen. Just as they currently have nothing to rely on but their own subjective impressions when deciding on the best name for their cutest-newborn-in-the-world, they will have to decide for themselves how to "do the right thing" rather than relying on the latest findings in the science of moral philosophy. If I am wrong and such a device is created, I declare in advance that I will eat crow. I'd like to hear a time by which you think one will have been created.

Overcoming Bias Commenter:

Matthew, there is nothing wrong with being curious about people; it can be both fun and useful. The ax-murderer point wasn't meant as an insult. I just meant that at a certain level of misbehavior, curiosity is not likely to be your or anyone else's primary reaction. Nor, in my view, would it be a virtue if it were.

TGGP, The main point of your comment, as I see it, is that philosophy is hard. Even if you bought into the results of the dimly recalled philosopher I mentioned above, it certainly wouldn't equip you to answer every moral question. The whole project may eventually run out of rope. So there may be more than one thing that counts as moral, but that doesn't mean that everything does.

As far as your machine example is concerned, here's my best shot. Whenever you sincerely ask yourself "what should I do?" you are a morality machine. The very fact that you've asked yourself the question means that you think thinking about it will lead to an answer that's more right than the alternatives. What else is it, if not that? So my best answer is that the machine would do what you at least aspire to do, but hopefully better: it would try to reach a conclusion that really does follow from the axioms and the evidence. The computer may not identify a single answer, either because there is residual uncertainty (which, if resolved, would point to a single answer) or because there really is more than one choice that follows from the axioms. But that's still a whole lot better than nothing. I think I would be happy to live in a world where everyone had bought into the axioms, exhausted what moral philosophy could teach them (eliminating the objectively immoral options), and then chosen among the remaining (moral) options according to taste or custom or whatever.

