It’s often taken as a disadvantage if your moral values are a hodgepodge of impressions, rules of thumb, and narrow case studies. In fact, universality seems the most desirable attribute of any moral system, with coherence hot on its tail.
Even extreme relativism exalts “there is no truth” to the level of a universal principle, and “realist” moral systems resolve the issue by holding that moral values are eternal, while how they balance and how they apply depends on circumstances in the world.
I found myself defending this universality and coherence in recent comments, where I argued that future progress in understanding the brain will cause us to question all our assumptions about free will. However, I didn’t yet want to update my own moral system, because the implications hadn’t been organized into anything coherent.
Why did I feel like that? Maybe there’s a natural human urge to make our moral values clear and absolute, while admitting reality is flawed. We’d much prefer “murder is wrong, but may be allowed to prevent other crimes” to “murder is quite likely to be wrong”, even if they end up being similar in practice.
A coherent system also has the advantage of being much easier to preach to others (and if we can’t convince anyone of the validity of our system, then we feel, probably rightly, that there’s something wrong with it). It also keeps us more “honest”: if we already fail to live up to clear and universal principles, then weak and contingent values might leave us open to rationalizing anything, and we may just end up picking and choosing our mini-values to suit our momentary needs.
Finally, one last thing keeps me from moving to a less clear system: I don’t want to be “trapped” into values I despise. I want to see all the implications of any moral value before I commit to it. If reduced free will leads directly, say, to Nazi values, I want to know this before endorsing it.
That’s my strongest and most irrational reason, so it might be worth digging into it a bit more. First, note the hidden assumption: that my moral system will eventually be coherent, so I can’t assume free will is wrong now and then reject the consequences of that later. Second, I seem open to the idea that my moral values may evolve a bit; indeed, I want them to ‘improve’. But I am repelled by any course of action that would completely change them, and this causes me to judge some issues not only on their merits but also on what their consequences might be for my values.
And, as in many situations, just knowing these biases doesn’t help me overcome them at all.
It’s all a load of rubbish!! What makes humans think they can find answers to any questions at all is beyond me. We are but frogs in a pond who haven’t seen the ocean, have we? All this talk of morality is not something we can summarise; this chapter should be left to God Almighty. He will come and kick everyone’s arse one day and put justice into real action. Then morality will be understood. For now, let’s just stick to the basic Bible as a guide to what our behaviour should be.
In philosophy you never know if you’re wrong, and there are no costs. But that’s precisely why it’s strange that philosophers never explore these particular alternatives. They’re under pressure to become known, and an opening like “murder is quite likely to be wrong” would attract attention.
It can certainly be built up into a coherent system, too, so that can't be the problem. So what is?