26 Comments

The idea of an error rate requires the existence of an objective measure of accuracy. If there's no objectively right answer, there can be no real error.

Which would mean that error is simply how far your beliefs/actions deviate from your personal moral values. In which case, the least error-prone morality is that everyone should do whatever they happen to do regardless of anything, since your error will be zero. Or that you're always right no matter what you do, though that doesn't make other people moral.

TLDR: thinking of morality in terms of minimizing error just doesn't work.


Robin, I wasn't trying to say that you didn't mean your clarification. It just doesn't square well with what you say in other places of the post.

But that wasn't my main worry anyway. What do you think about the non-moral character of the pencil case and the other cases you base the minimal principle on? Shouldn't the simple cases we're basing the moral principle on be moral cases that we'd have moral intuitions about?


Richard, Joe usually has the intuition that it is morally better for Joe to get what he wants, if no one else cares and no special considerations apply, but Joe also usually has the intuition that it is morally better for Ann to get what she wants, if no one else cares and no special considerations apply.

Unnamed, a simple pattern expressed in terms of complex creatures can still be a simple pattern. The pattern that exactly matches the data is "simple" in some sense, but not in the relevant curve-fitting sense.

Josh, I really meant my clarification; I'm talking primarily about goodness. But once we do talk about rightness, a candidate simple comprehensive pattern there is that our intuition is usually that the acts that increase goodness most are the most right.


Robin - I don't see how your addition addresses my second concern. You write: "The simple pattern I see in this is that outcome goodness is increasing in how much each person wants that outcome." But the limited data you point to are equally consistent with the simple pattern that outcome goodness is increasing in how much the agent wants that outcome. So that's no explanation at all of why you prefer the former pattern.


If the direction of causality is that people want something because they judge it to be good, then your rule is essentially deferring to the judgments of others about what is good. Do you agree that your rule might be doing this, but just not see the problem?

One reason why this looks like a problem to me is that it means that you don't have much of an explanation or moral theory - you're not reducing goodness to anything more basic, just giving a rough cue for identifying it. A related issue is that your curve may only appear to be simple because you're letting people's minds do the complex work. Finally, if you're going to incorporate people's judgments about what is good, I don't see a justification for picking out this subset of those judgments. Why not go all the way and make your rule "usually the things that people think are good really are good"? That pattern shows a similarly strong relationship (for the data points that you've chosen to focus on this rule is extremely highly correlated with your "wanting" rule), and it's not clear which rule is simpler (if wanting something requires judging it to be good plus something else, then perhaps judging to be good is simpler).


You provide an interesting perspective on moral theory here. I raise an issue with the methodology over in the comments at the Experimental Philosophy blog. But I also have a worry about how you proceed even assuming that methodology.

You propose to look at simple cases, e.g. picking up a pencil one has dropped, and think they support the minimal morality principle of doing what one thinks will get one what one (and others?) wants. The idea is supposed to be that our moral intuition here would be that it's okay or morally good or morally permissible to pick the pencil up.

(Note: In your response to Caplan, you say you're not trying to develop principles of right action, only morally good outcomes. But this just seems inconsistent with what you say in the post. You frame all this in terms of what it would be "fine" or okay for the agent to do. But even if you reformulate your claims in terms of outcomes, your position here is supposed to be about moral theory in general. And assuming moral theory primarily involves outcomes is to assume, question-beggingly, some sort of consequentialism. Your points could be taken as providing some indirect argument for a sort of consequentialism, but it shouldn't assume it.)

Back to the case: But intuitions about this pencil case (and the others you initially discuss in support of the minimal principle) don't at all seem to be moral intuitions. This is an issue of prudence (or something similar). So I don't see how our judgment about such a case should provide any support for a moral principle. Perhaps it supports some sort of normative principle about what we have reason to do or what we ought to do, but the reasons here don't at all seem to deal with morality. To construct and evaluate moral principles based on moral intuitions, we'd need to look at simple moral cases, e.g. kicking babies for fun. And there I suspect the best explanation of the moral judgment will not be a principle involving getting what you (and others?) want.

(Note: I put "and others?" in there because it's unclear to me whether you really take your minimal principle to be egoistic [doing what will satisfy the agent's wants] or preference-utilitarian [doing what will satisfy the wants of most people]. You say you're going for the latter at one point in the post, but the initial principle you develop by looking at the simple cases seems to be the former. And it's a big jump to go from the one to the other.)


>mjgeddes, utility is an internal state which we infer from external behavior, because that’s usually all we have to go on

Economists often sound alarmingly behaviorist to me. Personal aesthetic sensibilities are not necessarily reducible to quantifiable 'goods and services'.

Trying to extract simple generalizations may simply be the wrong approach - the fact that moral intuitions seem sensitive to the framing of specific situations may not indicate 'noise' as such, but rather be inherent in true morality itself, if in fact morality is genuinely highly sensitive to the specific context of each given situation (i.e. complex and case-based, where no simple generalizations are possible).

>I find mjgeddes’ self-aggrandizing hand-waving on analogy trumping Bayes to be annoying

But analogy may well trump Bayes. Probability calculations rely on implicit universal generalizations, but a universal generalization intended to avoid counterexamples must specify the exclusion of all such possible cases, not just the relevant features in the actual case. This may not be possible for moral reasoning, which seems highly sensitive to the specific context of each case (see above). Thus analogical arguments can't be reduced to inductive ones, and analogy beats Bayes.


Norman, "error" is not by definition "white", i.e., independent. With correlated errors one should be all the more shy about assuming that patterns in noisy data correspond to real patterns behind the data.
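A minimal simulation sketch of this point (my illustration, not from the thread), assuming each intuition's error mixes a shared correlated component (weight rho) with independent white noise: the shared part never averages away, so pooling more intuitions stops helping past a floor.

```python
import numpy as np

rng = np.random.default_rng(4)
n_intuitions, n_trials, rho = 25, 10_000, 0.5  # rho: share of correlated error

shared = rng.normal(size=(n_trials, 1))            # common error component
idio = rng.normal(size=(n_trials, n_intuitions))   # independent white noise
errors = np.sqrt(rho) * shared + np.sqrt(1 - rho) * idio

mse_single = np.mean(errors[:, 0] ** 2)        # trust one intuition
mse_avg = np.mean(errors.mean(axis=1) ** 2)    # average all 25
print(f"single intuition MSE: {mse_single:.3f}")  # ~1.00
print(f"averaged MSE:         {mse_avg:.3f}")     # ~rho + (1-rho)/25 = 0.52
```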


"Norman, bias is error; unreliability is from the expectation that there is error."

Bias is error, but not 'noise' in the sense of white noise, regardless of how large the standard deviation is. Dealing with unspecified movements in the standard deviation of errors and dealing with bias in the estimators are fundamentally different problems.

It seems that your argument is equivalent to an empirical economic study which says "We are aware that our independent variables will be correlated with the error term, generating biased and inconsistent OLS estimates. However, since we are unsure of the source or form of this correlation and thus bias, we choose to use OLS estimates anyway." If you were refereeing such a paper for a distinguished journal, would you really give the authors a pass?
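To see the referee analogy concretely, here is a minimal sketch (illustrative numbers only) of how a regressor correlated with the error term biases the OLS slope no matter how large the sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_slope = 1.0

u = rng.normal(size=n)       # unobserved shock driving both x and the error
x = rng.normal(size=n) + u   # regressor contaminated by the shock
e = rng.normal(size=n) + u   # error term, now correlated with x
y = true_slope * x + e

# OLS slope = cov(x, y) / var(x); biased because cov(x, e) != 0
ols_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"true slope: {true_slope}, OLS estimate: {ols_slope:.3f}")  # ~1.5
```

More data only makes the estimate converge more tightly to the wrong value; the bias is structural, which is the referee's complaint.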


TGGP, a moral skeptic can see moral talk as code for one common component of what people want. I'm not looking at one intuition - I'm looking at bazillions of case-specific intuitions and inferring one simple pattern. In the limit of high noise, curve fitting is not done best by collecting a large "crowd" of patterns you might think you see if you squint your eyes and mind in various ways and averaging them together (see the sketch after this comment).

mjgeddes, utility is an internal state which we infer from external behavior, because that's usually all we have to go on.

unnamed, I don't see the relevance of a direction of causation.
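A toy illustration of the curve-fitting claim above, assuming the true pattern is a simple line buried in heavy noise: the flexible curve that matches the data closely recovers the pattern worse than the simple fit.

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: 2 * x + 1                  # the simple underlying pattern
x_train = np.linspace(0, 1, 20)
x_test = np.linspace(0, 1, 200)
y_train = true_f(x_train) + rng.normal(scale=3.0, size=x_train.size)  # heavy noise

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coefs, x_test) - true_f(x_test)) ** 2)
    print(f"degree {degree} fit: recovery MSE = {mse:.2f}")
# The degree-9 curve tracks the noisy data far more closely in-sample,
# but the degree-1 fit comes much nearer the real pattern.
```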


Robin, you see a correlation between how strongly a person wants an outcome and the intuitive goodness of that outcome. In order for this to be evidence for a moral rule similar to preference utilitarianism, it seems like this relationship must have the causal direction: an outcome is better because a person wants it more. But the opposite causal direction also seems plausible: a person wants an outcome more because it is a better outcome. In other words, you say that the pattern is "it is usually good for people to do things to get what they want," but the pattern could actually be "people usually want things that it is good to get." For example, scratched itches are better than unscratched itches, so an itchy person wants to scratch.


I find mjgeddes' self-aggrandizing hand-waving on analogy trumping Bayes to be annoying, but I think he's actually correct, descriptively speaking, about how people deal with morality & aesthetics. As I mentioned in your prior post, I'm a full-blown moral skeptic and I've doubted the existence of aesthetic truth for even longer. I distrust analogies as evidence and the fact that nobody has any reliable evidence in those fields is just another indication to me that no evidence is to be had.

I'd also like to repeat my question from before: Assuming that there is such a thing as moral error and intuitions that give evidence about “true morality”, it doesn't necessarily seem such a good idea to rely exclusively on one. Analogize our differing intuitions within our heads to different individuals: more precisely, experts as depicted by Tetlock. These experts are unreliable but the best we have. We think all of them are prone to error and overconfident in themselves. Wouldn't trying to pick “the best” expert and listening exclusively to him/her be a mistake? How can we trust our own ability to determine which expert is best? Shouldn't “the wisdom of crowds” help with the random errors that come from listening to only a single expert? If I recall correctly, “phone a friend” gives worse results than “ask the audience” on Who Wants to Be a Millionaire.
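A quick simulation of the expert analogy, assuming each expert reports the truth plus independent noise of equal quality; averaging the crowd beats betting on any single expert, however chosen:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 0.0
n_experts, n_trials = 25, 10_000

# each expert's report = truth + independent noise
reports = truth + rng.normal(size=(n_trials, n_experts))

mse_one = np.mean(reports[:, 0] ** 2)           # listen to a single expert
mse_crowd = np.mean(reports.mean(axis=1) ** 2)  # average the whole crowd
print(f"single expert MSE: {mse_one:.3f}")      # ~1.0
print(f"crowd average MSE: {mse_crowd:.3f}")    # ~1/25 = 0.04
```

With independent errors the crowd's mean squared error falls as 1/n; the correlated-errors sketch earlier in the thread shows how shared biases cap that gain.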


Your attempt to justify 'tuning out' everything other than efficiency is merely a poor trick to try to get the conclusion you want. No doubt you are enamored of this (very Yudkowskyian) viewpoint, that the whole basis of morality is giving people what they want, because it fits your economist's perspective and ties in neatly with Bayes and decision theory.

But let's look more closely at both (a) economic efficiency and (b) Bayes.

(a) Utilitarianism is limited because it is only looking at functional (external) behavior. In terms of economic efficiency, there's no difference between a non-sentient robot performing a service and (for example) a conscious human performing the same service; in purely economic terms the value of the service is the same. This should indicate the limitations of such a viewpoint.

(b) If morality and intelligence are different, and Bayes deals only with intelligence, what on Earth makes you think it can deal with morality as well? Bayes itself can only deal with external decisions (decisions about behavioral courses of action), not decisions about internal thoughts.

As you yourself point out, moral reasoning seems to be much more case-based, and what type of reasoning is perfectly suited to this? Why... analogy formation, of course!

The limitations of Bayes are quickly exposed in moral reasoning... general moral conclusions need to specify all relevant features of a situation in every possible case. But if you look at how humans actually deal with morality, it's all storytelling (narrative), metaphor and analogy formation (which are all tied to specific contexts), because only analogy formation can handle the case-based reasoning required.

I maintain that aesthetic sensibilities are the true basis of morality/values, not giving people what they want. Giving people what they want is merely a special case of aesthetic sensibilities. Aesthetic sensibilities are closely tied with analogy formation, since (as I mentioned) humans use storytelling/narrative (metaphor/analogy) extensively in moral reasoning. You yourself have agreed that stories are used to indicate who to blame/praise.


Richard, I added to the post, hoping to clarify.

Norman, bias is error; unreliability is from the expectation that there is error.

Kevin, you can't reduce your intuition error just by endorsing some grand viewpoint.

Sarang, if you know another comparably simple pattern, do tell.


Hi Robin, you seem to make two non-sequiturs here:

(1) "usually it is fine to do what you want... [i.e.] it is usually good for people to do things to get what they want."

Note that something could be permissible ("fine") without it being good. One test is to ask whether the contrary option would have been "not fine". But that may be too strict. You may simply want to start from the premise that it's better for people to get what they want, even though it's also "fine" for them to refrain from doing so.

(2) "This basically picks something close to preference utilitarianism"

Careful. We started off with cases of agent-relative norms: S should do whatever S herself wants. How, exactly, do you propose to move from this to the agent-neutral norm that one ought to promote everyone's preferences? Certainly some argument is required; for on the face of it, the logic of your argument (at least as stated) should lead you to egoism, not utilitarianism.


1. Not sure I see the point of fitting in the limit of huge noise. The point of fitting data to a curve is to predict and extrapolate; the larger the noise, the lousier the extrapolation; in the limit of very large noise any model, simple or otherwise, has very low predictive power. (A sketch of this appears after point 4 below.)

2. Your argument seems to assume that universal features are a more important feature of morality than particular features. If one thinks of ethics as more like aesthetics than like science, this doesn't work. All Victorian novels were printed on paper; this is, however, the least interesting thing about them.

3. It seems at least arguable that moral systems are "good" or "bad" depending on the extent to which they yield a rich and integrated picture of life, and that the correct objects to look at are the links and interactions between precepts, rather than the precepts themselves.

4. Do you have any data to indicate that _the_ simplest and most consistent pattern is what you identify, or is that entirely speculative?
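On point 1, a short sketch, assuming a fixed linear model fit to increasingly noisy data: mean extrapolation error grows roughly with the square of the noise level.

```python
import numpy as np

rng = np.random.default_rng(3)
true_f = lambda x: 2 * x + 1
x_train = np.linspace(0, 1, 30)
x_extrap = np.linspace(1, 2, 30)   # region we extrapolate into

for sigma in (0.1, 1.0, 10.0):
    errs = []
    for _ in range(200):           # average over noise realizations
        y = true_f(x_train) + rng.normal(scale=sigma, size=x_train.size)
        slope, intercept = np.polyfit(x_train, y, 1)
        errs.append(np.mean((slope * x_extrap + intercept - true_f(x_extrap)) ** 2))
    print(f"sigma = {sigma:>4}: mean extrapolation MSE = {np.mean(errs):.4f}")
# The larger the noise, the lousier the extrapolation, as point 1 says.
```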
