86 Comments

I agree with your larger point, but your discussion of the Christian church misses a lot of what goes on and forces the example to fit your point. I wish you would pick a different example, or at least pick one *part* of Christianity instead of trying to summarize the whole thing and make a talking point out of it.

You say that services (I assume you mean church services) are thin on evaluating how "good" behavior would help the world, but I would say that evaluation is the whole point of the discussion and meditation that goes on. If you are doing things that don't help anyone, then you're not really doing good at all, and that's part of why there is so much discussion and meditation on what it means to be a good person.

You overlook that a lot of the people being helped are right there in that room. In true Hayekian style, much of what a church does--for good or for ill--is to help out the actual membership of the church. That includes children, the injured and unemployed, the elderly, and all manner of people who are not particularly well off. The people are right there and are grateful for the support; when they aren't grateful, the activity stops happening as much.

It's also true that the meetings promote impressive people, but isn't that only rational? The people in the spotlight at a church service are those who have devoted large portions of their lives to getting into that position.

Finally, part of how you make people better is to showcase them a little when they do well. It can be overdone, but a little bit of pride can be a good thing when it steers people toward being better.


The foundation of our major social coordinations is self-interest weakly saturated with altruism. Hypocrisy didn't evolve to further societal interests; it evolved out of the striving of individuals to avoid social responsibilities. (That doesn't necessarily mean it can't be turned to different uses, but it makes it less likely.)

Politics is less hypocritical than charity when the central element of self-interest (extended to the interests of others similarly situated and to allies) is frankly admitted. The self-interest of the voter (which isn't pure egoism because it includes the interest of some disparate others) is potentially transparent in political practice, whereas charity depends almost completely on hypocrisy.

I think the best example of rationalist fantasy applied to politics is Political Correctness: the idea that if the state encourages hypocritical speech on race, the end result will be an adjustment of attitudes, or at least a less racist society. I don't think the result has been happy, although it has worked to some degree on its own terms.


> But I think it's a rationalist fantasy to design major social coordinations on a hypocritical foundation.

What do you think the foundations of all our major social coordination are? Surely politics, when not self-interested, is just as hypocritical as charity.


I think I have slightly higher expectations of altruism than you, but our main difference is that I have far lower expectations of hypocrisy. Hypocritical motives are fine for trivial matters (say, table manners). But I think it's a rationalist fantasy to design major social coordinations on a hypocritical foundation. Effective social coordination is too hard for that approach. Not only does it undermine the fundamental ethical imperative of honesty (by extinguishing the stigma attached to hypocrisy), but it is also ineffectual and misleading. [Which is to say, for example, that you can't rely on a politician like Hillary Clinton to remake herself along new ideological lines and then be compelled by the force of commitment to execute policy accordingly.]

I think it clear that consistency between abstract and concrete thinking strengthens commitments. But their alignment by hypocritical means undermines them. The difference between the desire to think oneself good and the desire to please others is far from trivial. [It's one of the (avoidable) dangers of Robin's approach that the difference might be minimized.]

Charity is inherently one of the most hypocritical institutions; I wouldn't choose it as a vehicle for societal improvement.


+1 for consistency, then!

But I think the analogy is broken by the relatively high availability of self-actualizing work. If it were firmly established that basically no human can work for more than a few months on intrinsic motivation, I'd focus on ways to improve external motivation.

Is our disagreement that you think intrinsically-motivated altruism is attainable enough to be worth striving for? I guess my thought is that under enough scrutiny whatever motivation you thought was intrinsically altruistic will *at best* turn out to be a desire to think of oneself as good.

Actually, the analogy to work suggests to me that alignment of selfish and idealistic motives might have a positive role in preserving the latter. If you wanted to strengthen your work ethic, would you rather start a small business or join a commune?


Actually, I have worried about just that (in a Comment somewhere). But let me emphasize that I don't question that humans have to compromise. I don't propose to deny folks their crutches, or to promote self-denial.

The analogy to EA would be to promote Beeminder as the ideal way to work (or even as remotely close to it). At the least, needing it shows one's labor is very alienated. If I had to use such a device, it would be with full awareness of its self-damaging consequences.


Does the use of a commitment aid like Beeminder count as 'fooling oneself'? This is another way of creating near-mode incentives that align with a far-mode goal. Do you worry that Beeminder users will lose the ability to genuinely care about long-term goals as they embrace short-term loss aversion as primary motivation?


If you adopt prediction markets, you will get rational estimates, even if you train no one in rational behavior.


> Similarly for rationality, there is far more interest in how to spot rational folks, and in rationality training, than in institutions to promote rationality.

Isn't rationality training about creating ways to promote rationality?

If you don't have working rationality training and you "promote rationality," the result is likely that you promote practices that aren't really about rationality.


> But the idea we're discussing is a trick, motivated by far-mode altruism, to mostly give near-mode the right selfish incentives.

Let me first say that I disagree that "you become an EA because in far-mode you want to do good." Folks become EAs for near-mode reasons too (which EA also encourages). But I take it this would still be to misunderstand you. I take the blockquoted comment as the essence of your point. It's the point of contention: I'm against fooling oneself (to the meager degree it can be avoided). Let me try a concrete tack, at the risk of overpersonalizing. What does one do when one discovers that one's motives for charitable endeavor are insincere? My answer: one should view it as a character defect. The EA answer: one should embrace it. The different answers don't have clearly different immediate implications. [I don't say (as does the Neoreactionary commenter) 'adopt a self-regarding ethic.' Motivation isn't unequivocal.]

My contention is that the attitude a movement takes toward such discoveries will have long-term effects on its participants' ethics. [Readers more familiar than I with EAs can evaluate empirically.] I'd predict an EA culture increasingly driven by status-seeking.


It's curious to consider ideologies that were clearly found wrong--Nazism, socialism, racism--and note that some were abandoned and some weren't. The key is that some could be seen as merely flawed tactics within a correct strategy, whereas others were ends in themselves. Also, it's probably true that racists and Nazis over a certain age didn't change their minds so much as die off, probably because there wasn't as much for them to gain from changing their minds.

I doubt the big differences today between progressives/egalitarians and classical liberals/conservatives, when they are found to be in error, will be treated as merely errant minor policies, not relevant to the big picture. For example, it's often said that scientists change their minds when confronted with data, unlike religious people. True, but only about minor things. I bet conversions between religions occur about as frequently as scientists changing their beliefs about 'big picture' ideas in their field.


"Christianity ... talk a lot about what is good vs bad behavior, but are pretty thin on how more good behavior will help the world. "

This is true as far as it goes, but it misses a key point. Christianity (at least in its mainstream varieties) is not primarily concerned with "making the world better" per se; it is concerned with making *people* better, and social uplift is mostly a secondary function/outgrowth. Consider the well-known "turn the other cheek / give to all who ask" passage: Christ does not advise this with the goal of making face-slapping less prevalent, nor in the interest of making face-slappers less violent. Rather, the goal is making the face-slapped person more forgiving/godly. Anything else is gravy.

Or as a theology professor responded to my class when it was pointed out that "turn the other cheek" likely wouldn't "work" if it was used in response to, say, a Nazi invasion: "Christ commanded you to forgive. He did not command you to survive."

That may go for the "ecological spiritual consciousness" folks, too.


I think you're too generous to Hanson when you say "no one is talking about that." With a million examples to choose from, he chose a narrow-minded broadside against churchgoers. (You show better breadth of mind, thanks!) But enough of that.

Two of those million:

Trying to Do Good: There's a rumor that a large foundation abandoned its advocacy of creating small-sized schools when it was shown that the reason the best test scores all came from small schools was merely that the average score of a small, randomly chosen set of students varies more than that of a large set. A couple billion dollars was spent before anyone noticed this. Oopsy! (Note the absence of proof of failure! The rational thing is to stop once the supporting theory is undermined. Rational, but not easy.)
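For readers who want to see that statistical point concretely, here is a minimal simulation sketch; the school sizes, score distribution, and names are invented purely for illustration. If every student draws a score from the same distribution, small schools still dominate the top of the rankings, simply because the average of a small sample is noisier than the average of a large one.

```python
import random

random.seed(0)

def school_average(n_students):
    # Every student's score comes from the same distribution (mean 500, sd 100),
    # so no school is "really" better than any other.
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

# Hypothetical school sizes: 500 small schools of 30 students, 500 large schools of 1000.
schools = ([("small", school_average(30)) for _ in range(500)] +
           [("large", school_average(1000)) for _ in range(500)])

top_50 = sorted(schools, key=lambda s: s[1], reverse=True)[:50]
print(sum(1 for kind, _ in top_50 if kind == "small"), "of the top 50 schools are small")
# Nearly all of the top 50 are small schools, purely from sampling noise.
```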

Just Being Righteous: There is a rumor that an environmental organization advocates against the use of DDT against mosquitoes in Africa. The basis for this is that DDT is nasty and toxic, will kill thousands, renders the area a toxic wasteland, and makes the capitalists money. Some say this position leaves tens of thousands of preventable deaths from malaria, offers no alternative approach likely to actually happen within a generation, and wildly misrepresents the actual risk of DDT. But we're not going to listen to chemical industry shills, we have to draw a bright line, zero tolerance, etc.

(If you disagree with the particulars, that's fine; I'm no expert. I only claim high plausibility, not high probability.)


Sorry, what I wrote was syntactically unclear. I didn't mean that near-mode is an optimal state for doing what's best for the world, but rather that doing what's best for the world while in near-mode is a goal we should pursue (despite its very high difficulty). But the idea we're discussing is a trick, motivated by far-mode altruism, to mostly give near-mode the right selfish incentives.

The way I read it, Effective Altruism is sincere in far-mode. You become an Effective Altruist because in far-mode you want to do good.

I'm not sure what your sincerity program would look like. If we start rejecting goals of creating appearances, we'll probably be left with naked egoism, and we might even lose a lot of prudence as well (do I actually care about my distant-future finances, or do I just want to appear responsible?).


I may walk in a forest and be struck with a motivational thought from a bird's song that leads me to think of the missing piece of a software system I was building that is ultimately used to prevent damage to future forests, or to perform robotic surgery better, or something. So the cost-benefit of many seemingly boring actions is quite hard to calculate.

Maybe someone else sings a hymn in church on Sunday and that's what helps her to write some software, or paint a picture, or produce a movie, or whatever, that has a chain reaction of positive effects.

These kinds of calculations are pretty hard to model on an individual-by-individual basis. But nobody's talking about that here. Only group-level first order approximations of major tenets.

The question is: if a group that professes to be focused on doing-good through defining what being-good means is presented with evidence that its being-good-ness is not correlating with doing-good-ness, will the group change its definition of being-good in response?

If a group defines doing-good in terms of being-good, and the properties it uses to define being-good (like forgiveness) actually end up correlating with doing-good, then that group's definition of being-good is a good approximation of doing-good, and the group doesn't have much work to do to modify itself to keep up with its self-proclaimed goal of doing good. So it wouldn't be a group covered by the thesis of this post (and so Christianity *might not* be such a group, at least not due to its advocacy of forgiveness... or maybe it *might* be such a group because of its advocacy of other things...).


But I think the point is that if we start from a position like that and observe that some aspects of being good don't actually produce outcomes we would define as doing good, then, if our real aim is to do good rather than to be good, we would update our beliefs about which actions and behaviors to take and try again, improving along the way.

But in most systems that equate their brand of being-good with the externality of doing-good, when they are confronted with evidence that their being-good has not yielded measurable doing-good (or has even caused harm), the response is not to change (as a pursuer of doing-good would) but instead to rally as a community and rationalize why their prized definition of being-good still equals doing-good, and why other, not-that-group's-fault reasons are to blame for the lack of measurable doing-good.
