117 Comments

Another example:

If you could either be productive or rent-seeking, and rent-seeking would be slightly better for you all factors considered, be productive instead.

(This assumes that GDP will be used for what you think is good rather than evil; otherwise just flip the argument)


>If you have market power when you sell your labor, lower your wage a bit.

What's the mechanism you have in mind for how this helps (i.e., why doesn't this just cause a simple transfer from you to your employer)? Is it something like the following:

Suppose you have two potential employers. You know B is willing to pay $10, and A is willing to pay anywhere between $10 and $20 with uniform probability. A wants to know your salary requirement. The more you ask for, the lower the chance of A hiring you but the more money you get if A does hire you. The selfishly optimal choice is to ask for $15 since that maximizes expected utility, assuming utility linear in money, but the socially optimal choice is to ask for $10 since that minimizes the probability of a wasteful outcome where you end up working for B.
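The arithmetic in this example can be checked numerically. A minimal sketch, assuming exactly the setup above (B pays a fixed $10, A's maximum willingness to pay is uniform on [$10, $20], utility linear in money):

```python
# B pays a fixed $10; A's maximum willingness to pay is uniform on
# [$10, $20]. If you ask for s in [10, 20], A hires you with
# probability (20 - s) / 10; otherwise you fall back on B's $10.

def expected_pay(ask: float) -> float:
    """Expected wage (linear utility) for a salary ask in [10, 20]."""
    p_hired_by_a = (20.0 - ask) / 10.0
    return ask * p_hired_by_a + 10.0 * (1.0 - p_hired_by_a)

# Grid-search the selfishly optimal ask in $0.01 steps.
asks = [10.0 + i / 100 for i in range(1001)]
best_ask = max(asks, key=expected_pay)

print(best_ask)                  # selfish optimum: 15.0
print(expected_pay(best_ask))    # expected pay at the optimum: 12.5
print(expected_pay(10.0))        # socially optimal ask guarantees 10.0
```

The $15 optimum comes from maximizing s(20 - s)/10 + 10(s - 10)/10, whose derivative vanishes at s = 15; asking $10 forgoes the expected $2.50 but makes the wasteful outcome (working for B) impossible.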

Assuming this is what you have in mind, how often do people actually end up not working at their most productive job due to asking for too high a salary, and how much waste does that cause on average when it does happen? If we multiply these two we should find the maximum possible effect of this "charity" for an average person, correct?


VV:

Well, I somewhat object to the description of it as 'signalling'. It's just a trade-off that agents with different beliefs would resolve differently, without explicitly considering it as signalling*. For example, suppose you claim there is a $1 million diamond in a locked box. Now you can't sell this box for less than a million minus a reasonable box-breaking fee, and if you do, it is pretty clear the box is empty. Refusal to sell the box for $1000 is only "signalling" from the perspective of those who know the box is empty.

*Perhaps with the exception of cheaters, who would resolve the trade-offs in ways indicative of cheating, but need to make their own signals that deceive almost nobody. I.e., the person with a (lack of a) diamond in a box may be willing to sell it for $1000, but would resort to all sorts of talk and looks to deceive someone.


 @google-8a859b151b507f070cefe46a035c0a99:disqus Yes, signalling has to be costly in order to be effective.


 srdiamond: yes, precisely.

Sidenote: I wonder if social conventions exist to force agents to reveal their true beliefs by forcing them to make trade-off decisions of this kind.


> For instance, when a person with not yet accomplished goals makes a dating profile, they do it very differently, especially if they are a public figure.

dmytryl's somewhat obscure comment refers, I believe, to: http://lesswrong.com/lw/fh2...


Stephen Diamond: Well, Apple could probably spend money much better on larger pay for factory workers than for the programmer...


dmytryl:

> By the 'social curve' you mean the final sum of two curves or what?

The one Hanson labeled "Social." It does "include" the personal curve, in that you are part of society.

> your utility curve with regards to the money you demand from Apple would probably be rising with the slope not diminishing to zero, until the cut-off point where you can't get the job, where you have a step where it goes down a lot.

Then the "altruistic" thing for Apple to do is to shade the programmer's salary in the more generous direction. The programmer lacks the market power to shade his salary upward, and there's certainly no argument that in every case (or even in most cases) the employee is the one who should sacrifice.

But in these situations, someone usually has market power. And his (corporations being people and all :)) curve--the one with market power, the one that counts in implementing the theorem--will be smooth in the usual case over the relevant domain--or so it seems to me.


But from another perspective, there's a market for charitable contributions. If we follow Robin, the market is status for money; in this market, it's easy to diagnose the market failure. The market disfavors charities that are less respectable. So, if you're determined to contribute to charity, you should (following Robin's theorem) shade in the direction of disreputable charities that you have determined don't deserve their disrepute (from the standpoint of societal interest). You should sacrifice a little status at the margin to correct the market failure resulting from signaling by contributing to the more prestigious charities.

This market doesn't help decide whether to contribute to charity, as it's a market for contributions rather than benefits. But it should dictate shading toward disreputability for someone who is genuinely altruistic and already contributes.


> That's correct, but still, if your job isn't focused on a high altruistic utility payoff task, an efficient charity could in principle make better use of your marginal resources.

We have a dearth of examples in this discussion, but Robin provided one good one. If you build an extra floor on your apartment building, you correct for a market failure that fails to reward agglomeration in proportion to its social value. The extent of the market failure is what determines the benefit from a given sacrifice at the margin, not the focus of the business. That is to say, it doesn't appear to me that the altruistic focus of the business predicts the value of shading away from self-interest.

But if I'm right that degree of market failure is what determines the room for altruistic benefit by means of shading away from self-interest, then you might make the point that contributing to a charity is still more efficient this way: devote the same marginal resources to correcting a market failure more severe than the one you would correct by shading your personal interest.

Does this alternative strategy disprove Robin's claim that "by far the most cost-effective way to help the world is to shade your selfish choices just a little in the direction of making the world a better place"? What about the alternative strategy of aggregating your marginal resources? One practical point is that this isn't easy to execute: it may be unlikely, or inefficient, that you would take the time to contribute correspondingly small, painless amounts to efficient causes. And if you contribute them all at once, it's no longer painless. But I'm not sure this is the kind of "inefficiency" Robin intends. We're focused more on the basic principles than on the implementation.

But one reason I see to maintain that Robin's method is superior to "looking for charities that correct even worse market failures" is that we seem to find it easy, with some economic theory and a little information, to decide where there's a market failure when there's an actual market that fails. (Or, in my opinion but probably not in Robin's, where you have an overall economic plan.) "Efficient charities" are demonstrably efficient mainly in the sense of having low administrative costs. How they contribute to the general state of "utility" is really unknown. Robin's theory has the advantage of being implementable, whereas the consequences of a charity really involve complex, uncontrolled interaction effects unmeasured by any market or central plan.


Well, working for such a charity, he would have many ways to convert small private gains into huge public gains and future gains of his own. For instance, when a person with not yet accomplished goals makes a dating profile, they do it very differently, especially if they are a public figure.

edit: the threading is failing again. This was a reply to VV's comment http://www.overcomingbias.c...


I think Yudkowsky is trying to say that by using the marginal strategy suggested by Hanson, you can only get a small payoff in terms of altruistic utility, while by donating to an efficient charity you could achieve a larger benefit through professional specialization and economies of scale applied to a task with a large expected payoff.

Obviously, he has a personal interest in arguing this, since he works for a charity that claims to provide a huge payoff (although it is actually a small-scale operation with questionable specialization), but the general argument seems to be correct.

Your point is that professional specialization, economies of scale and trade efficiency also apply to your job, and probably more than to any charity, since your industry is probably bigger and more optimized at doing what it does than any charity. That's correct, but still, if your job isn't focused on a high altruistic utility payoff task, an efficient charity could in principle make better use of your marginal resources.

Of course, there are large uncertainties involved: finding a charity with an actually high expected altruistic utility payoff might not be worth the effort. I suppose that if you wish to donate efficiently you'd better go for a charity that provides services you can use and hence assess directly (e.g. the Wikimedia Foundation) rather than for something that promises to help people far removed from you in space, time or social condition.


> The first sentence is equivalent, is it not, to saying that the social curve slopes the other way. Which means Apple production harms at the margin! (Increasing your pay would be a social good.)

By the 'social curve' you mean the final sum of two curves or what?

> Hanson has a Masters in Physics from Chicago, so he probably knows some math.

Well, a physicist would be very prone to thinking in terms of smooth curves when the curves need not be smooth at all. E.g. your utility curve with regard to the money you demand from Apple would probably be rising, with the slope not diminishing to zero, until the cut-off point where you can't get the job, at which point there is a step where it goes down a lot.
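The broken-curve point can be sketched concretely. A minimal toy model, assuming an illustrative known cutoff of $15 and a $10 fallback offer (numbers chosen here, not from the comment):

```python
# If the employer's maximum willingness to pay is known exactly, the
# payoff curve in your ask is a broken line, not a smooth hill: it
# rises with slope 1 up to the cutoff, then steps down to the fallback.

CUTOFF = 15.0    # illustrative: employer's known maximum
FALLBACK = 10.0  # illustrative: next-best offer

def pay(ask: float) -> float:
    """Wage received as a function of the salary you demand."""
    return ask if ask <= CUTOFF else FALLBACK

# There is no flat top at the optimum: just below the cutoff, shading
# your ask down by eps costs a full eps (first-order), unlike at a
# smooth maximum where a small deviation costs essentially nothing.
eps = 0.01
assert abs((pay(CUTOFF) - pay(CUTOFF - eps)) - eps) < 1e-9
assert pay(CUTOFF + eps) == FALLBACK  # overshooting drops you a step
```

This is why the marginal-charity argument, which relies on the first derivative of the personal curve vanishing at the optimum, needs the curve to be smooth near that optimum.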

> But too much math (in the sense of a determined near-mode focus) can be a dangerous thing, too. I think you're too focused on the "math."

I think you're too focused on the near/far thing, which is hardly a predictive theory in any case, i.e. you use it retroactively but it has no predictive value. What I see is people over-exposed to smooth curves having generally non-informative 'mathematically inspired' intuitions in a world of broken curves. It's harmful to only know of a hammer, because then everything looks like a nail.


But it's really the reverse of Yudkowsky's claim. Among foragers there are fewer externalities. Externalities largely take the form of increases in trade, specialization, and scale economies. These are precisely what supply the room to implement your theorem in practical decisions. They are multipliers rather than divisors.


The clear math is that you can give far more effectively to your own customers where there's a market failure favoring you. You might have to cut it very close over many occasions. In donation, there's no market involved in the transaction, hence no market failure to exploit.

The premise so far is that the utiles of the needy are just as important as the utiles of the well-off--while still allowing for diminishing utility with decreased neediness. (Keep your deviations from self-interest small enough, and you predominate over any diminished utility.)

But let's say you want to benefit the needy preferentially (which is contrary to utilitarianism, the dominant local ethical signal). Unless you are totally unconcerned with everyone else's utility, you can still use the diagram. If the utiles of the well-to-do exchange for the utiles of the needy at a 10-to-1 ratio, this ratio can be overcome by keeping the deviations from your self-interest small enough.
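The arithmetic here rests on orders of magnitude: at your selfish optimum, the personal cost of a small deviation is second-order (roughly quadratic), while the social benefit is first-order (roughly linear), so any fixed exchange ratio is eventually beaten. A toy sketch, where the quadratic personal curve and linear social benefit are illustrative assumptions, not from the post:

```python
# Toy model: personal utility peaks at deviation 0 and falls off
# quadratically, while the benefit to the needy grows linearly but is
# discounted 10:1 (one needy utile counts as a tenth of your own).

def personal_cost(eps: float) -> float:
    return eps ** 2          # second-order near the selfish optimum

def discounted_benefit(eps: float) -> float:
    return 0.1 * eps         # first-order, discounted 10-to-1

# For a large deviation the 10:1 discount dominates...
assert discounted_benefit(1.0) < personal_cost(1.0)

# ...but shrink the deviation and the benefit/cost ratio blows up.
for eps in (0.05, 0.01, 0.001):
    assert discounted_benefit(eps) > personal_cost(eps)
    print(eps, discounted_benefit(eps) / personal_cost(eps))
```

With these assumed curves the ratio of (discounted) benefit to cost is 0.1/eps, so it exceeds any fixed hurdle once the deviation eps is small enough.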

But if you only care about the needy--perhaps a common position, but not locally--then you can still use the diagram by drawing the social curve based on the needy alone.

> except that it's what some folk might want to believe

To the contrary, the ones who want to believe (in their own virtue) refuse to understand a demonstrable theorem, despite their sophistication.


I read you as suggesting 1) that there is only a small integral of the help you can give via this method before the cost to help rises to noticeable levels, and 2) that the net value of this help is tiny compared to the gains we all get from trade, specialization, and scale economies. Point #2 seems overwhelmingly obvious. But on #1, the net help we actually give others, beyond agreeing to participate with them in an economy with specialization etc., is often pretty small, so it isn't clear that this marginal help would be small relative to that typical help size.
