Monthly Archives: March 2012

Just A Flesh Wound

ARTHUR: You are indeed brave, Sir knight, but the fight is mine.
BLACK KNIGHT: Oh, had enough, eh?
ARTHUR: Look, you stupid bastard, you’ve got no arms left.
BLACK KNIGHT: Yes I have.
ARTHUR: Look!
BLACK KNIGHT: Just a flesh wound. (more)

In the US the top 5% of medical spenders spend an average of $40,682 a year each, and account for 49.5% of all spending. (The bottom half spend an average of $236.) Not too surprisingly, 60.3% of these people are age 55 or older. Perhaps more surprising, on their health self-rating, 28.9% of these folks say they are “good”, 19.9% “very good” and 7.5% “excellent”, for a total of 56.3% with self-rated health of “good” or better (source).
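As a sanity check, the quoted figures imply an overall average. Here is a quick sketch in Python using only the numbers from the source:

```python
# Arithmetic implied by the quoted statistics: if the top 5% of spenders
# average $40,682 each and account for 49.5% of all spending, we can back
# out the overall average, and then the bottom half's share of spending.
top_share_of_people = 0.05
top_avg_spend = 40_682       # average annual spending of the top 5%
top_share_of_spend = 0.495   # fraction of all spending they account for
bottom_avg_spend = 236       # average annual spending of the bottom half

# Implied overall average annual spending per person:
overall_avg = top_share_of_people * top_avg_spend / top_share_of_spend
print(f"implied overall average: ${overall_avg:,.0f}")  # ≈ $4,109

# Implied share of all spending from the bottom half:
bottom_share = 0.5 * bottom_avg_spend / overall_avg
print(f"bottom half's share of spending: {bottom_share:.1%}")  # ≈ 2.9%
```

So the bottom half of spenders accounts for under 3% of all spending, which makes the concentration in the quote even starker.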

So, are these folks in serious denial, or is most of our medical spending on hardly sick folks?

Saints And Burdens

Let a person’s benefit ratio be the amount of benefit they give to others, divided by their cost to others. Then consider two classes of people:

  • Burdens – Those for whom the ratio is less than one. Such folks are a net burden on the rest of the world.
  • Saints – Those for whom the ratio is far greater than one, such as a thousand or a million. Such folks are fantastic altruists.
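The two classes above amount to a simple classification rule on the benefit ratio. A minimal sketch, with the saint threshold and the sample numbers invented for illustration:

```python
# Classify a person by their benefit ratio: benefit given to others
# divided by cost imposed on others. The threshold of 1000 for "saint"
# is one of the illustrative values from the post ("a thousand or a
# million"); the example inputs are made up.
def classify(benefit_to_others, cost_to_others, saint_threshold=1000):
    ratio = benefit_to_others / cost_to_others
    if ratio < 1:
        return "burden"   # net burden on the rest of the world
    if ratio >= saint_threshold:
        return "saint"    # fantastic altruist
    return "ordinary"     # somewhere in between

print(classify(50, 100))            # ratio 0.5 -> burden
print(classify(2_000_000, 1_000))   # ratio 2000 -> saint
```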

While these would seem to be opposite types of people, I think I see a correlation in the world: those who talk the most about trying to be saints also tend to have an unusually large chance of actually being burdens. Why this correlation?

One story is that variance is a good way to increase your chance of very good outcomes, but high variance altruism strategies tend to have more risk of both altruism extremes. So people who try hard to increase the thickness of their high tail of altruism must typically also accept a thicker low tail of being a burden.
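One way to see the variance story is a toy simulation: two altruism strategies with the same *average* benefit ratio but different variance. The lognormal distribution, the mean ratio of 10, and the tail thresholds are all invented for illustration; the point is only that fattening the high tail fattens the low tail too.

```python
import math
import random

# Toy model: benefit ratios drawn lognormally, with the mean held fixed
# so that only the variance differs between the two strategies.
random.seed(0)
N = 100_000
MEAN_RATIO = 10  # both strategies have this expected benefit ratio

def ratios(sigma):
    mu = math.log(MEAN_RATIO) - sigma**2 / 2  # keeps the mean at MEAN_RATIO
    return [random.lognormvariate(mu, sigma) for _ in range(N)]

safe, risky = ratios(0.5), ratios(3.0)

for name, rs in [("safe", safe), ("risky", risky)]:
    burden = sum(r < 1 for r in rs) / N     # low tail: net burden
    saint = sum(r > 1000 for r in rs) / N   # high tail: saint-level
    print(f"{name:5s} burden share {burden:.3f}, saint share {saint:.4f}")
```

The high-variance strategy ends up with more mass in *both* tails: a far higher chance of being a net burden, along with its higher chance of saint-level outcomes.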

A very different story is that people who feel guilty about their high risk of being a net burden compensate by talking more about wanting to be saints. They don’t have much of a chance of actually being saints, but by deluding themselves they can avoid guilt about being a burden.

What evidence would distinguish these theories?

Unpainted For Far

We usually think of very old buildings and statues as plainly colored, with just the color of the stone, but in fact they were usually painted colorfully, as the Greek pictures below show. Mayan temples and gothic churches were also wildly colored. But if so, why don’t we see the remaining buildings, statues, etc. painted like that today, so people can see what they looked like? They are often painted, but with plain stone-colored paint!

You might say it is because we can’t be sure exactly what colors were where. But we often renovate the buildings themselves extensively, and add in missing statues, even when we aren’t sure exactly what the original buildings or statues looked like.

Also consider that cities like Paris and Washington DC designed their buildings and building codes to look like ancient buildings, except without the paint. Same for many universities like U. Chicago. You might explain this as due to people believing incorrectly that the ancients didn’t have paint. But paint isn’t remotely a recent invention; why would anyone think it was?

Here’s another explanation: thinking of the distant past evokes our far mental mode, in which we tend to think of objects having fewer relevant surfaces and less texture detail. Unpainted buildings and statues appear to have fewer surfaces and less texture – they look more far. We subconsciously think that unpainted things make more sense as something associated with the distant past.

Since power also evokes far mode, architecture with fewer relevant surfaces and simpler textures can suggest power. So people have seen the unpainted ancient style as more distinguished, and places like Paris and Washington DC required such a style as a way to assert their power.

Doctors Dominate

We humans pretend to resist domination, but actually tend to submit, and are often consciously unaware of the contradiction. I recently posted on our relating this way to police. We also relate this way to doctors. For example, people are basically scared to post negative web reviews of doctors. No, they don’t consciously feel scared. They’ll talk about how busy they are, or how they don’t feel qualified to judge. Yet their usual arrogance lets them rate lots of other things they know little about. And they are scared for good reason: doctors do go out of their way to retaliate against negative reviews. Details:

The Web Is Awash in Reviews, but Not for Doctors. Here’s Why.

… It is puzzling that there is no such authoritative collection of reviews for physicians, the highest-stakes choice of service provider that most people make. Sure, various Web sites like HealthGrades and RateMDs have taken their shots, and Yelp and Angie’s List have made a go of it, too. But the listings are often sparse, with few contributors and little of substance. … Not enough people take the time to review their doctors. …

RateMDs now has reviews of more than 1,370,000 doctors in the United States and Canada. But getting in the faces of the previously untouchable professional class has inevitably led to legal threats. [The site’s founder] says he gets about one each week over negative reviews and receives subpoenas every month or two for information that can help identify reviewers, who believe they are posting anonymously. …

Several years ago, a physician reputation management service called Medical Justice developed a sort of liability vaccine. Doctors would ask patients to sign an agreement promising not to post about the doctor online; in exchange, patients would get additional privacy protections…. Medical Justice has now turned 180 degrees and embraced the review sites. It helpfully supplies its client doctors with iPads that they can give to patients as they are leaving. Patients write a review, and Medical Justice makes sure that the comments are posted on a review site. Sound coercive? …

Patients may be steering clear for a far more ordinary reason: if they live in a small town or are only one or two degrees of social separation from physicians or their family members, they may not want to create any awkwardness. … An Angie’s List customer who read my column about the service last week raised a related concern. She said she would never talk negatively about her doctors on the site because there were only two decent hospital systems where she lived and she didn’t want to end up blackballed by doctors at either. …

Others idolize their doctors … Insurance giant WellPoint, … has found that only roughly 20 percent of customers will switch to a generic drug or use a less expensive imaging center, even if there is no health risk. Why? Because their doctor told them so. It is exactly this sort of unquestioning mind-set that may cause such low participation (or disproportionately positive reviews) at many review sites. …

WellPoint tracks doctors’ communication skills, availability, office environment and trust, but it doesn’t yet provide information about medical outcomes. … It pays many physicians more when they achieve better results. But it’s not ready to share all of its outcome data. … “The unintended consequences would be if certain surgical specialists would not take on the most challenging, needy and difficult patients.” … the big health care law requires Medicare to share all sorts of such data about doctors starting Jan. 1, 2013, assuming legal challenges don’t get in the way. The A.M.A. has raised many concerns about “risk adjustments.” (more; HT Tyler)

Risk adjustment is an issue for most products, since most have variations in who uses them. Yet we let people rate other products and collect track records on experiences with them. But for docs, we allow risk adjustment as an excuse to avoid accountability. This is an old issue in health econ — the story has always been that of course giving consumers info is a good idea, but we’d have to wait to give patients info until we “solve” the risk adjustment problem, which never happens, and never will. Mark my words, we will long delay publication of doc track records.

Continuous Cooperation

In a prisoner’s dilemma, two sides have an incentive to defect, even though mutual defection is worse for both sides than mutual cooperation. It is well known that in theory and in reality people cooperate more when they expect to interact over more repetitions, and when they care more about the future.

It is hard to make people live longer, or care more about the future. It can be just as helpful, however, and often much easier, to make people interact more frequently. In the limit of continuous interaction, people should cooperate the most. My onetime co-author Ryan Oprea has a paper with Daniel Friedman in the latest AER, showing this:

We study [lab experiment] prisoners’ dilemmas played in continuous time with flow payoffs accumulated over 60 seconds. In most cases, the median rate of mutual cooperation is about 90%. Control sessions with repeated matchings over 8 subperiods achieve less than half as much cooperation, and cooperation rates approach zero in one-shot control sessions.

They introduce some new theory to explain details of this behavior:

Inspired by a strand of existing theoretical literature, we postulated a particular class of epsilon equilibria and derived formulas predicting how cooperation rates respond to adjustment lags and to payoff parameters. These predictions accounted well for the Continuous, Grid-8 and (trivially) One-Shot data. They also nicely explained a set of second-round data from Grid-n sessions, which varied the number of subperiods from 2 to 60. Thus the formulas correctly predict defection in one shot games, cooperation in continuous time and intermediate results on the path between the two. The underlying intuition is simple. When your opponent can react very quickly, defecting from mutual cooperation is likely to earn you the temptation payoff only briefly and may cost you the cooperation payoff for the rest of the period.
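The quoted intuition can be put in rough numbers. The 60-second period below matches the experiment, but the flow payoffs and reaction lags are invented for illustration:

```python
# Back-of-the-envelope version of the intuition: if you defect from mutual
# cooperation, you enjoy the temptation payoff only until your opponent
# reacts (the "lag"), then you both get the mutual-defection payoff for
# the rest of the period.
PERIOD = 60.0        # seconds of flow payoffs, as in the experiment
R, T, P = 3, 5, 1    # reward (mutual C), temptation, punishment (mutual D)

def defector_payoff(lag):
    """Total flow payoff from defecting, if the opponent reacts after `lag` s."""
    return lag * T + (PERIOD - lag) * P

def cooperator_payoff():
    """Total flow payoff from mutual cooperation over the whole period."""
    return PERIOD * R

# near-continuous reaction, a Grid-8-sized subperiod, and one-shot:
for lag in [0.5, 7.5, 60]:
    print(f"lag {lag:4}s: defect {defector_payoff(lag):6.1f} "
          f"vs cooperate {cooperator_payoff():.1f}")
```

With a half-second reaction lag, defecting is clearly worse than cooperating; only when the "lag" is the whole period (the one-shot case) does defection pay.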

So do online firms cooperate more when they can vary their prices more frequently? What rapidly-changeable actions would help nations to cooperate more?

Disagreement Experiment

A lab experiment induces common priors, tells each person of the actions of others, and yet still finds disagreement, in conflict with predictions from common knowledge of rationality:

We look at choices in round 1, when individuals should still maintain common priors, being indifferent about the true state. Nonetheless, we see that about 20% of the sample erroneously disagrees and favors one point of view. Moreover, while other errors tend to diminish as the experiment progresses, the fraction making this type of error is nearly constant. One may interpret disagreement in this case as evidence of erroneous or nonrational choices.

Next, we look at the final round where information about disagreement is made public and, under common knowledge of rationality, should be sufficient to eliminate disagreement. Here we find that individuals weigh their own information more than twice that of the five others in their group. When we look separately at those who err by disagreeing in round 1, we find that these people weigh their own information more than 10 times that of others, putting virtually no stock in public information. This indicates a different type of error, that is, a failure of some individuals to learn from each other. This error is quite large and for a nontrivial minority of the population.

Setting aside the subjects who make systematic errors, we find that individuals still put 50% more weight on their own information than they do on the information revealed through the actions of others, although this difference is not statistically significant. (more)

So in this experiment there is a bottom quintile of idiots, and everyone else seems roughly accurate in discounting the opinions of a pool of others containing such idiots. It seems, then, that the main reason people think they are better than others is that no one, not even the idiots, thinks they are an idiot. I wonder how behavior would change if everyone was shown clearly that the idiots were no longer participating.
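A simple benchmark makes the quoted weights concrete: with six equally precise, independent, unbiased signals (yours plus five others’), a rational estimator weights each signal equally, i.e. your own gets weight 1/6. The signal values below are invented for illustration:

```python
# Benchmark for the weighting result: equal weights are rational when all
# six signals are equally informative. Overweighting your own signal pulls
# the estimate toward it, as the subjects in the experiment did.
signals = [2.0, 1.4, 1.8, 2.6, 1.1, 2.3]  # own signal first (made up)

n = len(signals)
rational = sum(signals) / n  # equal weights: 1/6 each

def weighted_estimate(own_weight_multiple):
    # Weight your own signal `own_weight_multiple` times as much as
    # each of the other five signals.
    w_own, w_other = own_weight_multiple, 1.0
    total = w_own + (n - 1) * w_other
    return (w_own * signals[0] + w_other * sum(signals[1:])) / total

print(f"rational (equal weights): {rational:.3f}")
print(f"own signal weighted 2x  : {weighted_estimate(2):.3f}")   # typical subject
print(f"own signal weighted 10x : {weighted_estimate(10):.3f}")  # round-1 disagreers
```

At a 10x self-weight the estimate sits nearly on top of the own signal, which is the "putting virtually no stock in public information" pattern the study reports.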

Selling Praise

More from How To Win Friends And Influence People:

Jesse James probably regarded himself as an idealist at heart. … The fact is that all people you meet have a high regard for themselves and like to be fine and unselfish in their own estimation. J. [P.] Morgan observed … that a person usually has two reasons for doing a thing: one that sounds good and a real one. The person himself will think of the real reason. You don’t need to emphasize that. But all of us, being idealists at heart, like to think of motives that sound good. So, in order to change people, appeal to the nobler motives. …

When the late Lord Northcliffe found a newspaper using a picture of him which he didn’t want published, he wrote the editor a letter. But did he say, “Please do not publish that picture of me any more; I don’t like it”? No, he appealed to a nobler motive. He appealed to the respect and love that all of us have for motherhood. He wrote, “Please do not publish that picture of me any more. My mother doesn’t like it.”

I doubt that people have more trouble thinking of ideal vs. non-ideal reasons for doing things. So why do you persuade better by pointing to ideal reasons for something you’d like people to do? Because you implicitly offer a complement to an idealistic act: recognition. People are more eager for others to recognize idealistic acts, vs. other acts. If they follow your suggestion to do something for which you’ve offered an idealistic reason, they know you are available to tell others of their idealism. Which makes that idealism worth much more.

How To Influence People

I posted before on the how-to-win-friends part of Dale Carnegie’s classic How To Win Friends And Influence People. Today I’ll discuss influencing. Carnegie offers twelve principles, the first three of which are:

  1. The only way to get the best of an argument is to avoid it.
  2. Show respect for the other person’s opinions. Never say, “You’re wrong.”
  3. If you are wrong, admit it quickly and emphatically.

He illustrates principle 1 with a story:

During the dinner, … the [storyteller] mentioned that the quotation was from the Bible. He was wrong. I knew that, I knew it positively. … I appointed myself as an unsolicited and unwelcome committee of one to correct him. He stuck to his guns. … Frank Gammond, an old friend of mine, … had devoted years to the study of Shakespeare. So the storyteller and I agreed to submit the question to Mr. Gammond. Mr. Gammond listened, kicked me under the table, and then said: “Dale, you are wrong. The gentleman is right. It is from the Bible.” On our way home that night, I said to Mr. Gammond: “Frank, you knew that quotation was from Shakespeare.” “Yes, of course. … But we were guests at a festive occasion, my dear Dale. Why prove to a man he is wrong? Is that going to make him like you? Why not let him save his face? He didn’t ask for your opinion. He didn’t want it. Why argue with him?” … I not only had made the storyteller uncomfortable, but had put my friend in an embarrassing situation. How much better it would have been had I not become argumentative.

Carnegie also tells of how Ben Franklin learned a similar lesson:

“I made it a rule,” said Franklin, “to forbear all direct contradiction to the sentiment of others, and all positive assertion of my own.”

This is a hard lesson for me. Humans have many conversation ideals, and usually act as if they uphold such ideals. For example, you aren’t supposed to lie. And if you talk about something as if you think it important, and someone else knows a good clear reason that something important about what you said is wrong, they are supposed to tell you, and you are supposed to listen, and then change your mind. So we commonly talk as if we assume people who said something must believe it, as if people who heard a claim and didn’t object must not have known a good clear reason it was wrong, and as if people who don’t publicly change their minds when others object must not think the reason offered was good and clear.

But we are actually hypocritical about such ideals – we try to avoid visibly violating them, yet are not otherwise eager to follow them against our interests. We often object to unimportant claims by rivals, to gain status at their expense. We often pretend we don’t think reasons offered by others are good, to avoid visibly changing our mind. We often lie. And those of us who are best at arguing and lying are the most eager to uphold conversation ideals, as we can best evade detection of our ideal violations.

So how committed should we be to such ideals? How should we think of Carnegie and Franklin’s violations, refusing to tell others they are wrong, and even lying on occasion to avoid conflict? Given that they will try to admit when they are wrong, I find it hard to see much fault overall in them. Yes, their refusing to disagree on something important could fail to inform others, but I doubt they took this habit to such extremes. I expect that in such situations they disagreed indirectly, but still got their key info across.

Manzi On Trials, Consulting

Arnold Kling on Jim Manzi’s new book Uncontrolled:

Manzi is a fan of randomized controlled experiments in business and public policy (in the latter, examples include the Rand health care study and the Wisconsin income-maintenance studies). I believe that decision-makers will resist this approach, for the same reason that they resist Robin Hanson’s suggestion to use prediction markets. That is, decisions are not necessarily about achieving results. They are often about establishing the status of the decision-maker. For a decision-maker to conduct experiments or to employ prediction markets is to admit ignorance and doubt, which lowers the decision-maker’s status.

Manzi responds:

I agree that this is true, and is a big deal. In the book, I expend a fair amount of effort describing the procedures and methods that have been used to ameliorate this problem (though never eliminate it) in therapeutic medicine, many large businesses, and certain narrow areas of government policy development. I think at a more strategic level, however, this problem is best addressed by decentralizing authority and accountability. Staff businesspeople, academics, and so on have much larger incentives to use “analysis as rhetoric” in the manner that Kling refers to than do people who are responsible for achieving outcomes in a marketplace. If I am paid (or live or die) based on my programs working or not, I am much more likely to care about what really works rather than getting tangled up in what analysis will get me noticed and promoted.

The book isn’t out yet. Kling got an advance copy, but I did not. I look forward to seeing Manzi’s detailed discussion, but the above response seems to miss the point – authority and accountability won’t be decentralized if that lowers the status of central folks. Just because they should decentralize doesn’t mean they will.

Similarly, a few weeks ago Manzi responded to my post on the puzzles of why firms pay so much for often trite consulting advice, and why such advisors hire so many fresh grads of top schools. I suggested that firms are more buying prestige to bully locals into cooperation than they are buying info per se, and that recent top school grads offer the most prestige per wage dollar. Manzi disagreed.

Why Not Compromise?

Rob Wiblin:

Why is it that rather than celebrate the values of conflict resolution, tolerance and deal-making, which make our advanced societies function so effectively, our favourite stories continue to be about zero-sum conflicts that are impossible to resolve peaceably? … I suspect the answer lies in what we subconsciously want our taste in fiction to say about us. Celebrating the Na’vi allows us to signal how much we value loyalty and justice. Denigrating Melbourne Airport allows us to show our suspicion of greedy and powerful people. In real life, when defending our stated values requires that we make serious sacrifices whether or not we are likely to win, we sensibly value the opportunity to compromise. But when a fictional character will do all the fighting for you, why compromise on anything?

Katja Grace:

I think he might be roughly right. But why wouldn’t finding good deals and balancing compromises well be ideals we would want to celebrate? When there are no costs to yourself, why aren’t you itching to go all out and celebrate the most extravagant tales of successful trading and extreme sagas of mutually beneficial political compromise? I think because there is no point in demonstrating that you will compromise. … It’s often good to look like you won’t easily compromise, so that other people will try to win you over with better deals. … If you somehow convince me that you’re the kind of person who would die fighting for their magic tree, I’ll probably try to come up with a pretty appealing deal for you before I even bring up my interest in checking out the deposits under any trees you have.

Yes, it might be good for your group to seem reluctant to compromise, but how is it good for you to support such a group reluctance? That seems to be more about signaling group loyalty. The people in your group who most want compromise are those also tied to other groups with which your group has conflicts. By opposing compromise, you signal you have weaker conflicting ties. This loyalty signaling theory better explains why we often oppose compromise that is clearly in our group interest.
