Monthly Archives: February 2008

If Self-Fulfilling Optimism is Wrong, I Don’t Wanna be Right

Often, I hear claims like the following: "too many people are cynical about electoral politics."  It’s hard to know just what to make of that sort of assertion.  For cynicism is most likely true about electoral politics, and, moreover, as a good little Bayesian, I should count the cynicism of just about everyone else as evidence to strengthen that belief. 

"But!," the anticynic might say, "cynicism is a self-fulfilling prophecy!  If we all believe that politics is run by crooks, we won’t demand better at the voting booth [for example, because we vote strategically for the least offensive guy we think can win rather than the one we trust]!  If enough people are optimistic, your optimism will be self-fulfilling too!" 

So imagine the following belief/payoff correspondences.  If you hold a true cynical belief, you get payoff A.  If you hold a false cynical belief (cynicism in a nice world), you get payoff B.  If you hold a true optimistic belief, you get payoff C, and if you hold a false optimistic belief, you get payoff D.  Suppose C>A>B>D (or C>A>D>B; it doesn’t matter).  And suppose that the world is nice if at least M people are optimistic (where N is the number of people in the world, and N>M>1) and nasty otherwise.

Anyone who knows game theory will immediately see that this world amounts to a coordination game with two pure-strategy Nash equilibria: everyone optimistic in a nice world, and everyone cynical in a nasty world.  And the nice-world equilibrium has higher payoffs for all.
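As a minimal sketch, with placeholder payoff values chosen only to satisfy C > A > B > D and placeholder values of N and M, the stability of both equilibria can be checked directly by confirming that no single person gains by switching beliefs:

```python
# Minimal sketch of the belief coordination game. The payoff values and
# population sizes below are placeholders, chosen only to satisfy
# C > A > B > D and N > M > 1 as in the text.
C, A, B, D = 4, 3, 2, 1   # true optimism, true cynicism, false cynicism, false optimism
N, M = 10, 6              # N people; the world is nice iff at least M are optimistic

def payoff(optimist, n_optimists):
    """Payoff to one person, given their belief and the total optimist count."""
    nice = n_optimists >= M
    if optimist:
        return C if nice else D   # true vs. false optimistic belief
    return B if nice else A       # false vs. true cynical belief

# All-cynical nasty world: a lone convert to optimism holds a false belief (D < A).
assert payoff(False, 0) > payoff(True, 1)
# All-optimistic nice world: a lone convert to cynicism holds a false belief (B < C).
assert payoff(True, N) > payoff(False, N - 1)
print("both equilibria are stable")
```

The same structure shows why deception could tip the outcome: once at least M people believe (truly or falsely) that the world is nice, optimism becomes the individually rational belief for everyone else.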

Now suppose we’re in a nasty world.  How do we get to the nice world?  It seems like we’d do best if someone came along and deceived at least M people into thinking we’re in the nice world already! 

This shows us that not only can individually rational behavior be collectively suboptimal, but so can individually rational (truth-maximizing) belief.  Should we support demagoguery? 

I imagine the self-fulfilling false belief problem works in some individual cases too.  For example, suppose I have more success in dating if I’m confident.  Suppose I’m a person who has poor success in dating.  True beliefs for me are not confident ones, but I’ll do better if I adopt falsely confident beliefs, which will then be retroactively justified by the facts.  Should I engage in self-deception? 


Entropy, and Short Codes

Followup to: Where to Draw the Boundary?

Suppose you have a system X that’s equally likely to be in any of 8 possible states:

{X1, X2, X3, X4, X5, X6, X7, X8.}

There’s an extraordinarily ubiquitous quantity – in physics, mathematics, and even biology – called entropy; and the entropy of X is 3 bits.  This means that, on average, we’ll have to ask 3 yes-or-no questions to find out X’s value.  For example, someone could tell us X’s value using this code:

X1: 001    X2: 010    X3: 011    X4: 100
X5: 101    X6: 110    X7: 111    X8: 000

So if I asked "Is the first symbol 1?" and heard "yes", then asked "Is the second symbol 1?" and heard "no", then asked "Is the third symbol 1?" and heard "no", I would know that X was in state 4.
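As a quick sanity check, the following sketch computes the entropy of the uniform eight-state system and decodes a state from the answers to the three yes-or-no questions, using the code table from the text:

```python
# Entropy of a uniform 8-state system, and decoding X's value from
# three yes/no questions about the symbols of its code word.
import math

states = [f"X{i}" for i in range(1, 9)]
p = 1 / len(states)                              # each state equally likely
entropy = -sum(p * math.log2(p) for _ in states)
print(entropy)  # 3.0 bits

# The 3-bit code from the text (note X8 gets 000).
code = {"X1": "001", "X2": "010", "X3": "011", "X4": "100",
        "X5": "101", "X6": "110", "X7": "111", "X8": "000"}

def decode(answers):
    """answers[i] is True iff 'Is symbol i+1 a 1?' was answered yes."""
    bits = "".join("1" if a else "0" for a in answers)
    return next(s for s, w in code.items() if w == bits)

print(decode([True, False, False]))  # X4, as in the example
```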

Now suppose that the system Y has four possible states with the following probabilities:

Y1: 1/2 (50%)     Y2: 1/4 (25%)     Y3: 1/8 (12.5%)     Y4: 1/8 (12.5%)

Then the entropy of Y would be 1.75 bits, meaning that, on average, we can find out its value by asking 1.75 yes-or-no questions.
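Continuing the sketch, the 1.75-bit figure can be checked directly.  The prefix code below, which amounts to asking "Is it Y1?", then "Is it Y2?", then "Is it Y3?", is one illustrative choice that achieves the entropy bound exactly:

```python
# Entropy of the non-uniform system Y, and the average number of
# questions used by a prefix code matched to the probabilities.
import math

probs = {"Y1": 0.5, "Y2": 0.25, "Y3": 0.125, "Y4": 0.125}
entropy = -sum(p * math.log2(p) for p in probs.values())
print(entropy)  # 1.75 bits

# Shorter code words for likelier states: each bit answers one question.
code = {"Y1": "0", "Y2": "10", "Y3": "110", "Y4": "111"}
avg_len = sum(probs[s] * len(w) for s, w in code.items())
print(avg_len)  # 1.75 questions on average
```

Because every probability here is a power of 1/2, the average code length matches the entropy exactly; for other distributions it can only come within one bit of it.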

Continue reading "Entropy, and Short Codes" »


More Moral Wiggle Room

A new lab experiment confirms results reported a year ago:  people prefer not to know how their actions affect others, when such knowledge would induce them to sacrifice to benefit others. 

In the baseline version, each subject chose between five pairs of numbers (x,y), where x is how much money he gets and y is how much money some other subject gets.  In each pair (x,y), each number was drawn randomly from the set {1,1,4,4,7}.  Here 40 of 63 subjects appeared to put heavy weight on benefits to the other person in making their choices.

In the other treatment, each subject was shown only the x value for each of his five pairs, but could at no cost choose to see the y values.  Of the 40 subjects who in the baseline version heavily weighted benefits to others, only 10 of them chose to see the y values.  The others just picked the best option for them. 

"If only people knew how bad things are here in Z-land, they’d do something."  Yes, and maybe that is why they do not know. 


Where to Draw the Boundary?

Followup to: Arguing "By Definition"

The one comes to you and says:

Long have I pondered the meaning of the word "Art", and at last I’ve found what seems to me a satisfactory definition: "Art is that which is designed for the purpose of creating a reaction in an audience."

Just because there’s a word "art" doesn’t mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.

It feels that way, but it is not so.

Wondering how to define a word means you’re looking at the problem the wrong way – searching for the mysterious essence of what is, in fact, a communication signal.

Now, there is a real challenge which a rationalist may legitimately attack, but the challenge is not to find a satisfactory definition of a word.  The real challenge can be played as a single-player game, without speaking aloud.  The challenge is figuring out which things are similar to each other – which things are clustered together – and sometimes, which things have a common cause.

If you define "eluctromugnetism" to include lightning, include compasses, exclude light, and include Mesmer’s "animal magnetism" (what we now call hypnosis), then you will have some trouble asking "How does eluctromugnetism work?"  You have lumped together things which do not belong together, and excluded others that would be needed to complete a set.  (This example is historically plausible; Mesmer came before Faraday.)

We could say that eluctromugnetism is a wrong word, a boundary in thingspace that loops around and swerves through the clusters, a cut that fails to carve reality along its natural joints.

Continue reading "Where to Draw the Boundary?" »


The Hawthorne Effect

If you took a psychology class in college, you may have run across the so-called “Hawthorne Effect,” which is discussed in many college textbooks (see page 31 of this extensive survey from 2004) and is still cited in various studies.  But the original studies that gave the “Hawthorne Effect” its name have long been discredited, and textbooks don’t always give you the full details. First, a quick definition of the “Hawthorne Effect” from Wikipedia:

The term gets its name from a factory called the Hawthorne Works, where a series of experiments on factory workers were carried out between 1924 and 1932. There were many types of experiments conducted on the employees, but the purpose of the original ones was to study the effect of lighting on workers’ productivity. Researchers found that productivity almost always increased after a change in illumination but later returned to normal levels. This effect was observed for minute increases in illumination. . . . A second set of experiments began and were supervised by Harvard University professors Elton Mayo, Fritz Roethlisberger and William J. Dickson. They experimented on other types of changes in the working environment, using a study group of five young women. Again, no matter the change in conditions, the women nearly always produced more. The researchers reported that they had accidentally found a way to increase productivity.

But is the original research valid?  Does it really prove that workers improve their productivity no matter what changes are made to their environment, or — more broadly — that people tend to improve their performance with any change that is being studied? 

No.  As a 1998 New York Times article pointed out, “only five workers took part in the study, . . . and two were replaced partway through for gross insubordination and low output.”  In addition to the extremely small sample size and attrition, there are two additional problems: 1) the group’s performance didn’t even always increase, and 2) there were many confounding variables, such as the use of incentive pay (!) and rest breaks. In short, as this 1992 article from the American Journal of Sociology pointed out, the original data show “slender or no evidence of a Hawthorne effect.” 

Continue reading "The Hawthorne Effect" »


Arguing “By Definition”

Followup to: Sneaking in Connotations

"This plucked chicken has two legs and no feathers – therefore, by definition, it is a human!"

When people argue definitions, they usually start with some visible, known, or at least widely believed set of characteristics; then pull out a dictionary, and point out that these characteristics fit the dictionary definition; and so conclude, "Therefore, by definition, atheism is a religion!"

But visible, known, widely believed characteristics are rarely the real point of a dispute.  Just the fact that someone thinks Socrates’s two legs are evident enough to make a good premise for the argument, "Therefore, by definition, Socrates is human!" indicates that bipedalism probably isn’t really what’s at stake – or the listener would reply, "Whaddaya mean Socrates is bipedal?  That’s what we’re arguing about in the first place!"

Now there is an important sense in which we can legitimately move from evident characteristics to not-so-evident ones.  You can, legitimately, see that Socrates is human-shaped, and predict his vulnerability to hemlock.  But this probabilistic inference does not rely on dictionary definitions or common usage; it relies on the universe containing empirical clusters of similar things.

This cluster structure is not going to change depending on how you define your words.  Even if you look up the dictionary definition of "human" and it says "all featherless bipeds except Socrates", that isn’t going to change the actual degree to which Socrates is similar to the rest of us featherless bipeds.

Continue reading "Arguing “By Definition”" »


Against Polish

For our academic "knights in shining armor," do we care more that their suits shine, than that they are armor?   From a recent Science:

[Journal peer] reviewers make two common mistakes. The first mistake is to reflexively demand that more be done. Do not require experiments beyond the scope of the paper, unless the scope is too narrow. Avoid demanding that further work apply new techniques and approaches, unless the approaches and techniques used are insufficient to support the conclusions. …The second mistake … Do not reject a manuscript simply because its ideas are not original, if it offers the first strong evidence for an old but important idea. Do not reject a paper with a brilliant new idea simply because the evidence was not as comprehensive as could be imagined. Do not reject a paper simply because it is not of the highest significance, if it is beautifully executed and offers fresh ideas with strong evidence.

Most buildings have "load-bearing" beams and struts, and also extra "flourish" parts and "polish" on those parts, to help the building look good and protect it from the elements.  Similarly, intellectual writings contain both content and polish/flourish. 

Continue reading "Against Polish" »


Colorful Character Again

I just learned of a new Scientific American article on prediction markets, which is pretty positive: 

A paper … compares the performance of the IEM as a predictor of presidential elections from 1988 to 2004 with 964 polls over that same period and shows that the market was closer to the outcome of an election 74 percent of the time. … Attracted by the markets’ apparent soothsaying powers, companies such as Hewlett-Packard (HP), Google and Microsoft have established internal markets that allow employees to trade on the prospect of meeting a quarterly sales goal or a deadline for release of a new software product. As in other types of prediction markets, traders frequently seem to do better than the internal forecasts do. … Prediction markets may truly hark back to the future. "My long-run prediction is that newspapers in 2020 will look like newspapers in 1920," Wharton School’s Wolfers says. If that happens, the wisdom of crowds will have arrived at a juncture that truly rivals the musings of the most seasoned pundits.

But I am personally singled out as the colorful character who is way too positive: 

Continue reading "Colorful Character Again" »


Sneaking in Connotations

Followup to: Categorizing Has Consequences

Yesterday, we saw that in Japan, blood types have taken the place of astrology – if your blood type is AB, for example, you’re supposed to be "cool and controlled".

So suppose we decided to invent a new word, "wiggin", and defined this word to mean people with green eyes and black hair –

    A green-eyed man with black hair walked into a restaurant.
    "Ha," said Danny, watching from a nearby table, "did you see that?  A wiggin just walked into the room.  Bloody wiggins.  Commit all sorts of crimes, they do."
    His sister Erda sighed.  "You haven’t seen him commit any crimes, have you, Danny?"
    "Don’t need to," Danny said, producing a dictionary.  "See, it says right here in the Oxford English Dictionary.  ‘Wiggin.  (1)  A person with green eyes and black hair.’  He’s got green eyes and black hair, he’s a wiggin.  You’re not going to argue with the Oxford English Dictionary, are you?  By definition, a green-eyed black-haired person is a wiggin."
    "But you called him a wiggin," said Erda.  "That’s a nasty thing to say about someone you don’t even know.  You’ve got no evidence that he puts too much ketchup on his burgers, or that as a kid he used his slingshot to launch baby squirrels."
    "But he is a wiggin," Danny said patiently.  "He’s got green eyes and black hair, right?  Just you watch, as soon as his burger arrives, he’s reaching for the ketchup."

Continue reading "Sneaking in Connotations" »


Nephew Versus Nepal Charity

If you had a poor but promising nephew, you might promise to pay his way through college.  You would place some limits on his activities – you probably wouldn’t pay for a semester off to train for Halo championships.  And you might insist he maintain a minimum GPA.  But you probably wouldn’t interfere much in his choice of college or major.  And when you give to your children in your will, you rarely place restrictions on how they can spend what you give them. 

But when we help poor people in far away lands (like Nepal), we almost never just give people money with few strings attached.  We instead fund projects, run mostly by outsiders, to do things for them.  We build them dams, roads, hospitals, bed nets, laptops, irrigation ditches, and so on.  For poor people in our own nation, we act somewhere in between these two extremes.

When we give, why do we interfere so much more with distant poor, and interfere so little with those close to us? 
