Category Archives: Disagreement

Astrology Up Close

I recently appeared on the TV show William Shatner’s Weird or What?, in an episode on premonitions.  Unusually for me, this time I played the straight man, against astrologer Dr. Turi, who is said to have predicted the 9-11 attacks, Hurricane Katrina, the Columbia shuttle disaster, and others with “uncanny accuracy.” I said he makes a lot of predictions that don’t turn out, and calls attention to the few that look best.

I didn’t know it when I was filmed, but Dr. Turi agreed to demonstrate his ability, by making “specific predictions” a week in advance. He opened an envelope on camera containing this text:

These predictions will mature within 48 hours centering on the date of January 23.

  1. Expect news with earthquake above 6.0 and/or nukes (Japan/ring of fire)
  2. Expect nature devastating forces (volcano/tsunami or tornadoes)
  3. Expect scientific news from the cosmos and/or NASA
  4. Expect shocking news and very large explosions in the Middle East.
  5. Expect shocking news with satellites, airlines or airplanes
  6. Expect shocking news involving nukes
  7. Key words are “shocking news, explosions, surprises, on these days Jan …

He says he feels vindicated because, of the five predictions he gave, four came to pass. (Yeah, I count six too.) There were two quakes over 6.0, a tornado killed people in the US, NASA announced a big solar flare, and 150 were killed in an explosion in the Mideast.

I’d guess that if we applied these very same predictions to all the weeks in the last year, with the same flexibility of interpretation, we’d see a similar accuracy. In most weeks, NASA has press releases, there are earthquakes and tornados somewhere, something blows up in the Mideast, and airplanes have problems. Of course I’ll not bother to do such an evaluation – it is his job to set up careful evaluations of his abilities, if he wants us to believe them.
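
For anyone who did want to run such a check, here is a minimal sketch of its shape. Everything in it is assumed for illustration: the weekly hit rates are made-up stand-ins for counts you would instead pull from real archives (USGS quake feeds, tornado reports, NASA press releases, news wires).

    import random

    random.seed(0)

    # Assumed weekly base rates for each prediction category; these numbers
    # are illustrative guesses, not measured frequencies.  A real test would
    # replace them with counts from news archives.
    weekly_rates = {
        "quake 6.0+": 0.9,
        "volcano/tsunami/tornado": 0.5,
        "NASA/cosmos news": 0.95,
        "Mideast explosion": 0.8,
        "satellite/airplane news": 0.7,
        "nuke news": 0.4,
    }

    # Score every week of a year the way the envelope was scored: a category
    # "hits" if any matching event occurred that week.
    hits_per_week = [sum(random.random() < p for p in weekly_rates.values())
                     for _ in range(52)]

    good_weeks = sum(h >= 4 for h in hits_per_week)
    print(f"Weeks where 4+ of the 6 'predictions' come true: {good_weeks}/52")

With hit rates anywhere near these guesses, most weeks "verify" most of the envelope.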


CO2 Warming Looks Real

Many have bent my ear over the last few months about global warming skepticism.   So I’ve just done some moderate digging, and conclude:

  1. In the last half billion years, CO2 has at times been 15 times denser, but temperatures were not more than 10C warmer.  So that is about as bad as warming could get.
  2. In the last million years, CO2 usually rises after warming; clearly warming often causes CO2 increases.
  3. CO2 is clearly way up (~30%) over 150 years, and rising fast, mainly due to human emissions.  CO2 is denser than it's been for a half million years.
  4. The direct warming effect of CO2 is mild and saturating (see the sketch after this list); the effects of concern are indirect, e.g., water vapor and clouds, but the magnitude and sign of these indirect effects are far from clear.
  5. Climate model builders make indirect effect assumptions, but most observers are skeptical they’ve got them right.
  6. This uncertainty alone justifies substantial CO2 mitigation (emission cuts or geoengineering), if we are risk-averse enough and if mitigation risks are weaker.
  7. Standard warming records show a real and accelerating rise, roughly matching the CO2 rise.
  8. Such warming episodes seem common in recent history.
  9. The match between recent warming and CO2 rise details is surprisingly close, substantially raising confidence that CO2 is the main cause of recent warming.  (See this great analysis by Pablo Verdes.)  This adds support for mitigation.
  10. Among the few bets on global warming, the consensus is for more warming.
  11. Geoengineering looks far more likely to be feasible and acceptable mitigation than emissions cuts.
  12. Some doubt standard warming records, saying they are biased by urban measuring sites and arbitrary satellite record corrections.   Temperature proxies like tree rings diverge from standard records in the last fifty years. I don’t have time to dig into these disputes, so for now I defer to the usual authorities.
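
To make the "saturating" in point 4 concrete, here is a minimal sketch using the standard simplified fit for CO2's direct radiative forcing, ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998); the sensitivity number converting forcing into warming is an illustrative assumption, since its true value turns on exactly the disputed indirect effects.

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Direct radiative forcing of CO2 in W/m^2, via the simplified
        logarithmic fit of Myhre et al. (1998)."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    # Assumed sensitivity (deg C of equilibrium warming per W/m^2); this is
    # exactly the number the indirect effects in point 4 make uncertain.
    SENSITIVITY = 0.8

    for c in (280, 365, 560, 1120):  # pre-industrial, ~30% up (point 3), 2x, 4x
        f = co2_forcing(c)
        print(f"{c:4d} ppm: forcing {f:4.2f} W/m^2, "
              f"warming ~{SENSITIVITY * f:3.1f} C (with assumed sensitivity)")

Note that each doubling of concentration adds the same forcing increment; that flattening is the saturation.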

It was mostly skeptics bending my ear, and skeptical arguments are easier to find on the web.  But for now, the other side has convinced me.

Added: The Verdes paper is also here.  Here is his key figure: Continue reading "CO2 Warming Looks Real" »


Engagement Swaps

In broad disputes, such as we often find in economic policy, each "side" usually has points it thinks are neglected by other sides.  These points may be especially strong arguments for its own conclusions, or especially weak places in arguments for conclusions of other sides.  I can think of several such apparently neglected points favoring my "sides." 

Often members of a side will feel frustrated that their strongest points seem to be ignored.  Your point may seem too simple or unoriginal to be worth a new publication, but you'd sure love to make it clear to observers how weak are the responses to your strongest points.

You might think that if, in addition to wanting to support our side, we also put some weight on wanting to know the truth, we might find "gains from trade" by making "engagement swaps."  These would be deals between sides whereby we each agree to engage some points proposed by other sides.  We might agree to write so many words, or talk for so many minutes, on each point. 

I suggested this idea to my colleague Dan Klein and he actually tried a bit to see how feasible such swaps might be in his corner of the policy dispute world.  Alas, his verdict was negative; it doesn't seem very workable.  Which raises the question: why?

Some possible answers:

  1. Folks care little about truth, so there are no gains from trade.
  2. Even talking with other sides makes you seem disloyal to your side. 
  3. If they propose it, you fear adverse selection in topic and participant choices.
  4. Your responses to their strongest points will be weak, making you look weak.

Added 6May: Dan Klein comments:

It is often whole positions or whole issues that the other side too often ignores. For example, on FDA or occupational licensing, it's not that soc dems ignore individual points, but rather that they evade the whole issue.

Pox On Both Houses

In the latest American Journal of Economics and Sociology, Arthur Diamond presents a very disturbing result:

Polywater, one of the most famous mistaken scientific research programs of the past half-century, is used as a case study to examine whether polywater researchers later experienced lower citation counts, or less favorable job mobility. The primary result is that simply writing on polywater, either pro or con, has a negative impact on future citations, in comparison with those who never wrote on polywater. The lifetime value of the lost citations is roughly in the range of $13,000 to $19,000. However writing on polywater did not affect the probability of a scientist leaving university employment.

Once polywater was considered a failure, not only were those who had written in its favor punished, but those who had written against it were punished just as strongly!  If this is a typical outcome, we can conclude that academics have an incentive to simply ignore contrarian claims they do not believe will become mainstream.  Try to refute a contrarian claim, and even if you succeed you will be treated just like its defenders.  Together with last week's debating result:

If your side is currently favored, you don't want to debate the other side!

we can see that intellectuals have little incentive to engage contrarian views.  One possible cause here may be akin to "You Can't Not Believe Everything You Read".  Diamond suggests another cause:

Even if a scientist sets out to refute a theory and succeeds, the scientist might pay a penalty in that the refutation may become a forgotten dead end, not generating any further citations to the scientist who correctly authored the refutation.


Why Refuse To Debate

In my debate with Bryan Caplan (vid here), his position was a strong audience favorite before, and less so after.  When I heard that Tyler Cowen just had the same experience in his debate, I suggested to the lunch crowd that maybe debates in general move audiences toward a 50/50 opinion split.  The org that sponsored Tyler’s debate has done 27 of them so far, so I typed in their data, and it clearly supports my hypothesis:

[Figure: before-versus-after audience support across the 27 debates, showing the gain by the initially disfavored side relative to the zero-gain line.]
The initially disfavored side almost always gains a lot (vs. the zero-gain red line).  Alex Tabarrok weighs theories to explain this; my guess is that hearing half of a long, high-profile debate devoted to a position makes the two sides seem more equally plausible.
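
As a sketch of the shape of that check, with invented numbers standing in for the 27 debates' actual before/after vote shares:

    # Invented before/after audience shares (percent) for the initially
    # disfavored side; hypothetical stand-ins for the real data.
    before = [22, 31, 18, 40, 27, 35]
    after  = [35, 44, 30, 46, 41, 38]

    gains = [a - b for b, a in zip(before, after)]
    mean_gain = sum(gains) / len(gains)
    n_gained = sum(g > 0 for g in gains)

    # The "zero gain red line" corresponds to a gain of 0; the hypothesis
    # says nearly all points should land above it.
    print(f"gained in {n_gained}/{len(gains)} debates; mean gain {mean_gain:.1f} points")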

But whatever the cause, one implication seems clear: if your side is currently favored, you don’t want to debate the other side!  At least you don’t if your refusal to debate can’t too easily be made public and held against you.  If we could somehow overcome this problem, we could get a lot more debates, and substantially improve public opinion.  Any ideas?

 

Continue reading "Why Refuse To Debate" »


Talk Is Not About Info

Tyler points us to a new J Applied Psych meta-analysis of team info sharing:

Meta-analytic results from 72 independent studies (total groups = 4,795; total N = 17,279) demonstrate the importance of information sharing to team performance, cohesion, decision satisfaction, and knowledge integration. Although moderators were identified, information sharing positively predicted team performance across all levels of moderators.

BUT:

Groups tend to spend most of their time discussing the information shared by members, which is therefore redundant, rather than discussing information known only to one or a minority of members. This is important because those groups that do share unique information tend to make better decisions.  … Ironically, … groups that talked more tended to share less unique information.

Why?  My guess: people know they are respected and liked more by other team members when they say things others already agree with.  Saying something new may help the team, but it puts you at risk.


Echo Chamber Confidence

We should realize that we gain far less info in an echo chamber than from being around folks with diverse views.  The latest Journal of Experimental Psychology says we just don't get this:

The experimental task involved estimating the number of calories in measured quantities of different foods (e.g., a cup of yogurt, a bowl of cooked rice). … Participants were asked to generate a calorie estimate for each food and then indicate their confidence in it. … [Then] they were provided with the opinions of three advisors, and were given the opportunity to revise their initial estimates. They were told that they would receive a bonus for making accurate judgments, … [and] were also asked to indicate their confidence in their final (revised) estimates and to bet on their accuracy.  …

On half the trials (independent condition) the [screen] header stated that “these estimates were randomly drawn from a pool of 100 estimates made by participants in a previous study,” whereas on the remaining trials (opinion-dependent condition) the header stated that “these estimates were selected from those closest to your own initial opinion in a pool of 100 estimates made by participants in a previous study.” …

Receiving advice increased participants’ confidence in the dependent condition, but not in the independent condition.  Participants indicated greater confidence in their final estimates in the opinion-dependent than in the independent condition.  In accord with the confidence results, the participants bet more often in the dependent (58%) than in the independent condition (42%).

Please, please, don't let yourself succumb to the very common bias toward more confidence in your views because "everyone" at your favorite website agrees with them, if those people have been selected for that very agreement!  Once you realize that many others elsewhere disagree, that disagreement should weigh heavily in your estimation. 
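
A toy simulation (not the study's design or numbers) shows why advice selected to agree with you carries so little information: advisors chosen for closeness to your guess mostly reflect your own error back at you.

    import random
    import statistics

    random.seed(0)

    def trial(select_agreeing, true_value=100.0, noise=20.0,
              pool_size=100, n_advisors=3):
        """One estimation trial; all parameters are illustrative assumptions."""
        own = random.gauss(true_value, noise)
        pool = [random.gauss(true_value, noise) for _ in range(pool_size)]
        if select_agreeing:
            # Echo chamber: the three pool estimates closest to your own.
            advisors = sorted(pool, key=lambda e: abs(e - own))[:n_advisors]
        else:
            # Independent condition: three random draws from the pool.
            advisors = random.sample(pool, n_advisors)
        revised = statistics.mean([own] + advisors)
        return abs(revised - true_value)

    for select_agreeing in (False, True):
        errors = [trial(select_agreeing) for _ in range(10_000)]
        label = "agreeing advisors" if select_agreeing else "random advisors"
        print(f"{label}: mean error {statistics.mean(errors):.1f}")

Warranted confidence should rise only in the random-advisor case; the study's subjects did the reverse.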


Toddlers Avoid Dissenters

The majoritarian instinct arrives very early.  The latest Psychological Science says toddlers prefer information from informants who agreed with a majority: 

In two experiments, 3- and 4-year-olds were tested for their sensitivity to agreement and disagreement among informants. In pretest trials, they watched as three of four informants (Experiment 1) or two of three informants (Experiment 2) indicated the same referent for an unfamiliar label; the remaining informant was a lone dissenter who indicated a different referent. Asked for their own judgment, the preschoolers sided with the majority rather than the dissenter. In subsequent test trials, one member of the majority and the dissenter remained present and continued to provide conflicting information about the names of unfamiliar objects. Children remained mistrustful of the dissenter. They preferred to seek and endorse information from the informant who had belonged to the majority. The implications and scope of children's early sensitivity to group consensus are discussed.

Of course this can be interpreted either as an info or conformity strategy.


Break Cryonics Down

The essence of analysis is to "break it down", to take apart vague wholes into clearer parts.  For the same reasons we make point lists to help us make tough job decisions, or ask people who sue for damages to name an amount and break it into components, we should try to break down these important social claims via simple calculations.  And the absence of attempts at this is a sad commentary on something. [Me last July]

Imagine you disagreed with someone about the fastest way to get from your office to Times Square NYC; you said drive, they said fly.  You broke down your time estimates for the two paths into part estimates: times to drive to the airport, wait at the airport, fly, wait for a taxi, ride the taxi, etc.  They refused to offer any component estimates; they just insisted on confidence in their total difference estimate. 

Similarly, imagine someone who disagreed about which of two restaurants was better for a certain group, but wouldn't break that down into who would like or dislike what aspects of the two places.  Or imagine someone who claimed their business plan would be profitable, but refused to break this down into how many of what types of units would be sold when, or what various inputs would cost.  Or someone who said US military spending was worth the cost, but refused to break this down into which enemies were how discouraged from what sorts of damage by that last spending increment.
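
For concreteness, here is what offering component estimates in the drive-versus-fly example might look like; every number is a hypothetical placeholder for a disputant's actual part estimates.

    # Hypothetical component estimates, in minutes, for the office-to-Times-
    # Square dispute above.
    drive = {"walk to car": 5, "drive in": 230, "park": 15, "walk over": 10}
    fly = {"drive to airport": 35, "security and wait": 90, "flight": 60,
           "wait for taxi": 15, "taxi in": 45}

    for name, parts in (("drive", drive), ("fly", fly)):
        detail = ", ".join(f"{step} {mins}m" for step, mins in parts.items())
        print(f"{name}: {sum(parts.values())} min total ({detail})")

Once both sides post such tables, the disagreement localizes to specific parts, which is exactly what analysis is for.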

Such silent disputants reject our most powerful tool for resolving disagreements: analysis – breaking vaguer wholes into clearer parts.  Either they have not used this tool to test or refine their estimates, or they are not willing to discuss such parts with you.  I felt Tyler made this analysis-blocking move in our diavlog:

Continue reading "Break Cryonics Down" »


Share likelihood ratios, not posterior beliefs

When I think of Aumann's agreement theorem, my first reflex is to average.  You think A is 80% likely; my initial impression is that it's 60% likely.  After you and I talk, maybe we both should think 70%.  "Average your starting beliefs", or perhaps "do a weighted average, weighted by expertise" is a common heuristic.

But sometimes, not only is the best combination not the average, it's more extreme than either original belief.

Let's say Jane and James are trying to determine whether a particular coin is fair.  They both think there's an 80% chance the coin is fair.  They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.

Jane flips the coin five times, performs a perfect Bayesian update, and concludes there's a 65% chance the coin is unfair.  James flips the coin five times, performs a perfect Bayesian update, and concludes there's a 39% chance the coin is unfair.  The averaging heuristic would suggest that the correct answer is between 65% and 39%.  But a perfect Bayesian, hearing both Jane's and James's estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair.  [Math footnoted.]

Perhaps Jane and James are combining this information in the middle of a crowded tavern, with no pen and paper in sight.  Maybe they don't have time or memory enough to tell each other all the coin flips they observed.  So instead they just tell each other their posterior probabilities – a nice, short summary for a harried rationalist pair.  Perhaps this brevity is why we tend to average posterior beliefs.

However, there is an alternative.  Jane and James can trade likelihood ratios.  Like posterior beliefs, likelihood ratios are a condensed summary; and, unlike posterior beliefs, sharing likelihood ratios actually works.
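
As a minimal sketch, here is the coin example in code.  Working backward from the stated posteriors, Jane must have seen five heads in five flips and James four; multiplying their likelihood ratios, then applying the shared prior once, recovers the 83% figure.

    P_UNFAIR_PRIOR = 0.20           # both start at 80% fair
    P_HEADS_FAIR, P_HEADS_UNFAIR = 0.50, 0.75

    def likelihood_ratio(heads, flips):
        """P(data | unfair) / P(data | fair) for one flip sequence."""
        p_if_unfair = P_HEADS_UNFAIR**heads * (1 - P_HEADS_UNFAIR)**(flips - heads)
        p_if_fair = P_HEADS_FAIR**flips
        return p_if_unfair / p_if_fair

    def posterior_unfair(lr):
        odds = P_UNFAIR_PRIOR / (1 - P_UNFAIR_PRIOR) * lr
        return odds / (1 + odds)

    lr_jane = likelihood_ratio(5, 5)    # -> posterior ~65% unfair
    lr_james = likelihood_ratio(4, 5)   # -> posterior ~39% unfair
    lr_combined = lr_jane * lr_james    # trading LRs: just multiply them

    print(f"Jane {posterior_unfair(lr_jane):.0%}, "
          f"James {posterior_unfair(lr_james):.0%}, "
          f"combined {posterior_unfair(lr_combined):.0%}")   # ~83%

Multiplying the posterior odds directly would double-count the shared prior; trading likelihood ratios counts the prior once and each flip's evidence exactly once.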

Continue reading "Share likelihood ratios, not posterior beliefs" »
