Tag Archives: Disagreement

Time to Mitigate, Not Contain

In a few hours, I debate “Covid-19: Contain or Mitigate?” with Tomas “Hammer & Dance” Pueyo, author of many popular pandemic posts (e.g. 1 2 3 4 5 6 7). Let me try to summarize my position.

We have long needed a Plan B for the scenario where a big fraction of everyone gets exposed to Covid19, and for this plan I’ve explored variolation and other forms of deliberate exposure. To be ready, variolation just needs a small (~100) short (1-2mo.) trial to verify the quite likely (>75%) case that it works (cuts harm by 3-30x), but alas, while funding and volunteers can be found, med ethics panels have consistently disapproved. (Months later, they haven’t even allowed the widely praised human challenge trials for vaccines, some of which would include variolation trials.)

One big reason we aren’t preparing enough for Plan B is that many of us are mentally stuck in a Plan A “monkey trap.” Like a monkey who gets caught because it won’t let go of a tasty nut held in its fist within a gourd, we are so focused on containing this pandemic that we won’t explore options premised on failing to contain.

Containment seeks to keep infections permanently low, via keeping most from being exposed, for at least several years until a strong lasting vaccine is widely applied. Mitigation, in contrast, accepts that most will be exposed, and seeks only to limit the rate of exposure, to keep medical systems from being overwhelmed and to maintain enough critical workers in key roles.

Succeeding at containment is of course a much bigger win, which is why containment is the usual focus early in a pandemic. Catch it fast enough, and hit it hard enough with testing, tracing, and isolation, and the world is saved. But eventually, if you fail at that Plan A, and it grows big enough across a wide enough area, you may need to admit failure and switch to a Plan B.

And, alas, that’s where we seem to be now with Covid-19. Over the last 7 weeks since the case peak, the official worldwide case count has been rising slowly, while official deaths are down ~25%. In the US, deaths are down ~1/2 since the peak 6 weeks ago. You might think, “yay, declines, we are winning!” But no, these declines are just too slow, as well as too uncertain.

Most of the US decline has been in New York, which has just now reached bottom, with no more room to decline. And even if the US could maintain that rate of decline, falling by 1/3 every 6 weeks, and repeat it 5 times over 30 weeks (= 7 months), which is not at all a sure thing, that would only bring daily US cases from 31K at the peak down to ~4.2K. Which isn’t clearly low enough for test and trace to keep it under control without lockdown. And, more important, we just can’t afford to stay locked down for that long.
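For the curious, here is that projection’s arithmetic as a quick sketch, using the rough figures above:

```python
# Rough check of the projection above: cases fall by 1/3 every 6 weeks,
# repeated 5 times, starting from ~31K daily US cases at the peak.
peak_daily_cases = 31_000
periods = 5                                  # 5 x 6 weeks = 30 weeks, about 7 months
cases = peak_daily_cases * (2 / 3) ** periods
print(f"daily cases after 30 weeks: ~{cases:,.0f}")   # ~4,100/day, near the ~4.2K figure above
```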

You see, lockdown is very expensive. On average, around the world, lockdowns seem to cost about one third of local average income. Yet I estimate the cost of failing to contain, and letting half the population get infected with Covid-19 (at IFR ~0.5%), to be about 3 weeks of income. (Each death loses ~8 QALY.) So 9 weeks of strong lockdown produces about the same total harm as failing to contain! And where I live, we have had almost 10 weeks of lockdown.
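Here is that back-of-the-envelope comparison as a hedged sketch; the conversion of lost QALYs into income is an assumption I’ve added for illustration (roughly three years of income per QALY, which reproduces the ~3-weeks figure above):

```python
# Back-of-the-envelope: cost of failing to contain vs. weeks of strong lockdown.
# Added assumption for illustration: one QALY is valued at ~3 years of income.
infected_share  = 0.5       # half the population eventually infected
ifr             = 0.005     # infection fatality rate ~0.5%
qaly_per_death  = 8         # QALYs lost per Covid-19 death
income_per_qaly = 3.0       # years of income per QALY (illustrative assumption)

pandemic_cost_years = infected_share * ifr * qaly_per_death * income_per_qaly
print(f"failing to contain: ~{pandemic_cost_years * 52:.1f} weeks of income")   # ~3 weeks

lockdown_cost_per_week = (1 / 3) / 52    # a third of income, per week of lockdown
print(f"equivalent lockdown weeks: ~{pandemic_cost_years / lockdown_cost_per_week:.0f}")  # ~9
```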

If without lockdown the death rate would double due to an overloaded medical system, then paying for less than 9 weeks of added lockdown to prevent that is a good deal. But at that point, paying more than an additional 9 weeks of strong lockdown to prevent all deaths would not be a good deal. So our willingness to pay for lockdowns to cut deaths should really be quite limited. Sure, if we were at a tipping point where spending just a bit more would make all the difference between success and failure, then we should spend that bit more. But that’s just not plausibly where we are now.

Yes, sometimes we pay more to prevent harm than we suffer on average from such harms. For example, we pay more for door locks, and security guards, than we lose on average from theft. But those are very effective ways to prevent harm; paying 10% more there cuts harms by much more than 10%. Yet according to my Twitter polls, most see 10% more spent on lockdown as producing much less than 10% fewer deaths. If so, we should spend much less on lockdowns than we suffer from pandemic deaths.

Now if Pueyo is reading this, I suspect he’s screaming “But we’ve been doing it all wrong! Other possible policies exist that are far more effective, and if we use them containment becomes cost-effective. See Taiwan or South Korea.” And yes, other places have achieved better outcomes via better policies. We might not be able to do as well as them now that we’ve lost so much time, but we might well do much better than currently. Pueyo has sketched out plans, and they even seem to be good sketches.

So if we suddenly made Tomas Pueyo into a policy czar tomorrow, with an unlimited budget and able to run roughshod over most related laws or policies, we’d probably get much better Covid-19 outcomes, perhaps even cost-effective containment. But once such a precedent was set, I’d fear for the effectiveness of future czars. Ambitious politicians and rent-seekers would seek to manufacture crises and pseudo-Pueyo-czar candidates, all to get access to those unlimited budgets and legal powers.

Which is a big part of why we have the political systems we do. All around the world, we have created public health agencies tasked with pandemic policy, and political systems that oversee them. These agencies are staffed with experts trained in various schools of thought, who consult with academic experts of varying degrees of prestige. And all are constrained by local legal precedent, and by public perceptions, distrust, and axes of political conflict. These are the people and systems that have produced the varying policies we have now, all around the world.

Yes, those of us in places which have seen worse outcomes should ask ourselves how strong a critique that fact offers of our existing institutions and political cultures, and what we might do to reform them. But there is no easy and fast answer there; good reforms will have to be carefully considered, tested, and debated. We can hope to eventually improve via reforms, but, and this is the key point, we have no good reason to expect much better pandemic policy in the near future than we have seen in the near past. Even when policy makers have access to well-considered policy analyses by folks like Pueyo.

Now it might be possible to get faster political action if Pueyo and many other elites would coordinate and publicly back one specific standard plan, say the “Johns Hopkins Plan”, that specifies many details on how to do testing, tracing, isolation, etc. Especially if this plan pretty directly copied a particular successful policy package from, say, Taiwan. If enough people yelled in unison “We must do this or millions will die!”, why then politicians might well cave and make it happen.

But that is just not what is happening. Instead, we have dozens of essays and white papers pushing for dozens of related but different proposals. So there’s no clear political threat for politicians to fear defying. Whatever they do, come re-election time politicians can point to some who pushed for some of what they did. So all these divergent essays mainly have limited short-term political effects, though they may do much more to raise the status of their authors.

So if political complexity argues against containment now in many places, why doesn’t that same argument apply equally well to mitigation? After all, mitigation can also be done well or badly, and it must be overseen by the same agencies and politicians that would oversee containment. As there is no escaping the fact that many detailed policy choices must be made, why not push for the best detailed packages of choices that we know?

Imagine that you were driving from A to B, and your first instinct was to take a simple route via two interstate freeways, both in big valleys. Your friend instead suggests that you take a backroad mountain short cut, using eight different road segments, many of them with only one lane, and some very wiggly. (Assume no phone or GPS.) That plan might look like it would take less total time, but you should worry about your competence to follow it. If you are very tired, bad at following road directions, or bad at sticking to wiggly roads, you might prefer to instead take the interstates. Especially if it is your tired 16-year-old son who will do the driving.

Like the wiggly backroad short cut, containment is a more fragile plan, more sensitive to details; it has to be done more exactly right to work. To contain well, we need the right combination of the right rules about who can work and shop, and with what masks, gloves, and distance, tests going to the right people run by the right testing orgs, the right tracing done the right way by the right orgs with the right supporting apps, and the right rules requiring who gets isolated where upon what indications of possible infection. All run by the right sort of people using the right sort of local orgs and legal authority. And coordinated to right degree with neighboring jurisdictions, to avoid “peeing section of the pool” problems.

Yes, we might relax lockdown badly, but we are relaxing toward a known standard policy: no lockdown. So there are fewer ways to go wrong there. In contrast, there are just more ways to go wrong in trying to lock down even more strictly. And that’s why it can make sense for the public to say to the government, “you guys haven’t been doing so well at containment, so let’s quit that and relax lockdown faster, shooting only for mitigation.” Yes, that might go badly, but it can’t go quite as badly as the worst possible scenario, where we trash the economy with long painful lockdowns, and yet still fail and most everyone gets exposed.

And that’s my argument for mitigation, relative to containment.


Reply to Cowen On Variolation

In the last nine days I’ve done two online debates on variolation, with Zvi Mowshowitz and Gregory Cochran. In both cases my debate partners seemed to basically agree with me; disagreements were minor. Last night Tyler Cowen posted 1000+ words on “Why I do not favor variolation for Covid-19”. Yet oddly he also doesn’t seem to disagree with my main claims that (1) we are likely to need a Plan B for the all-too-likely scenario where most of the world gets infected soon, and (2) variolation is simple, mechanically feasible, and could cut Covid-19 deaths by a factor of 3-30.

Tyler lists 8 points, but really makes 11. If he had one strong argument, he’d have focused on that, and then so could I in my response. Alas, this way I can’t respond except at a similar length; you are warned.


Crush Contrarians Time?

If you are a contrarian who sees yourself as consistently able to identify contrary but true positions, covid19 offers the exciting chance to take contrary positions and then be proven right in just a few months. As opposed to typically taking decades or more to be shown right.

But what if non-contrarian conformists know that (certain types of) contrarians can often be more right, but also see that they themselves tend to win by getting more attention & affirmation in the moment, by staying in the Overton window and saying stuff near what most others think at the time?

In that case conformists may usually tolerate & engage contrarians exactly because they know contrarians take so long to be proven right. So if conformists see that now contrarians will be proven right fast, they may see it as in their interest to more strictly shun contrarians.

Consider Europe at WWI start. Many had been anti-war for decades, but that contrarian view was suddenly suppressed much more than usual. Conformists knew that skeptical views of war might be proven right in just a few years. Contrarians lost on average, even though proven right.

Humans may well have a common norm of liberally tolerating contrarians when the stakes are low and it would take decades for them to be proven right, but of shunning (or worse) contrarians when stakes are high and events are moving fast.


Common Useless Objections

As I’m often in the habit of proposing reforms, I hear many objections. Some are thoughtful and helpful but, alas, most are not. Humans are too much in the habit of quickly throwing out simple intuitive criticisms to bother to notice whether they have much of an evidential impact on the criticized claim.

Here are some common but relatively useless objections to a proposed reform. I presume a moment’s reflection on each will show why:

  1. Your short summary didn’t explicitly consider issue/objection X.
  2. You are not qualified to discuss this without Ph.D.s in all related areas.
  3. Someone with evil intent might propose this to achieve evil ends.
  4. You too quickly talked details, instead of proving you share our values.
  5. Less capable/cooperative folks like radical proposals more; so you too.
  6. Most proposals for change are worse than status quo; yours too.
  7. There would be costs to change from our current system to this.
  8. We know less about how this would work, vs. status quo.
  9. If this was a good idea, it would have already been adopted.
  10. We have no reason to think our current system isn’t the best possible.
  11. Nothing ever changes much; why pretend change is possible?
  12. No supporting analysis of type X exists (none also for status quo).
  13. Supporting analyses make assumptions which might be wrong.
  14. Supporting analyses neglect effect X (as do most related analyses).
  15. Such situations are so complex that all explicit analysis misleads.
  16. A simple variation on the proposal has problem X; so must all variations.
  17. It would be better to do X (when one can do both X and this).
  18. If this improves X, other bad systems might use that to hurt Y.

Many useless objections begin with “Under your proposal,”:

  1. we might see problem X (which we also see in status quo).
  2. people might sometimes die, or be unhappy.
  3. people might make choices without being fully informed.
  4. poor folks might be worse off than rich folks.
  5. poor folks may pick more risk or inconvenience to get more $.
  6. not all decisions are made with full democratic participation.
  7. governments sometimes coerce citizens.
  8. some people would end up worse off than otherwise.
  9. some people would suffer X, so you lack moral standing if you do not immediately make yourself suffer X.

So what do useful objections look like? Try these:

  1. I reject your goals, and so see no value in your method.
  2. We can only do one thing now, and payoff from fixing this is too small, vs. other bigger easy fix X.
  3. A naive application of your proposal has problem X; can anyone think of better variations?
  4. Problem X seems robustly larger given your proposal vs. status quo.
  5. Benefit X seems robustly smaller given your proposal vs. status quo.
  6. I’d bet that if we added effect X to your supporting analysis, we’d see your proposal is worse on metric Y.
  7. According to this analysis I now provide, your proposal looks worse on many metrics, better on only a few.
  8. Here is why the parameter space where your proposal looks good is unusually small, making it unusually fragile.
  9. This reform was unusually likely to have been considered and tried before, making it especially important to know why not.

How Bees Argue

The book Honeybee Democracy, published in 2010, has been sitting on my shelf for many years. Getting back into the topic of disagreement, I’ve finally read it. And browsing media articles about the book from back then, they just don’t seem to get it right. So let me try to do better.

In late spring and early summer, … colonies [of ordinary honeybees] become overcrowded … and then cast a swarm. … About a third of the worker bees stay at home and rear a new queen … while two-thirds of the workforce – a group of some ten thousand – rushes off with the old queen to create a daughter colony. The migrants travel only 100 feet or so before coalescing into a beardlike cluster, where they literally hang out together for several hours or a few days. .. [They then] field several hundred house [scouts] to explore some 30 square miles … for potential homesites. (p.6)

These 300-500 scouts are the oldest most experienced bees in the swarm. To start, some of them go searching for sites. Initially a scout takes 13-56 minutes to inspect a site, in part via 10-30 walking journeys inside the cavity. After inspecting a site, a scout returns to the main swarm cluster and then usually wanders around its surface doing many brief “waggle dances” which encode the direction and distance of the site. (All scouting activity stops at night, and in the rain.)

Roughly a dozen sites are discovered via scouts searching on their own. Most scouts, however, are recruited to tout a site via watching another scout dance about it, and then heading out to inspect it. Each dance is only seen by a few immediately adjacent bees. These recruited scouts seem to pick a dance at random from among the ones they’ve seen lately. While initial scouts, those not recruited via a dance, have an 86% chance of touting their site via dances, recruited scouts only have a 55% chance of doing so.

Once recruited to tout a site, each scout alternates between dancing about it at the home cluster and then returning to the site to inspect it again. After the first visit, re-inspections take only 10-20 minutes. The number of dances between site visits declines with the number of visits, and when it gets near zero, after one to six trips, the bee just stops doing any scouting activity.

This decline in touting is accelerated by direct conflict. Bees that tout one site will sometimes head-butt (and beep at) bees touting other sites. After getting hit ten times, a scout usually quits. (From what I’ve read, it isn’t clear to me if any scout, once recruited to tout a site, is ever recruited again later to tout a different site.)

When scouts are inspecting a site, they make sure to touch the other bees inspecting that site. When they see 20-30 scouts inspecting a site at once, that generally implies that a clear majority of the currently active touting scouts are favoring this site. Scouts from this winning site then return to the main cluster and make a special sound which declares the search to be over. Waiting another hour or so gives enough time for scouts to return from other sites, and then the entire cluster heads off together to this new site.

The process I’ve described so far is enough to get all the bees to pick a site together and then go there, but it isn’t enough to make that be a good site. Yet, in fact, bee swarms seem to pick the best site available to them about 95% of the time. Site quality depends on cavity size, entrance size and height, cavity orientation relative to entrance, and wall health. How do they pick the best site?

Each scout who inspects a site estimates its quality, and encodes that estimate in its dance about that site. These quality estimates are error-prone; there’s only an 80% chance that a scout will rate a much better site as better. The key that enables swarms to pick better sites is this: between their visits to a site, scouts do a lot more dances for sites they estimate to be higher quality. A scout does a total of 30 dances for a lousy site, but 90 dances for a great site.

And that’s how bee swarms argue, re picking a new site. The process only includes an elite of the most experienced 3-5% of bees. That elite all starts out with no opinion, and then slowly some of them acquire opinions, at first directly and randomly via inspecting options, and then more indirectly via randomly copying opinions expressed near them. Individual bees may never change their acquired opinions. The key is that when bees have opinions, they tend to express them more often when those opinions are better. Individual opinions fade with time, and the whole process stops when enough of a random sample of those expressing opinions all express the same opinion.
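To make the mechanism concrete, here is a toy simulation sketch of that process. It is my own illustrative model, not calibrated to real bees: the scout counts, recruitment and discovery probabilities, dance budgets, and quorum rule below are all assumptions chosen only to echo the rough numbers above.

```python
# Toy model of the bee "argument": scouts discover or are recruited to sites,
# dance more for better sites, retire after a fixed dance budget, and the
# swarm stops when one site dominates the currently active dancers.
import random

def simulate_swarm(site_qualities, n_scouts=300, n_rounds=400,
                   p_discover=0.005, p_recruit=0.05,
                   quorum_share=0.6, quorum_count=25, seed=0):
    rng = random.Random(seed)
    committed = {}        # scout id -> site it touts (scouts never switch sites here)
    dances_left = {}      # scout id -> remaining dance bouts before retiring
    uncommitted = set(range(n_scouts))

    for t in range(n_rounds):
        # 1. Committed, still-active scouts each do one dance bout this round.
        dances = []
        for s, site in committed.items():
            if dances_left[s] > 0:
                dances.append(site)
                dances_left[s] -= 1

        # 2. Recruitment: each dance is seen by only a few adjacent bees; with a
        #    small probability it converts one uncommitted scout (this folds in
        #    the ~55% chance that a recruited scout actually touts the site).
        for site in dances:
            if uncommitted and rng.random() < p_recruit:
                commit(uncommitted.pop(), site, site_qualities, committed, dances_left, rng)

        # 3. Independent discovery: a few scouts find sites on their own.
        for s in list(uncommitted):
            if rng.random() < p_discover:
                uncommitted.remove(s)
                commit(s, rng.randrange(len(site_qualities)), site_qualities,
                       committed, dances_left, rng)

        # 4. Quorum check among currently active touters.
        active = [committed[s] for s in committed if dances_left[s] > 0]
        if active:
            top = max(set(active), key=active.count)
            n_top = active.count(top)
            if n_top >= quorum_count and n_top / len(active) >= quorum_share:
                return top, t
    return None, n_rounds

def commit(s, site, qualities, committed, dances_left, rng):
    committed[s] = site
    est = qualities[site] * rng.uniform(0.8, 1.2)   # noisy quality estimate
    dances_left[s] = int(30 + 60 * est)             # ~30 dances for a lousy site, ~90 for a great one

# Five candidate sites; site 3 is best. Count how often the swarm picks it.
qualities = [0.2, 0.5, 0.3, 0.9, 0.4]
wins = sum(simulate_swarm(qualities, seed=seed)[0] == 3 for seed in range(20))
print(f"best site chosen in {wins} of 20 runs")
```

The point it illustrates is just the positive feedback: better sites get more dances per scout, more dances recruit more scouts, and that quality-weighted persistence usually lets the best site reach the quorum first.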

Now that I know all this, it isn’t clear how relevant it is for human disagreement. But it does seem a nice simple example to keep in mind. With bees, a community typically goes from wide disagreement to apparent strong agreement, without requiring particular individuals to ever give up their strongly held opinions.


Disagreement on Disagreement

I’m seriously considering returning to the topic of disagreement in one of my next two books. So I’ve been reviewing literatures, and I just tried some polls. For example:

These results surprised me. Experience I can understand, but why are IQ and credentials so low, especially relative to conversation style? And why is this so different from the cues that media, academia, and government use to decide who to believe?

To dig further, I expanded my search. I collected 16 indicators, and asked people to pick their top 4 out of these, and also for each to say “if it tends to make you look better than rivals when you disagree.” I had intended this last question to be about whether you personally tend to look better by that criterion, but I think most people just read it as asking if that indicator is especially potent in setting your perceived status in the context of a disagreement.

Here are the 16 indicators, sorted by the 2nd column, which gives % who say that indicator is in their top 4. (The average of this top 4 % is almost exactly 5/16, so these are actually stats on the top 5 indicators.)

The top 5 items on this list are all chosen by 55-62% of subjects, a pretty narrow % range, and the next 2 are each chosen by 48%. We thus see quite a wide range of opinion on what are the best indicators to judge who is right in a disagreement. The top 7 of the 16 indicators tried are similarly popular, and for each one 37-52% of subjects did not put it in their personal top 5 indicators. This suggests trying future polls with an even larger sets of candidate indicators, where we may see even wider preference variation.

The most popular indicators here seem quite different from what media, academia, and government use to decide who to believe in the context of disagreements. And if these poll participants were representative and honest about what actually persuades them, then these results suggest that speakers should adopt quite different strategies if their priority is to persuade audiences. Instead of collecting formal credentials, adopting middle-of-road positions, impugning rival motives, and offering long complex arguments, advocates should instead offer bets, adopt rational talking styles and take many tests, such as on IQ, related facts, and rival arguments.

More likely, not only do these poll respondents differ from the general population, they probably aren’t being honest about, or just don’t know, what actually persuades them. We might explore these issues via new wider polls that present vignettes of disagreements, and then ask people to pick sides. (Let me know if you’d like to work on that with me.)

The other 3 columns in the table above show the % who say an indicator gives status, the correlation across subjects between status and top 4 choices, and the number of respondents for each indicator. The overall correlation across indicators between the top 5 and status columns is 0.90. The obvious interpretation of these results is that status is closely related to persuasiveness. Whatever indicators people say persuades them, they also say give status.


Might Disagreement Fade Like Violence?

Violence was quite common during much of the ancient farming era. While farmers retained even-more-ancient norms against being the first to start a fight, it was often not easy for observers to tell who started a fight. And it was even harder to get those who did know to honestly report that to neutral outsiders. Fighters were typically celebrated for showing strength and bravery, and also loyalty when they claimed to fight “them” in service of defending “us”. Fighting was said to be good for societies, such as to help prepare for war. The net effect was that the norm against starting fights was not very effective at discouraging fights during the farming era, especially when many “us” and “them” were in close proximity.

Today, norms against starting fights are enforced far more strongly. Fights are much rarer, and when they do happen we try much harder to figure out who started them, and to more reliably punish starters. We have created much larger groups of “us” (e.g., nations), and use law to increase the resources we devote to enforcing norms against fighting, and the neutrality of many who spend those resources. Furthermore, we have and enforce stronger norms against retaliating overly strongly to apparent provocations that may have been accidental. We are less impressed by fighters, and prefer for people to use other ways to show off their strength and bravery. We see fighting as socially destructive, to be discouraged. And as fighting is rare, we infer undesired features about the few rare exceptions, such as impulsiveness and a lack of empathy.

Now consider disagreement. I have done a lot of research on this topic and am pretty confident of the following claim (which I won’t defend here): People who are mainly trying to present accurate beliefs that are informative to observers, without giving much weight to other considerations (aside from minimizing thinking effort), do not foresee disagreements. That is, while A and B may often present differing opinions, A cannot publicly predict how a future opinion that B will present on X will differ on average from A’s current opinion on X. (Formally, A’s expectation of B’s future expectation nearly equals A’s current expectation.)
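In symbols (my notation, just restating that parenthetical condition): letting $X$ be the topic and $I_A$, $I_B$ the info each party conditions on,

$$\mathbb{E}\big[\,\mathbb{E}[X \mid I_B]\;\big|\;I_A\,\big] \;\approx\; \mathbb{E}[X \mid I_A],$$

i.e., A’s best current guess of B’s future estimate of X is just A’s own current estimate of X.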

Of course today such foreseeing to disagree is quite commonplace. Which implies that in any such disagreement, one or both parties is not mainly trying to present accurate estimates. Which is a violation of our usual conversational norms for honesty. But it often isn’t easy to tell which party is not being fully honest. Especially as observers aren’t trying very hard to tell, nor to report what they see honestly when they feel inclined to support “our” side in a disagreement with “them”. Furthermore, we are often quite impressed by disagreers who are smart, knowledgeable, passionate, and unyielding. And many say that disagreements are good for innovation, or for defending our ideologies against their rivals. All of which helps explain why disagreement is so common today.

But the analogy with the history of violent physical fights suggests that other equilibria may be possible. Imagine that disagreement were much less common, and that we could spend far more resources to investigate each one, using relatively neutral people. Imagine a norm of finding disagreement surprising and expecting the participants to act surprised and dig into it. Imagine that we saw ourselves much less as closely mixed groups of “us” and “them” regarding these topics, and that we preferred other ways for people to show off loyalty, smarts, knowledge, passion, and determination.

Imagine that we saw disagreement as socially destructive, to be discouraged. And imagine that the few people who still disagreed thereby revealed undesirable features such as impulsiveness and ignorance. If it is possible to imagine all these things, then it is possible to imagine a world which has far less foreseeable disagreement than our world, comparable to how we now have much less violence than did the ancient farming world.

When confronted with such an imagined future scenario, many people today claim to see it as stifling and repressive. They very much enjoy their freedom today to freely disagree with anyone at any time. But many ancients probably also greatly enjoyed the freedom to hit anyone they liked at any time. Back then, it was probably the stronger better fighters, with the most fighting allies, who enjoyed this freedom most. Just as it is probably the people who are best at arguing to make their opponents look stupid who most enjoy our freedom to disagree today. That doesn’t mean this alternate world wouldn’t be better.


We Agree On So Much

In a standard Bayesian model of beliefs, an agent starts out with a prior distribution over a set of possible states, and then updates to a new distribution, in principle using all the info that agent has ever acquired. Using this new distribution over possible states, this agent can in principle calculate new beliefs on any desired topic. 

Regarding their belief on a particular topic then, an agent’s current belief is the result of applying their info to update their prior belief on that topic. And using standard info theory, one can count the (non-negative) number of info bits that it took to create this new belief, relative to the prior belief. (The exact formula is $\sum_i p_i \log_2(p_i/q_i)$, where $p_i$ is the new belief, $q_i$ is the prior, and $i$ ranges over possible answers to this topic question.)
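As a minimal sketch of that bit-counting formula (the example distributions below are made up for illustration, not taken from the post):

```python
# Bits of info needed to move from prior q to posterior p over a topic's answers.
from math import log2

def bits_of_info(p, q):
    """Sum_i p_i * log2(p_i / q_i), skipping zero-probability answers."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

even_prior = [0.5, 0.5]                          # a binary topic with an even prior
print(bits_of_info([0.55, 0.45], even_prior))    # ~0.007 bits: hardly know anything
print(bits_of_info([0.999, 0.001], even_prior))  # ~0.99 bits: near the binary maximum

n = 1_000_000                                    # a topic with a million possible answers
uniform = [1.0 / n] * n
confident = [1.0] + [0.0] * (n - 1)
print(bits_of_info(confident, uniform))          # ~19.9 bits = log2(1,000,000)
```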

How much info an agent acquires on a topic is closely related to how confident they become on that topic. Unless a prior starts out very confident, high confidence later can only come via updating on a great many info bits. 

Humans typically acquire vast numbers of info bits over their lifetime. By one estimate, we are exposed to 34GB per day. Yes, as a practical matter we can’t remotely make full use of all this info, but we do use a lot of it, and so our beliefs do over time embody a lot of info. And even if our beliefs don’t reflect all our available info, we can still talk about the number of bits that are embodied in any given level of confidence an agent has on a particular topic.

On many topics of great interest to us, we acquire a huge volume of info, and so become very confident. For example, consider how confident you are at the moment about whether you are alive, whether the sun is shining, that you have ten fingers, etc. You are typically VERY confident about such things, because you have access to a great many relevant bits.

On a great many other topics, however, we hardly know anything. Consider, for example, many details about the nearest alien species. Or even about the life of your ancestors ten generations back. On such topics, if we put in sufficient effort we may be able to muster many very weak clues, clues that can push our beliefs in one direction or another. But being weak, these clues don’t add up to much; our beliefs after considering such info aren’t that different from our previous beliefs. That is, on these topics we have less than one bit of info. 

Let us now collect a large broad set of such topics, and ask: what distribution should we expect to see over the number of bits per topic? This number must be positive; for many familiar topics it is much, much larger than one, while for other large sets of topics it is less than one.

The distribution most commonly observed for numbers that must be positive yet range over many orders of magnitude is: lognormal. And so I suggest that we tentatively assume a (large-sigma) lognormal distribution over the number of info bits that an agent learns per topic. This may not be exactly right, but it should be qualitatively in the ballpark.  

One obvious implication of this assumption is: few topics have nearly one bit of info. That is, most topics are ones where either we hardly know anything, or where we know so much that we are very confident. 
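As a quick illustration of that implication (the particular sigma value below is my own assumption, meant only to be “large”):

```python
# Sample bits-per-topic from a wide lognormal; most topics land far below or
# far above one bit, with only a few percent anywhere near it.
import random

random.seed(0)
sigma = 10.0    # "large sigma", in natural-log units (illustrative assumption)
topics = [random.lognormvariate(0.0, sigma) for _ in range(100_000)]

n = len(topics)
near_one = sum(1 for b in topics if 0.5 <= b <= 2.0) / n   # within a factor of 2 of one bit
tiny     = sum(1 for b in topics if b < 0.1) / n           # hardly know anything
huge     = sum(1 for b in topics if b > 10.0) / n          # quite confident

print(f"near one bit:   {near_one:.1%}")   # only a few percent
print(f"under 0.1 bits: {tiny:.1%}")       # a big chunk of topics
print(f"over 10 bits:   {huge:.1%}")       # another big chunk
```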

Note that these typical topics are not worth much thought, discussion, or work to cut biases. For example, when making decisions to maximize expected utility, or when refining the contribution that probabilities on one topic make to other topic probabilities, getting 10% of one’s bits wrong just won’t make much of a difference here. Changing 10% of 0.01 bits still leaves one’s probabilities very close to one’s prior. And changing 10% of a million bits still leaves one with very confident probabilities.

Only when the number of bits on a topic is of order unity do one’s probabilities vary substantially with 10% of one’s bits. These are the topics where it can be worth paying a fixed cost per topic to refine one’s probabilities, either to help make a decision or to help update other probability estimates. And these are the topics where we tend to think, talk, argue, and worry about our biases.

It makes sense that we tend to focus on pondering such “talkable topics”, where such thought can most improve our estimates and decisions. But don’t let this fool you into thinking we hardly agree on anything. For the vast majority of topics, we agree either that we hardly know anything, or that we quite confidently know the answer. We only meaningfully disagree on the narrow range of topics where our info is on the order of one bit, topics where it is in fact worth the bother to explore our disagreements. 

Note also that for these key talkable topics, making an analysis mistake on just one bit of relevant info is typically sufficient to induce large probability changes, and thus large apparent disagreements. And for most topics it is quite hard to think and talk without making at least one bit’s worth of error. Especially if we consume 34GB per day! So it’s completely to be expected that we will often find ourselves disagreeing on talkable topics at the level of a few bits.

So maybe cut yourself and others a bit more slack about your disagreements? And maybe you should be more okay with our using mechanisms like betting markets to average out these errors. You really can’t be that confident that it is you who has made the fewest analysis errors. 


Huemer On Disagreement

Mike Huemer on disagreement:

I participated in a panel discussion on “Peer Disagreement”. … the other person is about equally well positioned for forming an opinion about that issue — e.g., about as well informed, intelligent, and diligent as you. … discussion fails to produce agreement … Should you just stick with your own intuitions/judgments? Should you compromise by moving your credences toward the other person’s credences? …

about the problem specifically of philosophical disagreement among experts (that is, professional philosophers): it seems initially that there is something weird going on, … look how much disagreement there is, … I think it’s not so hard to understand a lot of philosophical disagreement. … we often suck as truth-seekers: Bad motives: We feel that we have to defend a view, because it’s what we’ve said in print in the past. … We lack knowledge (esp. empirical evidence) relevant to our beliefs, when that knowledge is outside the narrow confines of our academic discipline. … We often just ignore major objections to our view, even though those objections have been published long ago. … Differing intuitions. Sometimes, there are just two or more ways to “see” something. …

You might think: “But I’m a philosopher too [if you are], so does that mean I should discount my own judgments too?” Answer: it depends on whether you’re doing the things I just described. If you’re doing most of those things, it’s not that hard to tell.

Philosophy isn’t really that different from most other topic areas; disagreement is endemic most everywhere. The main ways that it is avoided are via extreme restrictions on topic, or via strong authorities who can force others to adopt their views.

Huemer is reasonable here right up until his last five words. Sure we can find lots of weak indicators of who might be more informed and careful in general, and also on particular topics. Especially important are clues on if a person listens well to others, and updates on the likely info value of others’ opinions.

But most everyone already knows this, and so typically tries to justify their disagreement by pointing to positive indicators about themselves, and negative indicators about those who disagree with them. If we could agree on the relative weight of these indicators, and act on them, then we wouldn’t actually disagree much. (Formally we wouldn’t foresee to disagree.)

But clearly we are severely biased in our estimates of these relative indicator weights, to favor ourselves. These estimates come to us quite intuitively, without needing much thought, and are typically held quite confidently, making us not very anxious about their errors. And we mostly seem to be quite sincere; we aren’t usually much aware that we might be favoring ourselves. Or if we are somewhat aware, we tend to feel especially confident that those others with whom we disagree are at least as biased as we are. I see no easy introspective fix here.

The main way I know to deal with this problem is to give yourself much stronger incentives to be right: bet on it. As soon as you start to think about how much you’d be willing to bet, and at what odds, you’ll find yourself suddenly much more aware of the many ways you might be wrong. Yes, people who bet still disagree more than is accuracy-rational, but they are much closer to the ideal. And they get even closer as they start to lose bets and update their estimates re how good they are on what topics.


To Oppose Polarization, Tug Sideways

Just over 42% of the people in each party view the opposition as “downright evil.” … nearly one out of five Republicans and Democrats agree with the statement that their political adversaries “lack the traits to be considered fully human — they behave like animals.” … “Do you ever think: ‘we’d be better off as a country if large numbers of the opposing party in the public today just died’?” Some 20% of Democrats and 16% of Republicans do think [so]. … “What if the opposing party wins the 2020 presidential election. How much do you feel violence would be justified then?” 18.3% of Democrats and 13.8% of Republicans said [between] “a little” to “a lot.” (more)

Pundits keep lamenting our increasing political polarization. And their preferred fix seems to be to write more tsk-tsk op-eds. But I can suggest a stronger fix: pull policy ropes sideways. Let me explain.

Pundit writings typically recommend some policies relative to others. In polarized times such as ours, these policy positions tend to be relatively predictable given a pundit’s political value positions, i.e., the positions they share with their political allies relative to their political enemies. And much of the content of their writings works to clarify any remaining ambiguities, i.e., to explain why their policy position is in fact a natural result of political positions they share with their allies. So only people with evil values would oppose it. So readers can say “yay us, boo them”.

Twelve years ago I described this as a huge tug-o-war:

The policy world can be thought of as consisting of a few Tug-O-War “ropes” set up in [a] high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are “one of them,” you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it. (more)

To oppose this tendency, one idea is to encourage pundits to sometimes recommend policies that are surprising or the opposite of what their political positions might suggest. That is, go pull on the opposite side of a rope sometimes, to show us that you think for yourself, and aren’t driven only by political loyalty. And yes doing this may help. But as the space of political values that we fight over is multi-dimensional, surprising pundit positions can often be framed as a choice to prioritize some values over others, i.e., as a bid to realign the existing political coalitions in value space. Yes, this may weaken the existing dominant political axis, but it may not do much to make our overall conversation less political.

Instead, I suggest that we encourage pundits to grab a policy tug-o-war rope and pull it sideways. That is, take positions that are perpendicular to the usual political value axes, in areas where one has not yet taken explicit value-oriented positions. For example, a pundit who has not yet taken a position on whether we should have more or less military spending might argue for more navy relative to army, and then insist that this is not a covert way to push a larger or smaller military. Most credibly by continuing to not take a position on overall military spending. (And by not coming from a navy family, for whom navy is a key value.)

Similarly, someone with no position on if we should punish crime more or less than we currently do might argue for replacing jail-based punishments with fines, torture, or exile. Or, given no position on more or less immigration, argue for a particular new system to decide which candidates are more worthy of admission. Or given no position on how hard we should work to compensate for past racism, argue for cash reparations relative to affirmative action.

Tugging policy ropes sideways will frustrate and infuriate loyalists who seek mainly to praise their political allies and criticize their enemies. Such loyalists will be tempted to assume the worst about you, and claim that you are trying to covertly promote enemy positions. And so they may impose a price on you for this stance. But to the extent that observers respect you, loyalists will pay a price for attacking you in this way, raising their overall costs of making everything political. And so on average by paying this price you can buy an overall intellectual conversation that’s a bit less political. Which is the goal here.

In addition, pulling ropes sideways is on average just a better way to improve policy. As I said twelve years ago:

If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy. On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled. (more)

Yes, there is a sense in which arguments for “sideways” choices do typically appeal to a shared value: “efficiency”. For example, one would typically argue for navy over army spending in terms of cost-effectiveness in military conflicts and deterrence. Or might argue for punishment via fines in terms of cost-effectiveness for the goals of deterrence or rehabilitation. But all else equal we all like cost-effectiveness; political coalitions rarely want to embrace blatant anti-efficiency positions. So the more our policy debates emphasize efficiency, the less politically polarized they should be.

Of course my suggestion here isn’t especially novel; most pundits are aware that they have the option to take the sort of sideways positions that I’ve recommended. Most are also aware that by doing so, they’d less enflame the usual political battles. Yet how often have you heard pundits protest that others falsely attributed larger value positions to them, when they really just tried to argue for cost-effectiveness of A over B using widely shared effectiveness concepts? That scenario seems quite rare to me.

So the main hope I can see here is of a new signaling equilibria where people tug sideways and brag about it, or have others brag on their behalf, to show their support for cutting political polarization. And thereby gain support from an audience who wants to reward cutters. Which of course only works if enough pundits actually believe a substantial such audience exists. So what do you say, is there much of an audience who wants to cut political polarization?
