Tag Archives: Disagreement

Crush Contrarians Time?

If you are a contrarian who sees yourself as consistently able to identify contrary but true positions, Covid-19 offers the exciting chance to take contrary positions and then be proven right in just a few months, as opposed to the decades or more it typically takes to be shown right.

But what if non-contrarian conformists know that (certain types of) contrarians can often be more right, yet see that they themselves tend to win by getting more attention & affirmation in the moment, by staying in the Overton window and saying things near what most others think at the time?

In that case conformists may usually tolerate & engage contrarians exactly because they know contrarians take so long to be proven right. So if conformists see that now contrarians will be proven right fast, they may see it as in their interest to more strictly shun contrarians.

Consider Europe at WWI start. Many had been anti-war for decades, but that contrarian view was suddenly suppressed much more than usual. Conformists knew that skeptical views of war might be proven right in just a few years. Contrarians lost on average, even though proven right.

Humans may well have a common norm of liberally tolerating contrarians when the stakes are low and it would take decades to be proven right, but shunning and worse to contrarians when stakes are high and events are moving fast.


Common Useless Objections

As I’m often in the habit of proposing reforms, I hear many objections. Some are thoughtful and helpful but, alas, most are not. Humans are too much in the habit of quickly throwing out simple intuitive criticisms to bother to notice whether they have much of an evidential impact on the criticized claim.

Here are some common but relatively useless objections to a proposed reform. I presume a moment’s reflection on each will show why:

  1. Your short summary didn’t explicitly consider issue/objection X.
  2. You are not qualified to discuss this without Ph.D.s in all related areas.
  3. Someone with evil intent might propose this to achieve evil ends.
  4. You too quickly talked details, instead of proving you share our values.
  5. Less capable/cooperative folks like radical proposals more; so you too.
  6. Most proposals for change are worse than status quo; yours too.
  7. There would be costs to change from our current system to this.
  8. We know less about how this would work, vs. status quo.
  9. If this was a good idea, it would have already been adopted.
  10. We have no reason to think our current system isn’t the best possible.
  11. Nothing ever changes much; why pretend change is possible?
  12. No supporting analysis of type X exists (none also for status quo).
  13. Supporting analyses make assumptions which might be wrong.
  14. Supporting analyses neglect effect X (as do most related analyses).
  15. Such situations are so complex that all explicit analysis misleads.
  16. A simple variation on the proposal has problem X; so must all variations.
  17. It would be better to do X (when one can do both X and this).
  18. If this improves X, other bad systems might use that to hurt Y.

Many useless objections begin with “Under your proposal,”:

  1. we might see problem X (which we also see in status quo).
  2. people might sometimes die, or be unhappy.
  3. people might make choices without being fully informed.
  4. poor folks might be worse off than rich folks.
  5. poor folks may pick more risk or inconvenience to get more $.
  6. not all decisions are made with full democratic participation.
  7. governments sometimes coerce citizens.
  8. some people would end up worse off than otherwise.
  9. some people would suffer X, so you lack moral standing if you do not immediately make yourself suffer X.

So what do useful objections look like? Try these:

  1. I reject your goals, and so see no value in your method.
  2. We can only do one thing now, and payoff from fixing this is too small, vs. other bigger easy fix X.
  3. A naive application of your proposal has problem X; can anyone think of better variations?
  4. Problem X seems robustly larger given your proposal vs. status quo.
  5. Benefit X seems robustly smaller given your proposal vs. status quo.
  6. I’d bet that if we added effect X to your supporting analysis, we’d see your proposal is worse on metric Y.
  7. According to this analysis I now provide, your proposal looks worse on many metrics, better on only a few.
  8. Here is why the parameter space where your proposal looks good is unusually small, making it unusually fragile.
  9. This reform was unusually likely to have been considered and tried before, making it especially important to know why not.

How Bees Argue

The book Honeybee Democracy, published in 2010, has been sitting on my shelf for many years. Getting back into the topic of disagreement, I’ve finally read it. And browsing media articles about the book from back then, they just don’t seem to get it right. So let me try to do better.

In late spring and early summer, … colonies [of ordinary honeybees] become overcrowded … and then cast a swarm. … About a third of the worker bees stay at home and rear a new queen … while two-thirds of the workforce – a group of some ten thousand – rushes off with the old queen to create a daughter colony. The migrants travel only 100 feet or so before coalescing into a beardlike cluster, where they literally hang out together for several hours or a few days. … [They then] field several hundred house [scouts] to explore some 30 square miles … for potential homesites. (p.6)

These 300-500 scouts are the oldest most experienced bees in the swarm. To start, some of them go searching for sites. Initially a scout takes 13-56 minutes to inspect a site, in part via 10-30 walking journeys inside the cavity. After inspecting a site, a scout returns to the main swarm cluster and then usually wanders around its surface doing many brief “waggle dances” which encode the direction and distance of the site. (All scouting activity stops at night, and in the rain.)

Roughly a dozen sites are discovered via scouts searching on their own. Most scouts, however, are recruited to tout a site via watching another scout dance about it, and then heading out to inspect it. Each dance is only seen by a few immediately adjacent bees. These recruited scouts seem to pick a dance at random from among the ones they’ve seen lately. While initial scouts, those not recruited via a dance, have an 86% chance of touting their site via dances, recruited scouts only have a 55% chance of doing so.

Once recruited to tout a site, each scout alternates between dancing about it at the home cluster and then returning to the site to inspect it again. After the first visit, re-inspections take only 10-20 minutes. The number of dances between site visits declines with the number of visits, and when it gets near zero, after one to six trips, the bee just stops doing any scouting activity.

This decline in touting is accelerated by direct conflict. Bees that tout one site will sometimes head-butt (and beep at) bees touting other sites. After getting hit ten times, a scout usually quits. (From what I’ve read, it isn’t clear to me if any scout, once recruited to tout a site, is ever recruited again later to tout a different site.)

When scouts are inspecting a site, they make sure to touch the other bees inspecting that site. When they see 20-30 scouts inspecting a site at once, that generally implies that a clear majority of the currently active touting scouts are favoring this site. Scouts from this winning site then return to the main cluster and make a special sound which declares the search to be over. Waiting another hour or so gives enough time for scouts to return from other sites, and then the entire cluster heads off together to this new site.

The process I’ve described so far is enough to get all the bees to pick a site together and then go there, but it isn’t enough to make that a good site. Yet, in fact, bee swarms seem to pick the best site available to them about 95% of the time. Site quality depends on cavity size, entrance size and height, cavity orientation relative to entrance, and wall health. How do they pick the best site?

Each scout who inspects a site estimates its quality, and encodes that estimate in its dance about that site. These quality estimates are error-prone; there’s only an 80% chance that a scout will rate a much better site as better. The key that enables swarms to pick better sites is this: between their visits to a site, scouts do a lot more dances for sites they estimate to be higher quality. A scout does a total of 30 dances for a lousy site, but 90 dances for a great site.

And that’s how bee swarms argue, re picking a new site. The process only includes an elite of the most experienced 3-5% of bees. That elite all starts out with no opinion, and then slowly some of them acquire opinions, at first directly and randomly via inspecting options, and then more indirectly via randomly copying opinions expressed near them. Individual bees may never change their acquired opinions. The key is that bees with opinions tend to express them more often when those opinions are better. Individual opinions fade with time, and the whole process stops when enough of a random sample of those expressing opinions all express the same opinion.
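
To make the mechanics concrete, here is a toy agent-based sketch of that process in Python. All the specific numbers below (scout counts, the 10% chance of seeing a dance per round, trip budgets, quorum size, site qualities) are rough stand-ins I picked for illustration, not Seeley’s measured values; the point is only that quality-weighted dancing plus a quorum rule tends to settle the swarm on the best site.

```python
import random

# Toy simulation of the bee "argument" described above. Committed scouts
# dance in proportion to their site's quality and retire after a few trips,
# idle scouts copy a randomly seen dance, and the swarm decides once one
# site hosts a quorum of currently committed scouts.

random.seed(2)

def run_swarm(qualities, n_scouts=300, quorum=30, trips=5):
    # committed[site] holds the remaining trip count of each scout touting it
    committed = {i: [] for i in range(len(qualities))}
    idle = n_scouts - 12
    for _ in range(12):  # a dozen initial scouts discover sites on their own
        committed[random.randrange(len(qualities))].append(trips)

    while True:
        # quorum check: enough scouts currently committed to one site?
        for site, scouts in committed.items():
            if len(scouts) >= quorum:
                return site
        # dance floor: each committed scout advertises its site, with a
        # number of dances proportional to that site's quality
        dances = [site
                  for site, scouts in committed.items()
                  for _ in scouts
                  for _ in range(qualities[site])]
        if not dances:
            return None  # everyone retired before any quorum formed
        # each idle scout occasionally sees one random dance and is recruited
        recruited = 0
        for _ in range(idle):
            if random.random() < 0.1:
                committed[random.choice(dances)].append(trips)
                recruited += 1
        idle -= recruited
        # committed scouts use up a trip; those out of trips quietly retire
        for site in committed:
            committed[site] = [t - 1 for t in committed[site] if t > 1]

wins = sum(run_swarm([3, 5, 9]) == 2 for _ in range(200))
print(f"best site chosen in {wins} of 200 runs")
```

In this simplified setup the highest-quality site should win the large majority of runs, loosely echoing the roughly 95% success rate reported for real swarms.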

Now that I know all this, it isn’t clear how relevant it is for human disagreement. But it does seem a nice simple example to keep in mind. With bees, a community typically goes from wide disagreement to apparent strong agreement, without requiring particular individuals to ever give up their strongly held opinions.


Disagreement on Disagreement

I’m seriously considering returning to the topic of disagreement in one of my next two books. So I’ve been reviewing literatures, and I just tried some polls. For example:

These results surprised me. Experience I can understand, but why are IQ and credentials so low, especially relative to conversation style? And why is this so different from the cues that media, academia, and government use to decide who to believe?

To dig further, I expanded my search. I collected 16 indicators, and asked people to pick their top 4 out of these, and also for each to say “if it tends to make you look better than rivals when you disagree.” I had intended this last question to be about whether you personally tend to look better by that criterion, but I think most people just read it as asking if that indicator is especially potent in setting your perceived status in the context of a disagreement.

Here are the 16 indicators, sorted by the 2nd column, which gives % who say that indicator is in their top 4. (The average of this top 4 % is almost exactly 5/16, so these are actually stats on the top 5 indicators.)

The top 5 items on this list are all chosen by 55-62% of subjects, a pretty narrow % range, and the next 2 are each chosen by 48%. We thus see quite a wide range of opinion on what are the best indicators to judge who is right in a disagreement. The top 7 of the 16 indicators tried are similarly popular, and for each one 37-52% of subjects did not put it in their personal top 5 indicators. This suggests trying future polls with even larger sets of candidate indicators, where we may see even wider preference variation.

The most popular indicators here seem quite different from what media, academia, and government use to decide who to believe in the context of disagreements. And if these poll participants were representative and honest about what actually persuades them, then these results suggest that speakers should adopt quite different strategies if their priority is to persuade audiences. Instead of collecting formal credentials, adopting middle-of-road positions, impugning rival motives, and offering long complex arguments, advocates should instead offer bets, adopt rational talking styles and take many tests, such as on IQ, related facts, and rival arguments.

More likely, not only do these poll respondents differ from the general population, they probably aren’t being honest about, or just don’t know, what actually persuades them. We might explore these issues via new wider polls that present vignettes of disagreements, and then ask people to pick sides. (Let me know if you’d like to work on that with me.)

The other 3 columns in the table above show the % who say an indicator gives status, the correlation across subjects between status and top 4 choices, and the number of respondents for each indicator. The overall correlation across indicators between the top 5 and status columns is 0.90. The obvious interpretation of these results is that status is closely related to persuasiveness. Whatever indicators people say persuades them, they also say give status.


Might Disagreement Fade Like Violence?

Violence was quite common during much of the ancient farming era. While farmers retained even-more-ancient norms against being the first to start a fight, it was often not easy for observers to tell who started a fight. And it was even harder to get those who did know to honestly report that to neutral outsiders. Fighters were typically celebrated for showing strength and bravery, and also loyalty when they claimed to fight “them” in service of defending “us”. Fighting was said to be good for societies, such as to help prepare for war. The net effect was that the norm against starting fights was not very effective at discouraging fights during the farming era, especially when many “us” and “them” were in close proximity.

Today, norms against starting fights are enforced far more strongly. Fights are much rarer, and when they do happen we try much harder to figure out who started them, and to more reliably punish starters. We have created much larger groups of “us” (e.g., nations), and use law to increase the resources we devote to enforcing norms against fighting, and the neutrality of many who spend those resources. Furthermore, we have and enforce stronger norms against retaliating overly strongly to apparent provocations that may have been accidental. We are less impressed by fighters, and prefer for people to use other ways to show off their strength and bravery. We see fighting as socially destructive, to be discouraged. And as fighting is rare, we infer undesired features about the few rare exceptions, such as impulsiveness and a lack of empathy.

Now consider disagreement. I have done a lot of research on this topic and am pretty confident of the following claim (which I won’t defend here): People who are mainly trying to present accurate beliefs that are informative to observers, without giving much weight to other considerations (aside from minimizing thinking effort), do not foresee disagreements. That is, while A and B may often present differing opinions, A cannot publicly predict how a future opinion that B will present on X will differ on average from A’s current opinion on X. (Formally, A’s expectation of B’s future expectation nearly equals A’s current expectation.)
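
A tiny Monte Carlo sketch may help make that formal claim concrete. The setup below is my own toy illustration, not something taken from the research I mention: two common-prior Bayesians estimate a coin’s bias, and B’s future estimate will be based on all of A’s current data plus extra flips of B’s own. Under those assumptions, A’s expectation of B’s future estimate equals A’s current estimate, so A cannot foresee the direction of any disagreement.

```python
import random

# A minimal sketch (my own toy setup): common-prior Bayesians A and B
# estimate a coin's bias. A has seen some flips; B will later see all of
# A's flips plus extra flips of its own. By the tower property, A's
# expectation of B's future estimate equals A's current estimate, so A
# cannot predict which way B will end up disagreeing.

random.seed(0)

def post_mean(heads, flips):
    # Beta(1,1) prior updated on `flips` observations with `heads` successes
    return (1 + heads) / (2 + flips)

heads_A, n_A = 4, 5                      # A's concrete data: 4 heads in 5 flips
est_A = post_mean(heads_A, n_A)

# Average B's future estimate over what A currently believes: draw the bias
# from A's posterior, then draw B's extra flips from that bias.
n_B, trials, total = 20, 200_000, 0.0
for _ in range(trials):
    bias = random.betavariate(1 + heads_A, 1 + n_A - heads_A)
    heads_B = sum(random.random() < bias for _ in range(n_B))
    total += post_mean(heads_A + heads_B, n_A + n_B)

print(f"A's current estimate:              {est_A:.4f}")
print(f"A's expectation of B's future one: {total / trials:.4f}")  # ~equal
```

Foreseeable disagreement only shows up once assumptions like these are dropped.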

Of course today such foreseeing to disagree is quite commonplace. Which implies that in any such disagreement, one or both parties is not mainly trying to present accurate estimates. Which is a violation of our usual conversational norms for honesty. But it often isn’t easy to tell which party is not being fully honest. Especially as observers aren’t trying very hard to tell, nor to report what they see honestly when they feel inclined to support “our” side in a disagreement with “them”. Furthermore, we are often quite impressed by disagreers who are smart, knowledgeable, passionate, and unyielding. And many say that disagreements are good for innovation, or for defending our ideologies against their rivals. All of which helps explain why disagreement is so common today.

But the analogy with the history of violent physical fights suggests that other equilibria may be possible. Imagine that disagreement were much less common, and that we could spend far more resources to investigate each one, using relatively neutral people. Imagine a norm of finding disagreement surprising and expecting the participants to act surprised and dig into it. Imagine that we saw ourselves much less as closely mixed groups of “us” and “them” regarding these topics, and that we preferred other ways for people to show off loyalty, smarts, knowledge, passion, and determination.

Imagine that we saw disagreement as socially destructive, to be discouraged. And imagine that the few people who still disagreed thereby revealed undesirable features such as impulsiveness and ignorance. If it is possible to imagine all these things, then it is possible to imagine a world which has far less foreseeable disagreement than our world, comparable to how we now have much less violence than did the ancient farming world.

When confronted with such an imagined future scenario, many people today claim to see it as stifling and repressive. They very much enjoy their freedom today to freely disagree with anyone at any time. But many ancients probably also greatly enjoyed the freedom to hit anyone they liked at any time. Back then, it was probably the stronger better fighters, with the most fighting allies, who enjoyed this freedom most. Just as today it is probably the people who are best at arguing to make their opponents look stupid who most enjoy our freedom to disagree. That doesn’t mean this alternate world wouldn’t be better.


We Agree On So Much

In a standard Bayesian model of beliefs, an agent starts out with a prior distribution over a set of possible states, and then updates to a new distribution, in principle using all the info that agent has ever acquired. Using this new distribution over possible states, this agent can in principle calculate new beliefs on any desired topic. 

Regarding their belief on a particular topic then, an agent’s current belief is the result of applying their info to update their prior belief on that topic. And using standard info theory, one can count the (non-negative) number of info bits that it took to create this new belief, relative to the prior belief. (The exact formula is $\sum_i p_i \log_2(p_i / q_i)$, where $p_i$ is the new belief, $q_i$ is the prior, and $i$ ranges over possible answers to this topic question.)
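
For concreteness, here is a small sketch of that bit count in code; the example topic size and probabilities are mine, chosen only to illustrate the formula.

```python
import math

# Relative entropy, in bits, between a new belief p and a prior q over
# the possible answers to a topic question.

def info_bits(p, q):
    """sum_i p_i * log2(p_i / q_i); terms with p_i = 0 contribute nothing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

n = 1024                                  # suppose 1024 possible answers
prior = [1 / n] * n                       # start out with no idea at all

confident = [0.999] + [0.001 / (n - 1)] * (n - 1)   # nearly sure of one answer
weak_clue = [0.002] + [0.998 / (n - 1)] * (n - 1)   # barely nudged off the prior

print(round(info_bits(confident, prior), 3))  # roughly 10 bits of info
print(round(info_bits(weak_clue, prior), 5))  # ~0.0006 bits, well under one bit
```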

How much info an agent acquires on a topic is closely related to how confident they become on that topic. Unless a prior starts out very confident, high confidence later can only come via updating on a great many info bits. 

Humans typically acquire vast numbers of info bits over their lifetime. By one estimate, we are exposed to 34GB per day. Yes, as a practical matter we can’t remotely make full use of all this info, but we do use a lot of it, and so our beliefs do over time embody a lot of info. And even if our beliefs don’t reflect all our available info, we can still talk about the number of bits that are embodied in any given level of confidence an agent has on a particular topic.

On many topics of great interest to us, we acquire a huge volume of info, and so become very confident. For example, consider how confident you are at the moment about whether you are alive, whether the sun is shining, that you have ten fingers, etc. You are typically VERY confident about such things, because you have access to a great many relevant bits.

On a great many other topics, however, we hardly know anything. Consider, for example, many details about the nearest alien species. Or even about the life of your ancestors ten generations back. On such topics, if we put in sufficient effort we may be able to muster many very weak clues, clues that can push our beliefs in one direction or another. But being weak, these clues don’t add up to much; our beliefs after considering such info aren’t that different from our previous beliefs. That is, on these topics we have less than one bit of info. 

Let us now collect a large broad set of such topics, and ask: what distribution should we expect to see over the number of bits per topic? This number must be positive, for many familiar topics it is much much larger than one, while for other large sets of topics, it is less than one. 

The distribution most commonly observed for numbers that must be positive yet range over many orders of magnitude is: lognormal. And so I suggest that we tentatively assume a (large-sigma) lognormal distribution over the number of info bits that an agent learns per topic. This may not be exactly right, but it should be qualitatively in the ballpark.  

One obvious implication of this assumption is: few topics have nearly one bit of info. That is, most topics are ones where either we hardly know anything, or where we know so much that we are very confident. 
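
As a quick illustration of this implication, here is a small simulation under that lognormal assumption; the particular parameters (a median of one bit, a sigma of 10) are arbitrary picks of mine, meant only to show the qualitative pattern.

```python
import random

# Draw per-topic info bits from a large-sigma lognormal and see how few
# topics land anywhere near the one-bit range.

random.seed(0)
bits = [random.lognormvariate(0.0, 10.0) for _ in range(100_000)]

def frac(lo, hi):
    return sum(1 for b in bits if lo <= b < hi) / len(bits)

print(f"near one bit (0.5 to 2):  {frac(0.5, 2.0):.2%}")
print(f"under a tenth of a bit:   {frac(0.0, 0.1):.2%}")
print(f"over a hundred bits:      {frac(100.0, float('inf')):.2%}")
```

Even with the median pinned right at one bit, which is the placement most favorable to the one-bit range, only a few percent of topics land there; most sit far below or far above.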

Note that these typical topics are not worth much thought, discussion, or work to cut biases. For example, when making decisions to maximize expected utility, or when refining the contribution that probabilities on one topic make to other topic probabilities, getting 10% of one’s bits wrong just won’t make much of a difference here. Changing 10% of 0.01 bits still leaves one’s probabilities very close to one’s prior. And changing 10% of a million bits still leaves one with very confident probabilities.

Only when the number of bits on a topic is of order unity do one’s probabilities vary substantially with 10% of one’s bits. These are the topics where it can be worth paying a fixed cost per topic to refine one’s probabilities, either to help make a decision or to help update other probability estimates. And these are the topics where we tend to think, talk, argue, and worry about our biases.

It makes sense that we tend to focus on pondering such “talkable topics”, where such thought can most improve our estimates and decisions. But don’t let this fool you into thinking we hardly agree on anything. For the vast majority of topics, we agree either that we hardly know anything, or that we quite confidently know the answer. We only meaningfully disagree on the narrow range of topics where our info is on the order of one bit, topics where it is in fact worth the bother to explore our disagreements. 

Note also that for these key talkable topics, making an analysis mistake on just one bit of relevant info is typically sufficient to induce large probability changes, and thus large apparent disagreements. And for most topics it is quite hard to think and talk without making at least one bit’s worth of error. Especially if we consume 34GB per day! So it’s completely to be expected that we will often find ourselves disagreeing on talkable topics at the level of a few bits.

So maybe cut yourself and others a bit more slack about your disagreements? And maybe you should be more okay with our using mechanisms like betting markets to average out these errors. You really can’t be that confident that it is you who has made the fewest analysis errors. 


Huemer On Disagreement

Mike Huemer on disagreement:

I participated in a panel discussion on “Peer Disagreement”. … the other person is about equally well positioned for forming an opinion about that issue — e.g., about as well informed, intelligent, and diligent as you. … discussion fails to produce agreement … Should you just stick with your own intuitions/judgments? Should you compromise by moving your credences toward the other person’s credences? …

about the problem specifically of philosophical disagreement among experts (that is, professional philosophers): it seems initially that there is something weird going on, … look how much disagreement there is, … I think it’s not so hard to understand a lot of philosophical disagreement. … we often suck as truth-seekers: Bad motives: We feel that we have to defend a view, because it’s what we’ve said in print in the past. … We lack knowledge (esp. empirical evidence) relevant to our beliefs, when that knowledge is outside the narrow confines of our academic discipline. … We often just ignore major objections to our view, even though those objections have been published long ago. … Differing intuitions. Sometimes, there are just two or more ways to “see” something. …

You might think: “But I’m a philosopher too [if you are], so does that mean I should discount my own judgments too?” Answer: it depends on whether you’re doing the things I just described. If you’re doing most of those things, it’s not that hard to tell.

Philosophy isn’t really that different from most other topic areas; disagreement is endemic most everywhere. The main ways that it is avoided are via extreme restrictions on topic, or strong authorities who can force others to adopt their views.

Huemer is reasonable here right up until his last five words. Sure we can find lots of weak indicators of who might be more informed and careful in general, and also on particular topics. Especially important are clues on whether a person listens well to others, and updates on the likely info value of others’ opinions.

But most everyone already knows this, and so typically tries to justify their disagreement by pointing to positive indicators about themselves, and negative indicators about those who disagree with them. If we could agree on the relative weight of these indicators, and act on them, then we wouldn’t actually disagree much. (Formally we wouldn’t foresee to disagree.)

But clearly we are severely biased in our estimates of these relative indicator weights, to favor ourselves. These estimates come to us quite intuitively, without needing much thought, and are typically quite confident, making us not very anxious about their errors. And we mostly seem to be quite sincere; we aren’t usually much aware that we might be favoring ourselves. Or if we are somewhat aware, we tend to feel especially confident that those others with whom we disagree are at least as biased as we. I see no easy introspective fix here.

The main way I know to deal with this problem is to give yourself much stronger incentives to be right: bet on it. As soon as you start to think about how much you’d be willing to bet, and at what odds, you’ll find yourself suddenly much more aware of the many ways you might be wrong. Yes, people who bet still disagree more than is accuracy-rational, but they are much closer to the ideal. And they get even closer as they start to lose bets and update their estimates re how good they are on what topics.


To Oppose Polarization, Tug Sideways

Just over 42% of the people in each party view the opposition as “downright evil.” … nearly one out of five Republicans and Democrats agree with the statement that their political adversaries “lack the traits to be considered fully human — they behave like animals.” … “Do you ever think: ‘we’d be better off as a country if large numbers of the opposing party in the public today just died’?” Some 20% of Democrats and 16% of Republicans do think [so]. … “What if the opposing party wins the 2020 presidential election. How much do you feel violence would be justified then?” 18.3% of Democrats and 13.8% of Republicans said [between] “a little” to “a lot.” (more)

Pundits keep lamenting our increasing political polarization. And their preferred fix seems to be to write more tsk-tsk op-eds. But I can suggest a stronger fix: pull policy ropes sideways. Let me explain.

Pundit writings typically recommend some policies relative to others. In polarized times such as ours, these policy positions tend to be relatively predictable given a pundit’s political value positions, i.e., the positions they share with their political allies relative to their political enemies. And much of the content of their writings works to clarify any remaining ambiguities, i.e., to explain why their policy position is in fact a natural result of political positions they share with their allies. So only people with evil values would oppose it. So readers can say “yay us, boo them”.

Twelve years ago I described this as a huge tug-o-war:

The policy world can be thought of as consisting of a few Tug-O-War “ropes” set up in [a] high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are “one of them,” you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it. (more)

To oppose this tendency, one idea is to encourage pundits to sometimes recommend policies that are surprising or the opposite of what their political positions might suggest. That is, go pull on the opposite side of a rope sometimes, to show us that you think for yourself, and aren’t driven only by political loyalty. And yes doing this may help. But as the space of political values that we fight over is multi-dimensional, surprising pundit positions can often be framed as a choice to prioritize some values over others, i.e., as a bid to realign the existing political coalitions in value space. Yes, this may weaken the existing dominant political axis, but it may not do much to make our overall conversation less political.

Instead, I suggest that we encourage pundits to grab a policy tug-o-war rope and pull it sideways. That is, take positions that are perpendicular to the usual political value axes, in areas where one has not yet taken explicit value-oriented positions. For example, a pundit who has not yet taken a position on whether we should have more or less military spending might argue for more navy relative to army, and then insist that this is not a covert way to push a larger or smaller military. Most credibly by continuing to not take a position on overall military spending. (And by not coming from a navy family, for whom navy is a key value.)

Similarly, someone with no position on if we should punish crime more or less than we currently do might argue for replacing jail-based punishments with fines, torture, or exile. Or, given no position on more or less immigration, argue for a particular new system to decide which candidates are more worthy of admission. Or given no position on how hard we should work to compensate for past racism, argue for cash reparations relative to affirmative action.

Tugging policy ropes sideways will frustrate and infuriate loyalists who seek mainly to praise their political allies and criticize their enemies. Such loyalists will be tempted to assume the worst about you, and claim that you are trying to covertly promote enemy positions. And so they may impose a price on you for this stance. But to the extent that observers respect you, loyalists will pay a price for attacking you in this way, raising their overall costs of making everything political. And so on average, by paying this price you can buy an overall intellectual conversation that’s a bit less political. Which is the goal here.

In addition, pulling ropes sideways is on average just a better way to improve policy. As I said twelve years ago:

If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy. On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled. (more)

Yes, there is a sense in which arguments for “sideways” choices do typically appeal to a shared value: “efficiency”. For example, one would typically argue for navy over army spending in terms of cost-effectiveness in military conflicts and deterrence. Or might argue for punishment via fines in terms of cost-effectiveness for the goals of deterrence or rehabilitation. But all else equal we all like cost-effectiveness; political coalitions rarely want to embrace blatant anti-efficiency positions. So the more our policy debates emphasize efficiency, the less politically polarized they should be.

Of course my suggestion here isn’t especially novel; most pundits are aware that they have the option to take the sort of sideways positions that I’ve recommended. Most are also aware that by doing so, they’d less inflame the usual political battles. Yet how often have you heard pundits protest that others falsely attributed larger value positions to them, when they really just tried to argue for cost-effectiveness of A over B using widely shared effectiveness concepts? That scenario seems quite rare to me.

So the main hope I can see here is of a new signaling equilibrium where people tug sideways and brag about it, or have others brag on their behalf, to show their support for cutting political polarization. And thereby gain support from an audience who wants to reward cutters. Which of course only works if enough pundits actually believe a substantial such audience exists. So what do you say, is there much of an audience who wants to cut political polarization?


Response to Weyl

To my surprise, thrice in his recent 80,000 Hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me as representing a view that he dislikes. Yet, in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan, because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g. here) for exploring new institution ideas, but by working our way up from smaller to larger scale trials, moving to larger scales only after we’ve seen success at smaller ones. Theory models are often among the smallest possible trials.

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view. Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to more strongly favor “anti-democratic” views on thinking eliteness.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully, this is just a case of a possible misreading of what Weyl said, and he didn’t intend to relate futarchy or myself to such views.

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.


Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, who could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way so that they are likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.
