Tag Archives: Disagreement

Heresy Helps You Think

The main social functions of school seem to be to help students show off their smarts, conformity, and conscientiousness. Schools also babysit, socialize, and indoctrinate. But in my experience the two stated functions that school fans tout most often are (A) teaching students particular useful facts and theories, and (B) teaching students how to think for themselves.

When teaching students to think for themselves, it is not enough to just assign thought-provoking essays or stories; students at some point must practice generating, supporting, and debating their opinions on particular example topics. And it doesn’t work to use topics with obvious agreed-on answers, like “is the sky blue?” No, to practice thinking for themselves, students need to engage topics where plausible arguments and evidence can be found on at least two sides.

One standard set of example topics is offered by philosophy, topics such as free will, determinism, infinity, solipsism, or nihilism. But these topics tend to be pretty far from the interests and experiences of most students. Students are much more easily and usefully engaged on topics that are currently considered “controversial” in their world. But most schools are quite reluctant to let their students debate most such topics. Why?

When people listen to a debate on a topic, their opinions consistently tend to move toward the middle of the range of possible opinions on that topic. Thus increasing public attention to a topic is a reliable way to influence public opinion on it. And thus the eagerness of authorities to allow student attention on a topic depends greatly on whether this predictable movement is or is not in their favored direction.

For example, fans of intelligent design push schools to “teach the controversy”, while its opponents want the topic ignored. Vaccine skeptics would love students to consider vaccine skepticism, while the usual elites would not. And progressive teachers happily encourage students to discuss progressive proposals currently unpopular with most citizens, such as race reparations, universal basic income, or a wealth tax. 

Thus schools that are responsive to parents, politicians, or academic elites mostly do not allow students to debate topics where such powers dislike movement toward middle positions, relative to status quo opinions. But most “controversial” topics are exactly of this form; some existing confident position, like “vaccines are safe”, is challenged by some contrarians, who win even if audiences only move to middle positions of uncertainty.

Thus while school fans claim that an important function of school is to help students learn to “think for themselves”, school authorities mostly won’t let students practice such thinking on the controversial topics most suitable for such practice. Me, I’d be happy to use public polls or votes to select the topics students are allowed to engage in public schools. But I expect that most public school authorities, including most teachers, would strongly oppose such a proposal.

Added 5Dec: Yes, there’s a decent case for the view that schools mostly select for skills, rather than actually improving them. Even so, thinking for yourself seems one of those skills that schools should be selecting for, even if they don’t improve them.


Thinkers Must Be Heretics

When we form opinions on topics, the depth of our efforts vary. On some topics we put in no effort, and hold no opinions. On other topics, we notice what are the opinions of standard authorities, and adopt those. We often go further to learn of some arguments offered by such authorities, and mostly accept those arguments.

Sometimes we feel contrarian and make up an opinion we know to be contrary to standard ones. Sometimes we instead seek out non-standard authorities that we more respect, and adopt their opinions and maybe also arguments. Contrarian authorities often explicitly mention and rebut the arguments of standard authorities, and sometimes we also learn and adopt those counter-arguments.

Sometimes we try to learn about many arguments on a topic from many sides, and then try to compare and evaluate them more directly, paying less attention to how much we respect their sources. Sometimes we generate our own arguments to add to this mix. Sometimes we do this alone, and sometimes in collaboration with close associates. Compared to the other approaches mentioned above, this last set of approaches can be described as more “thinking for ourselves”.

In general, arguments try to draw conclusions from widely accepted claims and assumptions. So to dig deeper, we can recurse, by taking the claims X used to support arguments on topic T, and treating some of those X as new topics to consider in this same way.

Our associates are interested in judging how well we think, and we are eager to impress them. And as all of these effort levels are appropriate in various practical cases, in principle our associates should want to judge our abilities at all of these different levels. However, we are more eager to demonstrate and judge abilities to do deeper thinking, as we tend to see deeper thinking as harder, as it is where our thinking skills matter more, and as it is the more usual practical task, given that authorities haven’t spoken to most of the practical issues we face.

Thus we all tend to present ourselves as thinking more deeply than we actually do. Not arbitrarily deeply, which isn’t believable. But maybe as deep as is plausible in a given case. So we tend to present ourselves, when possible, as “thinking for ourselves”.

Note that this thinking-for-yourself approach plausibly produces less accurate and reliable beliefs on each particular topic. Most people are usually less able to integrate info and arguments into an accurate total opinion than is the collective action of the usual authorities. Even so, showing off your abilities, and improving them via practice, often matters more to us than accuracy on each topic. We might be collectively better off due to us all doing more thinking, but this isn’t obvious.

We could of course get both accuracy and practice in thinking if we’d do our own analysis, but then adopt authority opinions even when those disagreed with our personal analysis. But we rarely do that, as we consider it “insincere” and “two-faced”.

Thinking-for-yourself, however, has a big problem on topics where there are orthodox opinions, opinions on which all good thinking people in some community are supposed to agree. The problem is that thinking for yourself is usually noisy and context-dependent. That is, the process of thinking for ourselves doesn’t consistently produce the same outputs given the same inputs. Many random factors regarding what arguments we notice, and how we frame or order our thoughts, often substantially influence our conclusions. And thus people who think for themselves must be expected to reach contrarian conclusions a substantial (~5-50%) fraction of the time.

Note that people who want to create the impression that they think for themselves, without putting in the effort of actually doing so, can just randomly adopt contrarian conclusions at roughly this rate. And this does seem to be the strategy of most ordinary people, who have quite high rates of variation in their opinions, and yet who don’t seem to think very deeply. Their opinions even vary widely across time, as they usually can’t recall the random opinions that they generated even a few months before.

However, this rate of variation is a much bigger problem for people whose opinions are more prominent. If someone publicly states their think-for-themselves conclusions on twenty orthodox-adjacent topics, they should expect an average of ~1-10 heresy-adjacent opinions in that set. Yet often a prominent enough person publicly seeming to endorse even a single heresy is enough to get them cancelled in a community. Such as losing their job, or any chance for advancement or entry into that community. What to do?
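The ~1-10 range above is just the mean of a binomial distribution; here is a quick sketch of the arithmetic, using the post’s own rough 5-50% contrarian-rate estimate (the rates and topic count are the post’s numbers, nothing more):

```python
# Expected number of heresy-adjacent opinions among 20 independent
# think-for-yourself conclusions, for contrarian rates of 5% and 50%.
# The mean of a Binomial(n, p) is simply n * p.

n_topics = 20

for rate in (0.05, 0.50):
    expected = n_topics * rate
    print(f"rate {rate:.0%}: expect ~{expected:.0f} of {n_topics} opinions to be heresies")
```

So even at the low end of the rate range, a prominent thinker voicing twenty such opinions should expect about one public heresy.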

One traditional solution has been for the usual authorities to present themselves as focused on particular topics associated with their positions of authority, and not thinking for themselves on most other topics. Especially re most orthodox topics. This was long the usual position of CEOs, for example. Another traditional solution was for scholars, who do often specialize as thinkers on topics at least adjacent to orthodox ones, to speak esoterically, i.e., evasively in public, and only frankly in private to other scholars.

In our society today, however, a great many people present themselves as

  1. relatively prominent and thus worth cancelling,
  2. largely thinking for themselves even on orthodox-adjacent topics,
  3. offering their opinions in public on many such topics, and yet
  4. none of these public opinions are heresies.

In fact they often express outrage when they encounter another such person expressing even a single heresy. But if they offer non-heresy opinions on twenty such topics, it is quite hard to believe that all those opinions are a random sample of their opinions generated by thinking for themselves; the natural rate of opinion variation due to thinking for yourself is just too high to produce such a result. Such people are probably being selective in what they say, or deceiving themselves into seeing themselves as thinking for themselves more than they actually do.

And thus we reach the thesis in my title: thinkers must be heretics. If you see people with many opinions none of which are heretical, this just can’t be a random sample of topics on which they are mostly thinking for themselves. And if you plan to manage a herd of deep thinkers in our world today, people who spend a lot of time showing off how well they can think for themselves, you either need to keep them away from orthodox-adjacent topics, or keep their discussions internal and private; don’t let them speak on such things in public. Or be securely insulated from cancellation, if that’s really possible.

Note that there might exist a minority of thinkers good enough that their think-for-themselves estimates are actually more accurate than the official opinions of the usual authorities. After all, existing institutions often allow entrenched powers to, for a time, resist switching to better estimates. In this case, we might coordinate to make such better estimates more visible, such as via prediction markets. But such entrenched powers have so far prevented this reform.

Note also that I’ve avoided listing particular heresies here, for fear of seeming to endorse them. Which suggests how strong social pressures regarding them may be.

Added 1Dec: Here I describe myself as a “think for myself polymath.”


Argument Selection Bias

One strategy to decide what to believe about X is to add up all the pro and con arguments that one is aware of regarding X, weighing each by its internal strength. Yes, it might not be obvious how to judge and combine arguments. But this strategy has a bigger problem; it risks a selection bias. What if the process that makes you aware of arguments has selected non-randomly from all the possible arguments?

One solution is to focus on very simple arguments. You might be able to exhaustively consider all arguments below some threshold of simplicity. However, here you still have to worry that simple arguments tend to favor a particular side of X. For example, if the question is “Is there some complex technical solution to simple problem X”, it may not work well to exclude all complex technical solution proposals.

We often see situations where far more effort seems to go into finding, honing, and publicizing pro-X arguments, relative to anti-X arguments. In this case the key question is what processes induced those asymmetric efforts. For example, as the left tends to dominate the high end of academia, very academic policy arguments strongly favor left policies. So the question is: what process induced such people to become left?

If new academics started out equally distributed on the left and right, and then searched among academic arguments, becoming more left only as they discovered mainly only left arguments in that space, then we wouldn’t have so much of a selection bias to worry about. However, if the initial distribution of academics leans heavily left for non-argument reasons, then there could be a big selection bias among very academic arguments, even if not perhaps among the arguments that induced people to become academics in the first place.

Often there are claims X where not only does most everyone support X, most everyone is also eager to repeat arguments favoring X, to identify and repudiate any who oppose X, and to ridicule their supporting arguments. In these cases, there is far less energy and effort available to find, hone, and express anti-X claims. For example, consider topics related to racism, sexism, pedophilia, inequality, IQ, genes, or the value of school and medicine. In these cases we should expect strong selection biases favoring X, and thus for weight-of-argument purposes we should adjust our opinions to less favor these X.

However, sometimes there are contrarian claims X where far more effort goes into finding, honing, and expressing arguments supporting X. Consider the claims of 911-truthers, for example. Here we should expect a bias against X among the simple arguments that most people would use to justify their dismissing X, but a bias favoring X among the more complex arguments that 911-truthers would find when studying the many details close to the issue.

What if a topic is local, of interest only to your immediate associates? In this case you should expect a bias favoring those who are more motivated to want others to believe X, and favoring those who are just generally better at finding, honing, and expressing arguments. Thus being known to be good at arguing should generally make one less effective at persuading associates.

In larger social worlds, however, where arguments can pass through many intermediaries, it won’t work as well to discount arguments by the abilities of their sources. In that case one will have to discount arguments based on overall features of the communities who favor and oppose X. Here those who are especially good at arguing will be especially tempted to join such discussions, as their audience is less able to apply personal discounts regarding their arguing abilities.

In all of these cases, we would ideally adjust our standards for discounting beliefs continuously, with the many parameters by which we estimate context-dependent selection biases. But we may sometimes instead feel constrained in our abilities to make such adjustments. Our lower level mental processes may just weigh up the arguments they hear without applying enough discounts.

In which case we might just want to limit our exposure to the sources that we expect to be unusually subject to favorable selection biases. This may sometimes justify common practices of sticking one’s head in the sand, and fingers in one’s ears, regarding suspect sources. And we might also reasonably show a “perverse” forbidden-fruit fascination with hearing arguments that favor forbidden views.


On Disagreement, Again

The usual party chat rule says to not spend too long on any one topic, but instead to flit among topics unpredictably. Many thinkers also seem to follow a rule where if they think about a topic and then write up an opinion, they are done and don’t need to ever revisit the topic again. In contrast, I have great patience for returning again and again to the most important topics, even if they seem crazy hard. And for spending a lot of time on each topic, even if I’m at a party.

A long while ago I spent years studying the rationality of disagreement, though I haven’t thought much about it lately. But rereading Yudkowsky’s Inadequate Equilibria recently inspires me to return to the topic. And I think I have a new take to report: unusually for me, I adopt a mixed intermediate position.

This topic forces one to try to choose between two opposing but persuasive sets of arguments. On the one side there is formal theory, to which I’ve contributed, which says that rational agents with different information and calculation strategies can’t have a common belief in, nor an ability to foresee, the sign of the difference in their opinions on any “random variable”. (That is, a parameter that can be different in each different state of the world.) For example, they can’t say “I expect your next estimate of the chance of rain here tomorrow to be higher than the estimate I just now told you.”

Yes, this requires that they’d have the same ignorant expectations given a common belief that they both knew nothing. (That is, the same “priors”.) And they must be listening to and taking seriously what the other says. But these seem reasonable assumptions.

An informal version of the argument asks you to imagine that you and someone similarly smart, thoughtful, and qualified each become aware that your independent thoughts and analyses on some question had come to substantially different conclusions. Yes, you might know things that they do not, but they may also know things that you do not. So as you discuss the topic and respond to each others’ arguments, you should expect to on average come to more similar opinions near some more intermediate conclusion. Neither has a good reason to prefer your initial analysis over the other’s.

Yes, maybe you will discover that you just have a lot more relevant info and analysis. But if they see that, they should then defer more to you, as you would if you learned that they are more expert than you. And if you realized that you were more at risk of being proud and stubborn, that should tell you to reconsider your position and become more open to their arguments.

According to this theory, if you actually end up with common knowledge of or an ability to foresee differences of opinion, then at least one of you must be failing to satisfy the theory assumptions. At least one of you is not listening enough to, and taking seriously enough, the opinions of the other. Someone is being stubbornly irrational.

Okay, perhaps you are both afflicted by pride, stubbornness, partisanship, and biases of various sorts. What then?

You may find it much easier to identify more biases in them than you can find in yourself. You might even be able to verify that you suffer less from each of the biases that you suspect in them. And that you are also better able to pass specific intelligence, rationality, and knowledge tests of which you are fond. Even so, isn’t that roughly what you should expect even if the two of you were similarly biased, but just in different ways? On what basis can you reasonably conclude that you are less biased, even if stubborn, and so should stick more to your guns?

A key test is: do you in fact reliably defer to most others who can pass more of your tests, and who seem even smarter and more knowledgeable than you? If not, maybe you should admit that you typically suffer from accuracy-compromising stubbornness and pride, and so for accuracy purposes should listen a lot more to others. Even if you are listening about the right amount for other purposes.

Note that in a world where many others have widely differing opinions, it is simply not possible to agree with them all. The best that could be expected from a rational agent is to not consistently disagree with some average across them all, some average with appropriate weights for knowledge, intelligence, stubbornness, rationality, etc. But even our best people seem to consistently violate this standard.

All that we’ve discussed so far has been regarding just one of the two opposing but persuasive sets of arguments I mentioned. The other argument set centers around some examples where disagreement seems pretty reasonable. For example, fifteen years ago I said to “disagree with suicide rock”. A rock painted with words to pretend it was a sentient creature listening carefully to your words, but offering no evidence that it actually listened, should be treated like a simple painted rock. In that case, you have strong evidence to down-weight its claims.

A second example involves sleep. While we are sleeping we don’t usually have an opinion on if we are sleeping, as that issue doesn’t occur to us. But if the subject does come up, we often mistakenly assume that we are awake. Yet a person who is actually awake can have high confidence in that fact; they can know that while a dreaming mind is seriously broken, their mind is not so broken.

An application to disagreement comes when my wife awakes in the night, hears me snoring, and tells me that I’m snoring and should turn my head. Responding half asleep, I often deny that I’m snoring, as I then don’t remember hearing myself snore recently, and I assume that I’d hear such a thing. In this case, if my wife is in fact awake, she can comfortably disagree with me. She can be pretty sure that she did hear me snore and that I’m just less reliable due to being only half awake.

Yudkowsky uses a third example, which I also find persuasive, but at which many of you will balk. That is the majority of people who say they have direct personal evidence for God or other supernatural powers. Evidence that’s mainly in their feelings and minds, or in subtle patterns in how their personal life outcomes are correlated with their prayers and sins. Even though most people claim to believe in God, and point to this sort of evidence, Yudkowsky and I think that we can pretty confidently say that this evidence just isn’t strong enough to support that conclusion. Just as we can similarly say that personal anecdotes are usually insufficient to support the usual confidence in the health value of modern medicine.

Sure, it’s hard to say with much confidence that there isn’t a huge smart power somewhere out there in the universe. And yes, if this power did more obvious stuff here on Earth back in the day, that might have left a trail of testimony and other evidence, to which advocates might point. But there’s just no way that either of those considerations can remotely support the usual level of widespread confidence in a God meddling in detail with their heads and lives.

The most straightforward explanation I can see here is social desirability bias, a bias that not only introduces predictable errors but also reduces one’s willingness to notice and correct such errors. By attributing their belief to “faith”, many of them do seem to acknowledge quite directly that their argument won’t stand up to the usual evaluation standards. They are instead believing because they want to believe. Because their social world rewards them for the “courage” and “affirmation” of such a belief.

And that pretty closely fits a social desirability bias. Their minds have turned off their rationality on this topic, and are not willing to consider the evidence I’d present, or the fact that the smartest most accomplished intellectuals today tend to be atheists. Much like the sleeper who just can’t or won’t see that their mind is broken and unable to notice that they are asleep.

In fact, it seems to me that this scenario matches a great many of the disagreements I’m willing to have with others. As I tend to be willing to consider hypotheses that others find distasteful or low status. Many people tell me that the pictures I paint in my two books are ugly, disrespectful, and demotivating, but far fewer offer any opposing concrete evidence. Even though most people seem able to notice the fact that social desirability would tend to make them less willing to consider such hypotheses, they just don’t want to go there.

Yes, there is an opposite problem: many people are especially attracted to socially undesirable hypotheses. A minority of folks see themselves as courageous “freethinkers” who by rights should be celebrated for their willingness to “think outside the box” and embrace a large fraction of the contrarian hypotheses that come their way. Alas, by being insufficiently picky about the contrarian stories they embrace, they encourage, not discourage, everyone else to embrace social desirability biases. On average, social desirability only causes modest biases in the social consensus, and thus only justifies modest disagreements from those who are especially rational. Going all in on a great many contrarian takes at once is a sign of an opposite problem.

Yes, the stance I’m taking implies that contrarian views, i.e., views that seem socially undesirable to embrace, are on average neglected, and thus more likely than the consensus is willing to acknowledge. But that is of course far from endorsing most of them with high confidence. For example, UFOs as aliens are indeed more likely than the usual prestigious consensus will admit, but could still be pretty unlikely. And assigning a somewhat higher chance to claims like that the moon landings were faked is not at all the same as endorsing such claims.

So here’s my new take on the rationality of disagreement. When you have a similar level of expertise to others, you can justify disagreeing with an apparent social consensus only if you can identify a particularly strong way that the minds of most of those who think about the topic tend to get broken by the topic. Such as due to being asleep or suffering from a strong social desirability bias. (A few weak clues won’t do.)

I see this position as mildly supported by polls showing that people think that those in certain emotional states are less likely to be accurate in the context of a disagreement; different emotions plausibly trigger different degrees of willingness to be fair or rational. (Here are some other poll results on what people think predicts who is right in a disagreement.)

But beware of going too wild embracing most socially undesirable views. And you can’t just in general presume that others disagree with each of your many positions due to their minds being broken in some way that you can’t yet see. That way lies unjustified arrogance. You instead want specific concrete evidence of strongly broken minds.

Imagine that you specialize in a topic so much that you know nearly as much as the person in the world who knows the most, but do not have the sort of credentials or ways to prove your views that the world would easily accept. And this is not the sort of topic where insight can be quickly and easily translated into big wins, wins in either money or status. So if others had come to your conclusions before, they would not have gained much personally, nor found easy ways to persuade many others.

In this sort of case, I think you should feel more free to disagree. Though you should respect base rates, and try to test your views as fast and strongly as possible. As the world is just not listening to you, you can’t expect them yet to credit what you know. Just also don’t expect the world to reward you or pay you much attention, even if you are right.


Discussion Contests

My last post outlined how to make a better “sport” wherein people compete on, and are ranked by, their ability to persuade audiences of claims. Which might be a nice way to find/make sales-folk.

But what I’d really like is to find/make people good at informative discussion. That is, we the audience want to listen to people who are good at taking the floor of our attention and talking so as to more rapidly move our estimates toward higher-confidence values. And we want this more for the case where we are a reasonable rational audience, relative to our being easily swayed by demagoguery. We want to listen to people who will more rapidly change our reasonable minds.

Here’s an idea using betting markets. Imagine a topic for which we will later have some ex post objective measure of truth. We can thus create (possibly subsidized) betting markets over this space of outcomes. Also imagine having some info weights regarding different possible probability distributions over outcomes. Using these weights, we can create a single number saying how informative any given set of prices is. Thus we can say how much info was added (or subtracted) to those prices during any given time period.

So if we have a center of attention “stage” wherein one speaker talks at a time, and if the audience participates in a betting market while they listen, then we can get a measure of the info added by each speaker while they spoke. So we can score each speaker on their info given per second of talking.
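One way to operationalize such a per-speaker info score is sketched below, using the KL divergence of market prices from a prior as the “single number”. This particular choice of info weights is my assumption, not something the post specifies, and all the example numbers are hypothetical:

```python
from math import log

def info_score(prices, prior):
    """KL divergence of market prices from a prior over outcomes.
    Higher means the prices carry more info relative to the prior.
    (One possible info-weight choice; the post leaves this open.)"""
    return sum(p * log(p / q) for p, q in zip(prices, prior) if p > 0)

def speaker_credit(prices_before, prices_after, prior):
    """Info added (or subtracted) to the market while a speaker held the stage."""
    return info_score(prices_after, prior) - info_score(prices_before, prior)

# Hypothetical example: three outcomes, uniform prior.
prior = [1/3, 1/3, 1/3]
before = [0.4, 0.4, 0.2]   # prices when the speaker takes the stage
after = [0.7, 0.2, 0.1]    # prices when they yield it
print(speaker_credit(before, after, prior))  # positive: the speaker added info
```

Dividing that credit by speaking time then gives the info-per-second score described above. A speaker who merely muddies the water, moving prices back toward the prior, would receive a negative credit under this measure.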

Okay, yes, there may be a delay between when a speaker says something and when a listener comes to realize its implications and then makes a resulting market trade. This is a reason to have speakers talk for longer durations, so that their score over this duration can include this delayed realization effect.

Now one way to use this is debate style. Give each speaker the same amount of total time, in the same-length time blocks, and see which one added the most info by the end. Repeat in many pairwise contests. But another approach is to instead just pay to try to get the most info out of any given set of potential speakers.

Imagine an auction for each short period of speaking. If you bid the most per second, you get to the center stage to talk, and then you will be paid in proportion to the info you end up contributing, according to market price changes. Speakers could bid on themselves, or investors might pay for speaker bids. (Let speakers bid for future time periods long enough to include the delayed realization effect.)

Even if there were other sources of info possible, besides this center stage, this auction would still give a credible reason for most of the audience to pay some attention to the center stage. After all, the auction would have selected for the one person expected to be most worth listening to, at least on average.

So now, to induce an informative discussion on a topic, one both subsidizes prediction markets on that topic, and commits to pay each person who wins an auction to speak from a center stage a reward proportional to the info added to those prediction markets while they speak.

What if different time periods are expected to add different amounts of info to the market prices through channels other than the center stage speaker? This could bias the debate structure, but isn’t a problem for the auction structure. Auction bidders would bid more for those extra info time periods, but the winner would still be the speaker expected to add the most info.
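The auction-and-payout loop can be sketched as follows; the bids, the payout constant, and the function names are all illustrative assumptions, not part of any worked-out market design:

```python
def run_period(bids, realized_info, payout_per_info):
    """One speaking period: the highest bidder wins the stage, pays their
    bid, and is later paid in proportion to the info that market prices
    gained while (and shortly after) they spoke."""
    winner = max(bids, key=bids.get)           # highest bid per period wins
    payment = payout_per_info * realized_info  # paid once prices settle
    profit = payment - bids[winner]
    return winner, profit

bids = {"alice": 3.0, "bob": 5.0}  # hypothetical bids for one period
winner, profit = run_period(bids, realized_info=2.0, payout_per_info=4.0)
print(winner, profit)  # bob wins; profit = 4*2 - 5 = 3.0
```

Note how the extra-info worry above dissolves here: if some period is expected to gain info through other channels, bidders simply bid that expectation away, and the stage still goes to whoever expects to add the most beyond it.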

This should be pretty easy to test in lab experiments. Who wants to help set them up?


New Sport of Debate?

Someone recently told me “Hey, you seem good at debate.” Which made me think “Yeah, the world needs more debate. Let’s design a better online debate forum.” Here’s an initial concept sketch.

Audience – These are people allowed to propose and rate debate claims, to propose matches, and to rate performance in them. Each declares their acceptable languages and formats (e.g., text, audio, video). Maybe want to ensure each human can only vote once per issue. To rate a debate, maybe they need to show that they heard the debate live.

Claim – A list of possible claims to debate. Are some topics off limits? Do editors curate the list to edit wordings and cut redundancies?

Debaters – People who have volunteered to debate particular claims. Each one can say which sides (pro or con) of which claims they would defend, in what languages and formats, at what day/times, and who they refuse to debate. (Can “math heavy” or “stat heavy” be languages?)

Debates – Two (or four?) participants publicly debate a given claim online at a given pre-announced time, in a given language and format, with some way to allocate speaking time roughly equally between participants. (Maybe Equatalk?) Some rule decides if debate is cancelled or postponed due to no-shows or health/tech/etc. issues.

Civility – Some process rules, e.g., whether debaters can hurl insults, or introduce links for the audience to check.

Opinions – Each audience member at a debate gives degree(s?) of support for the claim just before and just after the debate. Maybe state opinions before they know debate participants?

Matching – A process (algorithm?) to pick who debates whom when on what claim in what language, based on the claims that debaters have selected, debater ranks, popularity of claims and matches, and audience participation rates. Maybe do this to max predicted future debate audiences, or info to adjust rankings, or info that changes opinions.

Ranking – A process (algorithm?) to rank value (plus uncertainty?) of each debater, relative to others, based on no-show rates and the opinions expressed at their debates. Maybe opinions of higher ranked debaters count more. Maybe more debates, or being willing to debate more claims, counts more. Ideally the ranking rule is simple, public, and robust to criticism.
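The Ranking process could start from something as simple as an Elo-style update driven by audience opinion shifts. A minimal sketch, with the score mapping and K-factor as my own illustrative assumptions:

```python
def elo_update(r_pro, r_con, shift, k=32.0):
    """One possible ranking rule: score each debate like a chess game.

    r_pro, r_con: current ratings of the pro and con debaters.
    shift: average audience movement toward the pro side, in [-1, 1]
    (mean support after the debate minus mean support before, with
    support measured on a 0-1 scale)."""
    score_pro = 0.5 + shift / 2.0  # map the opinion shift to a game score in [0, 1]
    expected_pro = 1.0 / (1.0 + 10 ** ((r_con - r_pro) / 400.0))
    delta = k * (score_pro - expected_pro)
    return r_pro + delta, r_con - delta

# Equal-rated debaters, audience moves 0.3 toward pro: pro gains rating.
new_pro, new_con = elo_update(1500.0, 1500.0, 0.3)
```

Treating the average opinion shift as a "game score" keeps the rule simple, public, and robust to criticism; whether opinions of higher ranked debaters should count more is then a separate weighting choice layered on top.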

Seems the next step here is to propose, critique, and choose more specific rules. Then someone can write or adapt software.

I see big gains from such a forum becoming popular. A good debate forum could become an alternate credentialing framework, to show that some people are good at real debate. (Not like those fake high school debates.) Maybe some new kinds of schools would form to teach people how to do well in such debates.

A related forum might rate participants more in terms of how well they “discuss” claims, and less in terms of persuading an audience toward some pre-defined conclusion. Maybe rate each on how much they moved audience members in any direction, as a proxy for being informative? The big question there seems to me: how can we do that rating, and who gets more weight in such ratings?



Best Case Contrarians

Consider opinions distributed over a continuous parameter, like the chance of rain tomorrow. Averaging over many topics, accuracy is highest at the median, and falls away for other percentile ranks. This is bad news for contrarians, who sit at extreme percentile ranks. If you want to think you are right as a contrarian, you have to think your case is an exception to this overall pattern, due to some unusual feature of you or your situation. A feature that suggests you know more than them.
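The claim that accuracy peaks at the median percentile rank is easy to check in a toy simulation. In the sketch below, the noise model and parameters are my own assumptions, purely for illustration: each person's opinion is the truth plus independent noise, and we compare average errors across percentile ranks.

```python
import random

random.seed(0)

def percentile_error(rank_frac, n_people=25, n_topics=2000):
    """Average absolute error of the opinion at a given percentile rank,
    across many topics where each opinion is truth plus independent
    noise. rank_frac=0.5 picks the median opinion."""
    total = 0.0
    for _ in range(n_topics):
        truth = random.random()  # e.g., the true chance of rain
        opinions = sorted(truth + random.gauss(0, 0.2) for _ in range(n_people))
        idx = round(rank_frac * (n_people - 1))
        total += abs(opinions[idx] - truth)
    return total / n_topics

median_err = percentile_error(0.5)    # middle of the opinion range
extreme_err = percentile_error(0.96)  # a contrarian percentile rank
```

With these parameters the near-extreme rank's average error comes out several times the median's. Of course real opinions are not just independent noise around the truth, and that gap between model and reality is exactly what a contrarian must claim as their exception.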

Yet I am often tempted to hold contrarian opinions. In this post I want to describe the best case for being a contrarian. I’m not saying that most contrarians are actually in this best case. I’m saying that this is the case you most want to be in as a contrarian, as it can most justify your position.

I recently posted on how innovation is highest for more fragmented species, as species so often go wrong via conformity traps. For example, peacocks are now going wrong together with overly long tails. To win their local competitions, each peacock needs to have and pick the tails that are sexy to other peacocks, even if that makes them all more vulnerable to predators.

Salmon go wrong by having to swim up hard hazard-filled rivers to get to their mating groups. Only a third of them survive to return from that trip. Now imagine a salmon sitting in the ocean at the mouth of the river, saying to the other salmon:

We are suffering from a conformity trap here. I’m gonna stay and mate here, instead of going up river. If you stay here and mate with me, then we can avoid all those river hazards. We’ll survive, with more energy to help our kids, and win out over the others. Who’s with me?

Now salmon listening to this should wonder if genetic losers are especially likely to make such contrarian speeches. After all, they are the least likely to survive the river, and so the most desperate to avoid it. For all its harms, the river does function to sort out the salmon with the best genes. If you make it to the end, you know your mating partner will also be unusually fit.

So yes, those less likely to pass the river test are more likely to become salmon contrarians. But they aren’t the only ones. Also more likely are:
A) those who can better sort good from bad mates in other ways,
B) those who can better see the conformity traps, and see they are especially big,
C) those who can better see which are the best places to start alternatives to the conformity traps, and
D) those who happen to have invested less in, and thus are less tied to, existing traps. Like the young.

Our world suffers from myriad conformity traps. Like investors who must coordinate with other investors (e.g., via the different levels of venture capital) may feel they must do crypto, as that’s what the others are doing. Even if they don’t think that much of crypto. Like academics in fields that use too much math feel they also need to do too much math if they are to be respected there. Like journalists and think tank pundits feel they must write on the topics on which everyone else is talking, even if other topics are more important.

In all of these cases, it can make sense to try to initiate a contrarian alternative. If many others know about the existing conformity traps, they may also be looking for a chance to escape. The questions are then: when is the right time and place to initiate a contrarian move to escape such a trap? Who is best placed to initiate, and how? And, what is the ratio of the gains of success to the costs of failure?

In situations like this, the people who actually try contrarian initiatives may not be at all wrong on their estimates about the truth. They will be different in some ways yes, but not necessarily overall on truth accuracy. In fact, they are likely to be more informed on average in the sense of being better able to judge the overall conformity trap situation, and to evaluate partners in unusual ways.

That is, they can better judge how bad is the overall conformity trap, where are promising alternatives, and who are promising partners. Even if, yes, they are also probably worse on average at winning within the usual conformity-trapped system. Compared to others, contrarians are on average better at being contrarians, and worse at being conformists. Duh.

And that’s the best case for being a contrarian. Not so much because you are just better able to see truth in general. But because you are likely better in particular at seeing when it is time to bail on a collective that is all going wrong together. If the gains from success are high relative to the costs of failure, then most such bids should fail, making the contrarian bid “wrong” most of the time. But not making most bids themselves into mistakes.


Three Types of General Thinkers

Ours is an era of rising ideological fervor, moving toward something like the Chinese cultural revolution, with elements of both religious revival and witch hunt repression. While good things may come of this, we risk exaggeration races, wherein people try to outdo each other to show loyalty via ever more extreme and implausible claims, policies, and witch indicators.

One robust check on such exaggeration races could be a healthy community of intellectual generalists. Smart thoughtful people who are widely respected on many topics, who can clearly see the exaggerations, see that others of their calibre also see them, and who crave such associates’ respect enough to then call out those exaggerations. Like the child who said the emperor wore no clothes.

So are our generalists up to this challenge? As such communities matter to us for this and many other reasons, let us consider more who they are and how they are organized. I see three kinds of intellectual generalists: philosophers, polymaths, and public intellectuals.

Public intellectuals seem easiest to analyze. Compared to other intellectuals, these mix with and are selected more by a wider public and a wider world of elites, and thus pander more to such groups. They less use specialized intellectual tools or language, their arguments are shorter and simpler, they impress more via status, eloquent language, and cultural references, and they must speak primarily to the topics currently in public talk fashion.

Professional philosophers, in contrast, focus more on pleasing each other than a wider world. Compared to public intellectuals, they are more willing to use specialized language for particular topics, to develop intricate arguments, and to participate in back and forth debates. As the habits and tools that they learn can be applied to a pretty wide range of topics, philosophers are in that sense generalists.

But philosophers are also very tied to their particular history. More so than in other disciplines, particular historical philosophers are revered as heroes and models. Frequent readings and discussions of their classic texts push philosophers to try to retain their words, concepts, positions, arguments, and analysis styles.

As I use the term, polymaths are intellectuals who meet the usual qualifications to be seen as expert in many different intellectual disciplines. For example, they may publish in discipline-specific venues for many disciplines. More points for a wider range of disciplines, and for intellectual projects that combine expertise from multiple disciplines. Learning and integrating many diverse disciplines can force them to generalize from discipline specific insights.

Such polymaths tend less to write off topics as beyond the scope of their expertise. But they also just write less about everything, as our society offers far fewer homes to polymaths than to philosophers or public intellectuals. They must mostly survive on the edge of particular disciplines, or as unusually-expert public intellectuals.

If the disciplines that specialize in thinking about X tend to have the best tools and analysis styles for thinking about X, then we should prefer to support and listen to polymaths, compared to other types of generalist intellectuals. But until we manage to fund them better, they are rarely available to hear from.

Public intellectuals have the big advantage that they can better get the larger world to listen to their advice. And while philosophers suffer their historical baggage, they have the big advantage of stable funding and freedoms to think about non-fashionable topics, to consider complex arguments, and to pander less to the public or elites.

Aside from more support for polymaths, I’d prefer public intellectuals to focus more on impressing each other, instead of wider publics or elites. And I’d rather they tried to impress each other more with arguments, than with their eliteness and culture references. As for philosophers, I’d rather that they paid less homage to their heritage, and instead more adopted the intellectual styles and habits that are now common across most other disciplines. The way polymaths do. I don’t want to cut all differences, but some cuts seem wise.

As to whether any of these groups will effectively call out the exaggerations of the coming era of ideological fervor, I alas have grave doubts.

I wrote this post as my Christmas present to Tyler Cowen; this topic was the closest I could manage to the topic he requested.


Argument Foreplay

The most prestigious articles in popular media tend to argue for a (value-adjacent) claim. And such articles tend to be long. Even so, most can’t be bothered to define their terms carefully, or to identify and respond to the main plausible counter-arguments to their argument. Such articles are instead filled with anecdotes, literary allusions, and the author’s history of thoughts on the subject. A similar thing happens even in many academic philosophy papers; they leave little space for their main positive argument, which is then short and weakly defended.

Consider also that while a pastor usually considers his or her sermon to be the “meat” of their service, that sermon takes a minority of the time, and is preceded by a great many other rituals, such as singing. And internally such sermons are usually structured like those prestigious media articles. The main argument is preceded by many not-logically-necessary points, leaving little time to address ambiguities or counter-arguments.

And consider sexual foreplay. Even people in a state where they are pretty excited, attracted, and willing are often put off by a partner pushing for too direct or rapid a transition to the actual sex act. They instead want a gradual series of increasingly intense and close interactions, which allow each party to verify that the other party has similar feelings and intentions.

In meals, we don’t want to get straight to a “main dish”, but prefer instead a series of dishes of increasing intensity. The main performers in concerts and political rallies are often preceded by opening acts. Movies in theaters used to be preceded by news and short films, and today are preceded by previews. Conversations often make use of starters and icebreakers; practical conversations are supposed to be preceded by small-talk. And revolutions may be preceded by increasingly dramatic riots and demonstrations.

What is going on here? Randall Collins’ book Interaction Ritual Chains explained this all for me. We humans often want to sync our actions and attention, to assure each other that we feel and think the same. And also that our partners are sufficiently skilled and impressive at this process.
The more important is this assurance, the more we make sure to sync, and the more intensely and intricately we sync. And where shared values and attitudes are important to us, we make sure that those are strongly salient and relevant to our synced actions.

Regarding media articles and sermons, a direct if perhaps surprising implication of all this is that most of us are often not very open to hearing and being persuaded by arguments until speakers show us that they sufficiently share our values, and are sufficiently impressive in this performance. So getting straight to the argument point (as I often do) is often seen as rude and offensive, like a would-be seducer going straight to “can I put it in.”

The lack of attention to argument precision and to counter-arguments bothers them less, as they are relatively willing to accept a claim just on the basis of the impressiveness and shared values of the speaker. Yes, they want to be given at least one supporting argument, in case they need to justify their new position to challengers. But the main goal is to share beliefs with impressive value allies.


Status Explains Lots

Some complain that I try to explain too much of human behavior via signaling. But the social brain hypothesis and common observations suggest that we quite often do things with an eye to how they will make us look to others.

Here’s another big influence on human behavior strongly supported by both theory and common sense: status. While it seems obvious that dominance and prestige matter greatly in human behavior, even so it seems to me that we social scientists neglect them, just as we neglect signaling. In this post, I will try to support this claim.

Humans have only domesticated a tiny fraction of animal species, even smart primates. In fact, apes seem plenty smart and dexterous enough to support a real Planet of the Apes scenario, wherein apes do many useful jobs. The main problem is that apes see our giving them orders as an attempt to dominate them, which they sometimes fiercely resist.

And humans are if anything more sensitive to domination than are other primates. After all, while other primates had visible accepted dominance hierarchies, human foragers created “reverse dominance hierarchies” wherein the whole band (of ~20-50) coordinated to take down anyone who would try to overtly dominate them. Which both makes it plausible that dominance matters a lot to humans, and also raises the question of how it is that we’ve come to accept so much of it.

Farmers accepted more domination than did foragers; farmers had kings, classes, wealth inequality, slavery, and generals in war. But most farmers didn’t actually spend much time being directly dominated. War wasn’t the usual condition, most workers had no bosses, and most of their interactions were with people at their same level.

But in the modern world, most workers put up with far more than would most foragers or farmers. Our performance is frequently evaluated, we are ranked in great detail compared to many others around us, and we are given many detailed orders, and not just during an apprenticeship period. All of which allows our complex modern organizations and social interactions, the key to industrial-era wealth, but which raises the key question: how did we get Dom-averse humans to accept all this?

Bosses: It might seem odd to ask what bosses are for, as they have so many plausible functions to perform in orgs. Yet to explain many details, such as the kinds of people we pick for management, and the ways they spend their time, we must still ask which of these functions are the most important. And my guess is that one of the most important is to give workers excuses to obey them.

Here’s the simple story: we often have a choice about whether to frame an interaction as due to dominance or prestige. Humans are supposed to hate dominance, but to love prestige. So if we can frame our boss as prestigious, not dominant, we can tell ourselves and others that we are following their lead out of admiration and wanting to learn from them, not from fear of being fired. If so, firms will want to spend extra on hiring prestigious bosses, who are handsome, articulate, tall, well-educated, pro-social, smooth, etc., even if those features don’t that much improve management decisions. Which does in fact seem to be the case.

School: I’ve discussed several times my story that schools use prestige to train people to take orders:

When firms and managers from rich places try to transplant rich practices to poor places, giving poor place workers exactly the same equipment, materials, procedures, etc., one of the main things that goes wrong is that poor place workers just refuse to do what they are told. They won’t show up for work reliably on time, have many problematic superstitions, hate direct orders, won’t accept tasks and roles that deviate from their non-work relative status with co-workers, and won’t accept being told to do tasks differently than they had done them before, especially when new ways seem harder. … How did the industrial era get at least some workers to accept more domination, inequality, and ambiguity, and why hasn’t that worked equally well everywhere? … prestigious schools. … if humans hate industrial workplace practices when they see them as bosses dominating, but love to copy the practices of prestigious folks, an obvious solution is to habituate kids into modern workplace practices in contexts that look more like the latter than the former. … while early jobs threaten to trip the triggers that make most animals run from domination, schools try to frame a similar habit practice in more acceptable terms, as more like copying prestigious people. … Start with prestigious teachers [teaching prestigious topics]. … Have students take several classes at a time, so they have no single “boss” … Make class attendance optional, and let students pick their classes.… give … complex assignments with new ambiguous instructions,… lots of students per teacher, … to create social proof that other students accept all of this. Frequently and publicly rank student performance, using the excuse of helping students to learn.

In two recent twitter polls, I found a 7-2 ratio saying college teachers were more impressive/prestigious than one’s job supervisor then, and a 2-1 ratio for high school teachers. Many descriptions of teaching describe the impressiveness and status of teachers as central to the teaching process.

Governance: we are even more sensitive to dominance in our political leaders than in our workplace bosses. Which was why, all through history, each place tended to think they had a noble king, while neighbors had despicable tyrants. And why prestige was so important for kings. In the last few centuries we upped the ante via democracy, a supposedly prestigious mechanism wherein we pretend that all of us are really “ultimately” in control of the government, allowing us to claim that we are not being dominated by our leaders.

The main emotional drive toward socialism, regulation of business, and redistribution from the rich seems to me to be resentment of domination, which is how most people frame the fact that some have more money than others. Our ability to use democracy to frame government as prestige not domination lets us not see government agencies who regulate and redistribute as domination. Furthermore, aversion to dominance by foreigners is the main cause of world poverty today:

Most nations today would be richer if they had long ago just submitted wholesale to a rich nation, allowing that rich nation to change their laws, customs, etc., and just do everything their way. But this idea greatly offends national and cultural pride. So nations stay poor.

Disagreement: I spent many years studying the topic of rational disagreement, and I’m now confident both that rational agents who mainly wanted accurate beliefs would not knowingly disagree, and that humans often knowingly disagree. Which implies that humans have some higher priorities than accuracy. And the strongest of these priorities seems to me to be to avoid domination. People often interpret being persuaded to move toward someone else’s position as being dominated by them. Which is why leaders so often ignore good advice given publicly by rivals. Pride is one of our main obstacles to rationality; it is the main reason we disagree. Prediction markets are able to induce an accurate consensus even in the presence of such pride, but pride prevents such markets from being allowed or adopted.

Mating: Dominance and submission seem central to mating; relationships are often broken off due to one party being either too dominant, or not dominant enough. See also clear evidence in BDSM:

~30% of participants in BDSM activities are females. … 89% of heterosexual females who are active in BDSM [prefer] the submissive-recipient role … [&] a dominant male, … 71% of heterosexual males preferred a dominant-initiator role … 19.2% of men and 27.8% of women express a desire to attempt in masochistic behavior

So in this post I’ve outlined how status is central to bosses, school, governance, disagreement, and mating, more central than you might have realized. Status really does explain lots.
