On Disagreement, Again

The usual party chat rule says to not spend too long on any one topic, but instead to flit among topics unpredictably. Many thinkers also seem to follow a rule where if they think about a topic and then write up an opinion, they are done and don’t need to ever revisit the topic again. In contrast, I have great patience for returning again and again to the most important topics, even if they seem crazy hard. And for spending a lot of time on each topic, even if I’m at a party.

A long while ago I spent years studying the rationality of disagreement, though I haven’t thought much about it lately. But rereading Yudkowsky’s Inadequate Equilibria recently inspires me to return to the topic. And I think I have a new take to report: unusually for me, I adopt a mixed intermediate position.

This topic forces one to try to choose between two opposing but persuasive sets of arguments. On the one side there is formal theory, to which I’ve contributed, which says that rational agents with different information and calculation strategies can’t have a common belief in, nor an ability to foresee, the sign of the difference in their opinions on any “random variable”. (That is, a parameter that can be different in each different state of the world.) For example, they can’t say “I expect your next estimate of the chance of rain here tomorrow to be higher than the estimate I just now told you.”

Yes, this requires that they’d have the same ignorant expectations given a common belief that they both knew nothing. (That is, the same “priors”.) And they must be listening to and taking seriously what the other says. But these seem reasonable assumptions.
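
For readers who want the formal core, here is a compact statement in the style of Aumann’s 1976 agreement theorem; the notation is my illustrative addition, not from the post itself. Suppose agents 1 and 2 share a common prior $P$ and condition on private information $\mathcal{I}_1$ and $\mathcal{I}_2$, so that their posteriors for an event $A$ are

$$q_i = P(A \mid \mathcal{I}_i), \qquad i = 1, 2.$$

If $q_1$ and $q_2$ are common knowledge between them, then $q_1 = q_2$. The random-variable version says that for any random variable $X$, they cannot have common knowledge that, say, $E[X \mid \mathcal{I}_1] > E[X \mid \mathcal{I}_2]$; this is the “can’t foresee the sign of the difference” claim above.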

An informal version of the argument asks you to imagine that you and someone similarly smart, thoughtful, and qualified each become aware that your independent thoughts and analyses on some question had come to substantially different conclusions. Yes, you might know things that they do not, but they may also know things that you do not. So as you discuss the topic and respond to each others’ arguments, you should expect to on average come to more similar opinions near some more intermediate conclusion. Neither of you has a good reason to prefer your own initial analysis over the other’s.

Yes, maybe you will discover that you just have a lot more relevant info and analysis. But if they see that, they should then defer more to you, as you would if you learned that they are more expert than you. And if you realized that you were more at risk of being proud and stubborn, that should tell you to reconsider your position and become more open to their arguments.

According to this theory, if you actually end up with common knowledge of or an ability to foresee differences of opinion, then at least one of you must be failing to satisfy the theory assumptions. At least one of you is not listening enough to, and taking seriously enough, the opinions of the other. Someone is being stubbornly irrational.

Okay, perhaps you are both afflicted by pride, stubbornness, partisanship, and biases of various sorts. What then?

You may find it much easier to identify more biases in them than you can find in yourself. You might even be able to verify that you suffer less from each of the biases that you suspect in them. And that you are also better able to pass specific intelligence, rationality, and knowledge tests of which you are fond. Even so, isn’t that roughly what you should expect even if the two of you were similarly biased, but just in different ways? On what basis can you reasonably conclude that you are less biased, even if stubborn, and so should stick more to your guns?

A key test is: do you in fact reliably defer to most others who can pass more of your tests, and who seem even smarter and more knowledgeable than you? If not, maybe you should admit that you typically suffer from accuracy-compromising stubbornness and pride, and so for accuracy purposes should listen a lot more to others. Even if you are listening about the right amount for other purposes.

Note that in a world where many others have widely differing opinions, it is simply not possible to agree with them all. The best that could be expected from a rational agent is to not consistently disagree with some average across them all, some average with appropriate weights for knowledge, intelligence, stubbornness, rationality, etc. But even our best people seem to consistently violate this standard.

All that we’ve discussed so far has been regarding just one of the two opposing but persuasive sets of arguments I mentioned. The other argument set centers around some examples where disagreement seems pretty reasonable. For example, fifteen years ago I said to “disagree with suicide rock”. A rock painted with words to pretend it was a sentient creature listening carefully to your words, but offering no evidence that it actually listened, should be treated like a simple painted rock. In that case, you have strong evidence to down-weight its claims.

A second example involves sleep. While we are sleeping we don’t usually have an opinion on if we are sleeping, as that issue doesn’t occur to us. But if the subject does come up, we often mistakenly assume that we are awake. Yet a person who is actually awake can have high confidence in that fact; they can know that while a dreaming mind is seriously broken, their mind is not so broken.

An application to disagreement comes when my wife awakes in the night, hears me snoring, and tells me that I’m snoring and should turn my head. Responding half asleep, I often deny that I’m snoring, as I then don’t remember hearing myself snore recently, and I assume that I’d hear such a thing. In this case, if my wife is in fact awake, she can comfortably disagree with me. She can be pretty sure that she did hear me snore and that I’m just less reliable due to being only half awake.

Yudkowsky uses a third example, which I also find persuasive, but at which many of you will balk. That is the majority of people who say they have direct personal evidence for God or other supernatural powers. Evidence that’s mainly in their feelings and minds, or in subtle patterns in how their personal life outcomes are correlated with their prayers and sins. Even though most people claim to believe in God, and point to this sort of evidence, Yudkowsky and I think that we can pretty confidently say that this evidence just isn’t strong enough to support that conclusion. Just as we can similarly say that personal anecdotes are usually insufficient to support the usual confidence in the health value of modern medicine.

Sure, it’s hard to say with much confidence that there isn’t a huge smart power somewhere out there in the universe. And yes, if this power did more obvious stuff here on Earth back in the day, that might have left a trail of testimony and other evidence, to which advocates might point. But there’s just no way that either of those considerations can remotely support the usual level of widespread confidence in a God meddling in detail with their heads and lives.

The most straightforward explanation I can see here is social desirability bias, a bias that not only introduces predictable errors but also undermines one’s willingness to notice and correct such errors. By attributing their belief to “faith”, many of them do seem to acknowledge quite directly that their argument won’t stand up to the usual evaluation standards. They are instead believing because they want to believe. Because their social world rewards them for the “courage” and “affirmation” of such a belief.

And that pretty closely fits a social desirability bias. Their minds have turned off their rationality on this topic, and are not willing to consider the evidence I’d present, or the fact that the smartest most accomplished intellectuals today tend to be atheists. Much like the sleeper who just can’t or won’t see that their mind is broken and unable to notice that they are asleep.

In fact, it seems to me that this scenario matches a great many of the disagreements I’m willing to have with others. As I tend to be willing to consider hypotheses that others find distasteful or low status. Many people tell me that the pictures I paint in my two books are ugly, disrespectful, and demotivating, but far fewer offer any opposing concrete evidence. Even though most people seem able to notice the fact that social desirability would tend to make them less willing to consider such hypotheses, they just don’t want to go there.

Yes, there is an opposite problem: many people are especially attracted to socially undesirable hypotheses. A minority of folks see themselves as courageous “freethinkers” who by rights should be celebrated for their willingness to “think outside the box” and embrace a large fraction of the contrarian hypotheses that come their way. Alas, by being insufficiently picky about the contrarian stories they embrace, they encourage, not discourage, everyone else to embrace social desirability biases. On average, social desirability only causes modest biases in the social consensus, and thus only justifies modest disagreements from those who are especially rational. Going all in on a great many contrarian takes at once is a sign of an opposite problem.

Yes, the stance I’m taking implies that contrarian views, i.e., views that seem socially undesirable to embrace, are on average neglected, and thus more likely than the consensus is willing to acknowledge. But that is of course far from endorsing most of them with high confidence. For example, UFOs as aliens are indeed more likely than the usual prestigious consensus will admit, but could still be pretty unlikely. And assigning a somewhat higher chance to claims like that the moon landings were faked is not at all the same as endorsing such claims.

So here’s my new take on the rationality of disagreement. When you have a similar level of expertise to others, you can justify disagreeing with an apparent social consensus only if you can identify a particularly strong way that the minds of most of those who think about the topic tend to get broken by the topic. Such as due to being asleep or suffering from a strong social desirability bias. (A few weak clues won’t do.)

I see this position as mildly supported by polls showing that people think that those in certain emotional states are less likely to be accurate in the context of a disagreement; different emotions plausibly trigger different degrees of willingness to be fair or rational. (Here are some other poll results on what people think predicts who is right in a disagreement.)

But beware of going too wild embracing most socially undesirable views. And you can’t just in general presume that others disagree with each of your many positions due to their minds being broken in some way that you can’t yet see. That way lies unjustified arrogance. You instead want specific concrete evidence of strongly broken minds.

Imagine that you specialize in a topic so much that you know nearly as much as the person in the world who knows the most, but do not have the sort of credentials or ways to prove your views that the world would easily accept. And this is not the sort of topic where insight can be quickly and easily translated into big wins, wins in either money or status. So if others had come to your conclusions before, they would not have gained much personally, nor found easy ways to persuade many others.

In this sort of case, I think you should feel more free to disagree. Though you should respect base rates, and try to test your views as fast and strongly as possible. As the world is just not listening to you, you can’t expect them yet to credit what you know. Just also don’t expect the world to reward you or pay you much attention, even if you are right.


Discussion Contests

My last post outlined how to make a better “sport” wherein people compete on, and are ranked by, their ability to persuade audiences of claims. Which might be a nice way to find/make sales-folk.

But what I’d really like is to find/make people good at informative discussion. That is, we the audience want to listen to people who are good at taking the floor of our attention and talking so as to more rapidly move our estimates toward higher-confidence values. And we want this more for the case where we are a reasonable rational audience, relative to our being easily swayed by demagoguery. We want to listen to people who will more rapidly change our reasonable minds.

Here’s an idea using betting markets. Imagine a topic for which we will later have some ex post objective measure of truth. We can thus create (possibly subsidized) betting markets over this space of outcomes. Also imagine having some info weights regarding different possible probability distributions over outcomes. Using these weights, we can create a single number saying how informative any given set of prices is. Thus we can say how much info was added (or subtracted) to those prices during any given time period.
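
To make that scoring idea concrete, here is a minimal sketch in Python. It assumes a simple log-score notion of informativeness, with the “info weights” read as weights over a set of candidate outcome distributions; the function names, the exact scoring rule, and the numbers are my illustrative choices, not a spec from this post.

import numpy as np

def price_info(prices, weights, candidates):
    """Informativeness of a market price vector: the expected log score
    of the prices, averaged over weighted candidate outcome distributions."""
    prices = np.asarray(prices, dtype=float)
    return sum(w * float(np.dot(q, np.log(prices)))
               for w, q in zip(weights, candidates))

def info_added(prices_before, prices_after, weights, candidates):
    """Info added to (or, if negative, subtracted from) prices over a period."""
    return (price_info(prices_after, weights, candidates)
            - price_info(prices_before, weights, candidates))

# Example: three outcomes, two candidate distributions with equal weight.
weights = [0.5, 0.5]
candidates = [np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])]
gain = info_added([1/3, 1/3, 1/3], [0.6, 0.3, 0.1], weights, candidates)
print(gain / 90.0)  # info per second, for scoring a 90-second speaking turn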

So if we have a center of attention “stage” wherein one speaker talks at a time, and if the audience participates in a betting market while they listen, then we can get a measure of the info added by each speaker while they spoke. So we can score each speaker on their info given per second of talking.

Okay, yes, there may be a delay between when a speaker says something and when a listener comes to realize its implications and then makes a resulting market trade. This is a reason to have speakers talk for longer durations, so that their score over this duration can include this delayed realization effect.

Now one way to use this is debate style. Give each speaker the same amount of total time, in the same-length time blocks, and see which one added the most info by the end. Repeat in many pairwise contests. But another approach is to instead just pay to try to get the most info out of any given set of potential speakers.

Imagine an auction for each short period of speaking. If you bid the most per second, you get to the center stage to talk, and then you will be paid in proportion to the info you end up contributing, according to market price changes. Speakers could bid on themselves, or investors might pay for speaker bids. (Let speakers bid for future time periods long enough to include the delayed realization effect.)
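
Here is a minimal sketch of that auction in Python. The second-price payment rule and the proportionality constant are my assumptions for illustration; the post only requires that the highest bidder speaks and is then paid in proportion to measured info.

def run_speaking_slot(bids_per_second, seconds, measure_info, pay_per_info=1000.0):
    """Auction off one speaking period, then settle payments.

    bids_per_second: dict of speaker -> bid per second of stage time.
    measure_info: function giving the info added to market prices while
      the winner spoke (including any delayed-realization window).
    Returns (winner, net payoff to winner)."""
    ranked = sorted(bids_per_second.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    second_price = ranked[1][1] if len(ranked) > 1 else 0.0
    cost = second_price * seconds                  # winner pays the runner-up's rate
    reward = pay_per_info * measure_info(winner)   # paid in proportion to info added
    return winner, reward - cost

# Example: Alice outbids Bob, speaks for 90 seconds, adds 0.15 units of info.
print(run_speaking_slot({"alice": 2.0, "bob": 1.5}, 90, lambda who: 0.15))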

Even if there were other sources of info possible, besides this center stage, this auction would still give a credible reason for most of the audience to pay some attention to the center stage. After all, the auction would have selected for the one person expected to be most worth listening to, at least on average.

So now, to induce an informative discussion on a topic, one both subsidizes prediction markets on that topic, and commits to pay each person who wins an auction to speak from a center stage a reward proportional to the info added to those prediction markets while they speak.

What if different time periods are expected to add different amounts of info to the market prices through channels other than the center stage speaker? This could bias the debate structure, but isn’t a problem for the auction structure. Auction bidders would bid more for those extra info time periods, but the winner would still be the speaker expected to add the most info.

This should be pretty easy to test in lab experiments. Who wants to help set them up?


New Sport of Debate?

Someone recently told me “Hey, you seem good at debate.” Which made me think “Yeah, the world needs more debate. Let’s design a better online debate forum.” Here’s an initial concept sketch.

Audience – These are people allowed to propose and rate debate claims, to propose matches, and to rate performance in them. Each declares their acceptable languages and formats (e.g., text, audio, video). Maybe want to ensure each human can only vote once per issue. To rate a debate, maybe they need to show that they heard the debate live.

Claim – A list of possible claims to debate. Are some topics off limits? Do editors curate the list to edit wordings and cut redundancies?

Debaters – People who have volunteered to debate particular claims. Each one can say which sides (pro or con) of which claims they would defend, in what languages and formats, at what day/times, and who they refuse to debate. (Can “math heavy” or “stat heavy” be languages?)

Debates – Two (or four?) participants publicly debate a given claim online at a given pre-announced time, in a given language and format, with some way to allocate speaking time roughly equally between participants. (Maybe Equatalk?) Some rule decides if debate is cancelled or postponed due to no-shows or health/tech/etc. issues.

Civility – Some process rules, e.g., if debaters can hurl insults, or introduce links for audience to check.

Opinions – Each audience member at a debate gives degree(s?) of support for the claim just before and just after the debate. Maybe state opinions before they know debate participants?

Matching – A process (algorithm?) to pick who debates whom when on what claim in what language, based on the claims that debaters have selected, debater ranks, popularity of claims and matches, and audience participation rates. Maybe do this to max predicted future debate audiences, or info to adjust rankings, or info that changes opinions.

Ranking – A process (algorithm?) to rank value (plus uncertainty?) of each debater, relative to others, based on no-show rates and the opinions expressed at their debates. Maybe opinions of higher ranked debaters count more. Maybe more debates, or being willing to debate more claims, counts more. Ideally the ranking rule is simple, public, and robust to criticism.
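
As one candidate for such a simple, public ranking rule, here is an Elo-style sketch in Python, where a debater gains rating when audience opinion moves toward the side they defended. The update formula, the constants, and the rescaling of opinion shifts are my illustrative assumptions, not part of the proposal above.

def update_ratings(ratings, pro, con, shift, k=32.0):
    """Elo-style rating update after one debate.

    ratings: dict of debater -> rating (e.g., everyone starts at 1500).
    pro, con: the debaters defending each side of the claim.
    shift: mean audience opinion movement toward the pro side, in [-1, 1],
      computed from opinions stated just before and just after the debate."""
    # Expected movement toward pro, given the current rating gap.
    expected = 1.0 / (1.0 + 10.0 ** ((ratings[con] - ratings[pro]) / 400.0))
    actual = (shift + 1.0) / 2.0  # rescale [-1, 1] onto [0, 1]
    ratings[pro] += k * (actual - expected)
    ratings[con] -= k * (actual - expected)
    return ratings

# Example: the audience moved modestly toward the pro side.
ratings = {"alice": 1500.0, "bob": 1500.0}
print(update_ratings(ratings, pro="alice", con="bob", shift=0.2))

Weighting opinions by the rater’s own rank, and penalties for no-shows, could then be layered on top of this core rule.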

Seems the next step here is to propose, critique, and choose more specific rules. Then someone can write or adapt software.

I see big gains from such a forum becoming popular. A good debate forum could become an alternate credentialing framework, to show that some people are good at real debate. (Not like those fake high school debates.) Maybe some new kinds of schools would form to teach people how to do well in such debates.

A related forum might rate participants more in terms of how well they “discuss” claims, and less in terms of persuading an audience toward some pre-defined conclusion. Maybe rate each on how much they moved audience members in any direction, as a proxy for being informative? The big question there seems to me: how can we do that rating, and who gets more weight in such ratings?

 


Best Case Contrarians

Consider opinions distributed over a continuous parameter, like the chance of rain tomorrow. Averaging over many topics, accuracy is highest at the median, and falls away for other percentile ranks. This is bad news for contrarians, who sit at extreme percentile ranks. If you want to think you are right as a contrarian, you have to think your case is an exception to this overall pattern, due to some unusual feature of you or your situation. A feature that suggests you know more than them.

Yet I am often tempted to hold contrarian opinions. In this post I want to describe the best case for being a contrarian. I’m not saying that most contrarians are actually in this best case. I’m saying that this is the case you most want to be in as a contrarian, as it can most justify your position.

I recently posted on how innovation is highest for more fragmented species, as species so often go wrong via conformity traps. For example, peacocks are now going wrong together with overly long tails. To win their local competitions, each peacock needs to have and pick the tails that are sexy to other peacocks, even if that makes them all more vulnerable to predators.

Salmon go wrong by having to swim up hard hazard-filled rivers to get to their mating groups. Only a third of them survive to return from that trip. Now imagine a salmon sitting in the ocean at the mouth of the river, saying to the other salmon:

We are suffering from a conformity trap here. I’m gonna stay and mate here, instead of going up river. If you stay here and mate with me, then we can avoid all those river hazards. We’ll survive, with more energy to help our kids, and win out over the others. Who’s with me?

Now salmon listening to this should wonder if genetic losers are especially likely to make such contrarian speeches. After all, they are the least likely to survive the river, and so the most desperate to avoid it. For all its harms, the river does function to sort out the salmon with the best genes. If you make it to the end, you know your mating partner will also be unusually fit.

So yes, those less likely to pass the river test are more likely to become salmon contrarians. But they aren’t the only ones. Also more likely are:
A) those who can better sort good from bad mates in other ways,
B) those who can better see the conformity traps, and see they are especially big,
C) those who can better see which are the best places to start alternatives to the conformity traps, and
D) those who happen to have invested less in, and thus are less tied to, existing traps. Like the young.

Our world suffers from myriad conformity traps. Like investors, who must coordinate with other investors (e.g., via the different levels of venture capital), and so may feel they must do crypto, as that’s what the others are doing, even if they don’t think that much of crypto. Like academics in fields that use too much math, who feel they also need to do too much math if they are to be respected there. Like journalists and think tank pundits, who feel they must write on the topics on which everyone else is talking, even if other topics are more important.

In all of these cases, it can make sense to try to initiate a contrarian alternative. If many others know about the existing conformity traps, they may also be looking for a chance to escape. The questions are then: when is the right time and place to initiate a contrarian move to escape such a trap? Who is best placed to initiate, and how? And what is the ratio of the gains of success to the costs of failure?

In situations like this, the people who actually try contrarian initiatives may not be at all wrong on their estimates about the truth. They will be different in some ways yes, but not necessarily overall on truth accuracy. In fact, they are likely to be more informed on average in the sense of being better able to judge the overall conformity trap situation, and to evaluate partners in unusual ways.

That is, they can better judge how bad is the overall conformity trap, where are promising alternatives, and who are promising partners. Even if, yes, they are also probably worse on average at winning within the usual conformity-trapped system. Compared to others, contrarians are on average better at being contrarians, and worse at being conformists. Duh.

And that’s the best case for being a contrarian. Not so much because you are just better able to see truth in general. But because you are likely better in particular at seeing when it is time to bail on a collective that is all going wrong together. If the gains from success are high relative to the costs of failure, then most such bids should fail, making the contrarian bid “wrong” most of the time. But not making most bids themselves into mistakes.


Three Types of General Thinkers

Ours is an era of rising ideological fervor, moving toward something like the Chinese cultural revolution, with elements of both religious revival and witch hunt repression. While good things may come of this, we risk exaggeration races, wherein people try to outdo each other to show loyalty via ever more extreme and implausible claims, policies, and witch indicators.

One robust check on such exaggeration races could be a healthy community of intellectual generalists. Smart thoughtful people who are widely respected on many topics, who can clearly see the exaggerations, see that others of their calibre also see them, and who crave such associates’ respect enough to then call out those exaggerations. Like the child who said the emperor wore no clothes.

So are our generalists up to this challenge? As such communities matter to us for this and many other reasons, let us consider more who they are and how they are organized. I see three kinds of intellectual generalists: philosophers, polymaths, and public intellectuals.

Public intellectuals seem easiest to analyze. Compared to other intellectuals, these mix with and are selected more by a wider public and a wider world of elites, and thus pander more to such groups. They less use specialized intellectual tools or language, their arguments are shorter and simpler, they impress more via status, eloquent language, and cultural references, and they must speak primarily to the topics currently in public talk fashion.

Professional philosophers, in contrast, focus more on pleasing each other than a wider world. Compared to public intellectuals, they are more willing to use specialized language for particular topics, to develop intricate arguments, and to participate in back and forth debates. As the habits and tools that they learn can be applied to a pretty wide range of topics, philosophers are in that sense generalists.

But philosophers are also very tied to their particular history. More so than in other disciplines, particular historical philosophers are revered as heroes and models. Frequent readings and discussions of their classic texts push philosophers to try to retain their words, concepts, positions, arguments, and analysis styles.

As I use the term, polymaths are intellectuals who meet the usual qualifications to be seen as expert in many different intellectual disciplines. For example, they may publish in discipline-specific venues for many disciplines. More points for a wider range of disciplines, and for intellectual projects that combine expertise from multiple disciplines. Learning and integrating many diverse disciplines can force them to generalize from discipline specific insights.

Such polymaths tend less to write off topics as beyond the scope of their expertise. But they also just write less about everything, as our society offers far fewer homes to polymaths than to philosophers or public intellectuals. They must mostly survive on the edge of particular disciplines, or as unusually-expert public intellectuals.

If the disciplines that specialize in thinking about X tend to have the best tools and analysis styles for thinking about X, then we should prefer to support and listen to polymaths, compared to other types of generalist intellectuals. But until we manage to fund them better, they are rarely available to hear from.

Public intellectuals have the big advantage that they can better get the larger world to listen to their advice. And while philosophers suffer their historical baggage, they have the big advantage of stable funding and freedoms to think about non-fashionable topics, to consider complex arguments, and to pander less to the public or elites.

Aside from more support for polymaths, I’d prefer public intellectuals to focus more on impressing each other, instead of wider publics or elites. And I’d rather they tried to impress each other more with arguments, than with their eliteness and culture references. As for philosophers, I’d rather that they paid less homage to their heritage, and instead more adopted the intellectual styles and habits that are now common across most other disciplines. The way polymaths do. I don’t want to cut all differences, but some cuts seem wise.

As to whether any of these groups will effectively call out the exaggerations of the coming era of ideological fervor, I alas have grave doubts.

I wrote this post as my Christmas present to Tyler Cowen; this topic was the closest I could manage to the topic he requested.


Argument Foreplay

The most prestigious articles in popular media tend to argue for a (value-adjacent) claim. And such articles tend to be long. Even so, most can’t be bothered to define their terms carefully, or to identify and respond to the main plausible counter-arguments to their argument. Such articles are instead filled with anecdotes, literary allusions, and the author’s history of thoughts on the subject. A similar thing happens even in many academic philosophy papers; they leave little space for their main positive argument, which is then short and weakly defended.

Consider also that while a pastor usually considers his or her sermon to be the “meat” of their service, that sermon takes a minority of the time, and is preceded by a great many other rituals, such as singing. And internally such sermons are usually structured like those prestigious media articles. The main argument is preceded by many not-logically-necessary points, leaving little time to address ambiguities or counter-arguments.

And consider sexual foreplay. Even people in a state where they are pretty excited, attracted, and willing are often put off by a partner pushing for too direct or rapid a transition to the actual sex act. They instead want a gradual series of increasingly intense and close interactions, which allow each party to verify that the other party has similar feelings and intentions.

In meals, we don’t want to get straight to a “main dish”, but prefer instead a series of dishes of increasing intensity. The main performers in concerts and political rallies are often preceded by opening acts. Movies in theaters used to be preceded by news and short films, and today are preceded by previews. Conversations often make use of starters and icebreakers; practical conversations are supposed to be preceded by small-talk. And revolutions may be preceded by increasingly dramatic riots and demonstrations.

What is going on here? Randall Collins’ book Interaction Ritual Chains explained this all for me. We humans often want to sync our actions and attention, to assure each other that we feel and think the same. And also that our partners are sufficiently skilled and impressive at this process. The more important is this assurance, the more we make sure to sync, and the more intensely and intricately we sync. And where shared values and attitudes are important to us, we make sure that those are strongly salient and relevant to our synced actions.

Regarding media articles and sermons, a direct if perhaps surprising implication of all this is that most of us are often not very open to hearing and being persuaded by arguments until speakers show us that they sufficiently share our values, and are sufficiently impressive in this performance. So getting straight to the argument point (as I often do) is often seen as rude and offensive, like a would-be seducer going straight to “can I put it in.”

The lack of attention to argument precision and to counter-arguments bothers them less, as they are relatively willing to accept a claim just on the basis of the impressiveness and shared values of the speaker. Yes, they want to be given at least one supporting argument, in case they need to justify their new position to challengers. But the main goal is to share beliefs with impressive value allies.


Status Explains Lots

Some complain that I try to explain too much of human behavior via signaling. But the social brain hypothesis and common observations suggest that we quite often do things with an eye to how they will make us look to others.

Here’s another big influence on human behavior strongly supported by both theory and common sense: status. While it seems obvious that dominance and prestige matter greatly in human behavior, even so it seems to me that we social scientists neglect them, just as we neglect signaling. In this post, I will try to support this claim.

Humans have only domesticated a tiny fraction of animal species, even smart primates. In fact, apes seem plenty smart and dexterous enough to support a real Planet of the Apes scenario, wherein apes do many useful jobs. The main problem is that apes see our giving them orders as an attempt to dominate them, which they sometimes fiercely resist.

And humans are if anything more sensitive to domination than are other primates. After all, while other primates had visible accepted dominance hierarchies, human foragers created “reverse dominance hierarchies” wherein the whole band (of ~20-50) coordinated to take down anyone who would try to overtly dominate them. Which both makes it plausible that dominance matters a lot to humans, and also raises the question of how it is that we’ve come to accept so much of it.

Farmers accepted more domination than did foragers; farmers had kings, classes, wealth inequality, slavery, and generals in war. But most farmers didn’t actually spend much time being directly dominated. War wasn’t the usual condition, most workers had no bosses, and most of their interactions were with people at their same level.

But in the modern world, most workers put up with far more than would most foragers or farmers. Our performance is frequently evaluated, we are ranked in great detail compared to many others around us, and we are given many detailed orders, and not just during an apprenticeship period. All of which allows our complex modern organizations and social interactions, the key to industrial-era wealth, but which raises the key question: how did we get Dom-averse humans to accept all this?

Bosses: It might seem odd to ask what bosses are for, as they have so many plausible functions to perform in orgs. Yet to explain many details, such as the kinds of people we pick for management, and the ways they spend their time, we must still ask which of these functions are the most important. And my guess is that one of the most important is to give workers excuses to obey them.

Here’s the simple story: we often have a choice about whether to frame an interaction as due to dominance or prestige. Humans are supposed to hate dominance, but to love prestige. So if we can frame our boss as prestigious, not dominant, we can tell ourselves and others that we are following their lead out of admiration and wanting to learn from them, not from fear of being fired. If so, firms will want to spend extra on hiring prestigious bosses, who are handsome, articulate, tall, well-educated, pro-social, smooth, etc., even if those features don’t that much improve management decisions. Which does in fact seem to be the case.

School: I’ve discussed several times my story that schools use prestige to train people to take orders:

When firms and managers from rich places try to transplant rich practices to poor places, giving poor place workers exactly the same equipment, materials, procedures, etc., one of the main things that goes wrong is that poor place workers just refuse to do what they are told. They won’t show up for work reliably on time, have many problematic superstitions, hate direct orders, won’t accept tasks and roles that deviate from their non-work relative status with co-workers, and won’t accept being told to do tasks differently than they had done them before, especially when new ways seem harder. … How did the industrial era get at least some workers to accept more domination, inequality, and ambiguity, and why hasn’t that worked equally well everywhere? … prestigious schools. … if humans hate industrial workplace practices when they see them as bosses dominating, but love to copy the practices of prestigious folks, an obvious solution is to habituate kids into modern workplace practices in contexts that look more like the latter than the former. … while early jobs threaten to trip the triggers that make most animals run from domination, schools try to frame a similar habit practice in more acceptable terms, as more like copying prestigious people. … Start with prestigious teachers [teaching prestigious topics]. … Have students take several classes at a time, so they have no single “boss” … Make class attendance optional, and let students pick their classes.… give … complex assignments with new ambiguous instructions,… lots of students per teacher, … to create social proof that other students accept all of this. Frequently and publicly rank student performance, using the excuse of helping students to learn.

In two recent twitter polls, I found a 7-2 ratio saying college teachers were more impressive/prestigious than one’s job supervisor then, and a 2-1 ratio for high school teachers. Many descriptions of teaching describe the impressiveness and status of teachers as central to the teaching process.

Governance: we are even more sensitive to dominance in our political leaders than in our workplace bosses. Which was why, all through history, each place tended to think they had a noble king, while neighbors had despicable tyrants. And why prestige was so important for kings. In the last few centuries we upped the ante via democracy, a supposedly prestigious mechanism wherein we pretend that all of us are really “ultimately” in control of the government, allowing us to claim that we are not being dominated by our leaders.

The main emotional drive toward socialism, regulation of business, and redistribution from the rich seems to me to be resentment of domination, which is how most people frame the fact that some have more money than others. Our ability to use democracy to frame government as prestige, not domination, lets us avoid seeing the government agencies that regulate and redistribute as dominating us. Furthermore, aversion to dominance by foreigners is the main cause of world poverty today:

Most nations today would be richer if they had long ago just submitted wholesale to a rich nation, allowing that rich nation to change their laws, customs, etc., and just do everything their way. But this idea greatly offends national and cultural pride. So nations stay poor.

Disagreement: I spent many years studying the topic of rational disagreement, and I’m now confident both that rational agents who mainly wanted accurate beliefs would not knowingly disagree, and that humans often knowingly disagree. Which implies that humans have some higher priorities than accuracy. And the strongest of these priorities seems to me to be to avoid domination. People often interpret being persuaded to move toward someone else’s position as being dominated by them. Which is why leaders so often ignore good advice given publicly by rivals. Pride is one of our main obstacles to rationality; it is the main reason we disagree. Prediction markets are able to induce an accurate consensus even in the presence of such pride, but pride prevents such markets from being allowed or adopted.

Mating: Dominance and submission seem central to mating; relations are often broken due to one party being either too dominant, or not dominant enough. See also clear evidence in BDSM:

~30% of participants in BDSM activities are females. … 89% of heterosexual females who are active in BDSM [prefer] the submissive-recipient role … [&] a dominant male, … 71% of heterosexual males preferred a dominant-initiator role … 19.2% of men and 27.8% of women express a desire to engage in masochistic behavior

So in this post I’ve outlined how status is central to bosses, school, governance, disagreement, and mating, more central than you might have realized. Status really does explain lots.


Status Trumps Argument

Are elites nicer than other people? No, but they are better at being nice contingently, in the right situations where niceness is rewarded. And also better at being mean contingently, in the situations where that is rewarded. Other people aren’t as good on average at correlating their niceness with rewards for niceness. A similar pattern applies to elites and arguments.

In a world with many strong prediction markets, social consensus would be set by the people willing and able to trade in those markets. Which could be most anyone. And those traders would in general be responsive to good arguments, as traders are on the hook to win or lose a lot of money if they fail to listen to good arguments. In this world, arguments would be a powerful force for producing better beliefs.

But in our world today, the perceived social consensus is mostly set by elites. That is, by whatever seems to be elites’ shared opinion. And so the power of arguments depends on elites being willing and able to listen to them. Do they?

Many elites are selected for their ability to generate and evaluate good arguments. So many are quite able to listen. But as with being nice, elites are especially good at contingent strategies: they generate and credit good arguments when they are rewarded for that, but not otherwise.

The key parameter that determines if an elite is rewarded for using and crediting good arguments is the relative status of the parties involved. When elites argue with equal status elites, their arguments may need to be good. At least if their particular audience values arguments.

But consider a case where two parties to a dispute are of very unequal status, and where the topic is one where there’s a perception that elite consensus agrees with the high status party. In this case, the higher status party only needs to offer the slim appearance of argument quality. Just blathering a few related words is often completely sufficient. Even if put together in context those words don’t really make much sense.

I have seen this happen many times personally. For example, if I argue with a higher status person, who for some reason engages with me in this context, and if my position is one seen as reasonable by the usual elite consensus, then my partner is careful to offer quality arguments, and to credit such arguments if I offer them. But if I take a position seen as against the current elite consensus, that same high status partner instead feels quite comfortable offering very weak and incoherent arguments.

(Yes, low status people follow this approach too, but high status people are better at executing this as a contingent strategy, and their choices matter more.)

Or consider all the crazy weak arguments offered by Project Bluebook to dismiss hard-to-explain UFO encounters. As they were confident that audiences would see UFO advocates as much lower status, they could blithely blather things like “swamp gas” that just didn’t fit case details.

Thus in our world today the quality of arguments only matters for positions “within the Overton window”. That is, positions that many elites are seen to take seriously. Which is why contrarian positions are so often unfairly dismissed. Even though, yes, most contrarian positions are wrong. And this is why we need to break out of our system of social consensus dominated so strongly by elites.

Added 20May: Note that this sort of thing can fool people who listen to such contrarian debates into underestimating the usual intellectual standards for non-contrarian topics. They may then think that arguments only modestly better than the ones elites use to dismiss them are of sufficient quality. But that isn’t remotely good enough.


Parsing Pictures of Mars Muck

On Thursday I came across this article, which discusses the peer-reviewed journal article, “Fungi on Mars? Evidence of Growth and Behavior From Sequential Images”. As its pictures seemed to me to suggest fungal life active now on Mars, I tweeted “big news!” Over the next few days it got some quite negative news coverage, mainly complaining that the first author (out of 11 authors) had no prestigious affiliation and expressed other contrarian opinions, and also that the journal charged fees to authors.

I took two small supportive bets and then several people offered me much larger bets, while no one at all offered to bet on my side. That is a big classic clue that you are likely wrong, and so I am for now backing down on my likelihood estimates on this. And thus not (yet) accepting more bets. But to promote social information aggregation, let me try to explain the situation as I now see it. I’ll then listen to your reactions before deciding how to revise my estimates.

First, our priors are that early Mars and early Earth were nearly equally likely as places for life to arise, with Mars being habitable sooner. The rates at which life would have been transferred between the two places look high, though sixty times higher from Mars to Earth than vice versa. Thus it seems nearly as likely that life started on Mars and then came to Earth, as that life started on Earth. And more likely than not, there was once some life on Mars.

Furthermore, studies that put today’s Earth life in Martian conditions find many that would survive and grow on Mars. So the only question is whether that sort of life ever arose on Mars, or was ever transferred from Earth to Mars. Yes, most of the Martian surface looks quite dead now, including most everything we’ve seen up close due to landers and rovers. But then so does most of the surface of Antarctica look dead, and yet we know it is not all dead. So the chance of life somewhere on Mars now is pretty high; the question is just how common might be the few special places in which Martian life survives.

This new paper offers no single “smoking gun”, but instead offers a collection of pictures that are together suggestive. Some of the authors have been slowly collecting this evidence over many years, and have presented some of it before. The evidence they point to is at the edge of detectability, as you should expect from the fact that the usual view is that we haven’t yet seen life on Mars.

Now if you search through enough images, you’ll find a few strange ones, like the famous “face on mars”, or this one from Mars:

But when there’s just one weird image, with nothing else like it, we mostly should go with a random error theory, unless the image seems especially clear.

In the rest of this post I’ll go over three kinds of non-unique images, and for each compare a conventional explanation to the exotic explanation suggested by this new paper.


The Debunking of Debunking

In a new paper in Journal of Social Philosophy, Nicholas Smyth offers a “moral critique” of “psychological debunking”, by which he means “a speech‐act which expresses the proposition that a person’s beliefs, intentions, or utterances are caused by hidden and suspect psychological forces.” Here is his summary:

There are several reasons to worry about psychological debunking, which can easily counterbalance any positive reasons that may exist in its favor:

1. It is normally a form of humiliation, and we have a presumptive duty to avoid humiliating others.
2. It is all too easy to offer such stories without acquiring sufficient evidence for their truth,
3. We may aim at no worthy social or individual goals,
4. The speech‐act itself may be a highly inefficient means for achieving worthy goals, and
5. We may unwittingly produce bad consequences which strongly outweigh any good we do achieve, or which actually undermine our good aims entirely.

These problems … are mutually reinforcing. For example, debunking stories would not augment social tensions so rapidly if debunkers were more likely to provide real evidence for their causal hypotheses. Moreover, if we weren’t so caught up in social warfare, we’d be much less likely to ignore the need for evidence, or to ignore the need to make sure that the values which drive us are both worthy and achievable.

That is, people may actually have hidden motives, these might in fact explain their beliefs, and critics and audiences may have good reasons to consider that possibility. Even so, Smyth says that it is immoral to humiliate people without sufficient reason, and we in fact do tend to humiliate people for insufficient reasons when we explain their beliefs via hidden motives. Furthermore, we tend to lower our usual epistemic standards to do so.

This sure sounds to me like Smyth is offering a psychological debunking of psychological debunking! That is, his main argument against such debunking is via his explaining this common pattern, that we explain others’ beliefs in terms of hidden motives, by pointing to the hidden motives that people might have to offer such explanations.

Now Smyth explicitly says that he doesn’t mind general psychological debunking, only that offered against particular people:

I won’t criticize high‐level philosophical debunking arguments, because they are distinctly impersonal: they do not attribute bad or distasteful motives to particular persons, and they tend to be directed at philosophical positions. By contrast, the sort of psychological debunking I take issue with here is targeted at a particular person or persons.

So presumably Smyth doesn’t have an issue with our book The Elephant in the Brain: Hidden Motives in Everyday Life, as it also stays at the general level and doesn’t criticize particular people. And so he also thinks his debunking is okay, because it is general.

However, I don’t see how staying with generalities saves Smyth from his own arguments. Even if general psychological debunking humiliates large groups all at once, instead of individuals one at a time, it is still humiliation. Which he might still do, yet should avoid given his own list of worries: inadequate reasons, lowered epistemic standards, better ways to achieve his goals, and the risk of unwittingly producing bad consequences that outweigh the good. Formally his arguments work just as well against general as against specific debunking.

I’d say that if you have a general policy of not appearing to pick fights, then you should try to avoid arguing by blaming your opponents’ motives if you can find other arguments sufficient to make your case. But that’s just an application of the policy of not visibly picking fights when you can avoid them. And many people clearly seem to be quite willing and eager to pick fights, and so don’t accept this general policy of avoiding fights.

If your policy were just to speak the most relevant truth at each point, to most inform rational audience members at that moment on a particular topic, then you probably should humiliate many people, because in fact hidden motives are quite common and relevant to many debates. But this speak-the-most-truth policy tends to lose you friends and associates over the longer run, which is why it is usually not such a great strategy.
