Tag Archives: Disagreement

Joiners v. Middlers

Kelley … traced the success of conservative churches to their ability to attract and retain an active and committed membership, characteristics that he in turn attributed to their strict demands for complete loyalty, unwavering belief, and rigid adherence to a distinctive lifestyle. … [Such] a group limits and thereby increases the cost of non-group activities, such as socializing with members of other churches or pursuing “secular” pastimes. …

Seemingly unproductive costs … screen out people whose participation would otherwise be low, while at the same time they increase participation among those who do join. As a consequence, apparently unproductive sacrifices can increase the utility of group members. Efficient religions with perfectly rational members may thus embrace stigma, self-sacrifice, and bizarre behavioral standards. …

When we group religions according to the (rated) stringency of their demands, … [we see that] compared to members of other Protestant denominations, [high-demand] sect members are poorer and less educated, contribute more money and attend more services, hold stronger beliefs, belong to more church-related groups, and are less involved in secular organizations. … Data from the 1990 National Jewish Population Survey reveal patterns of interdenominational variation virtually identical to those observed within Protestantism. (more)

I see these tendencies in opinions:

  1. Those with more opinions on some topic categories have more on other categories.
  2. Those with more opinions overall have more extreme opinions on each topic.
  3. Those with more extreme opinions on some topics have more extreme opinions on others.
  4. Those with more extreme opinions are more eager to express their opinions, and vice versa.
  5. Those with more extreme opinions are more eager to join groups and attend their meetings.

(All these could have instead been expressed in terms of less extreme opinions, and “extreme” means noticeably away from the distribution middle.)

One might try to explain these by saying that opinions on a few key topics drive most other opinions. Folks with weak opinions on key topics thus have fewer opinions on other topics, and less interest in expressing opinions or in joining groups to spread the word. Yet there is little evidence that such key opinions exist; most people show little correlation of opinion across topics, or even on the same subject across time.

A more plausible explanation follows the quote above on religion. Religions, ideologies, and other idea-affiliated social groups vary in the level of commitment they ask of members. High commitment groups produce stronger community bonds, and people vary in their taste for such strong bonds. Some folks are “joiners,” with a taste for more strongly bonded groups. Joiners have an induced taste for groups with extreme opinions, and thus an induced taste to have their own more extreme opinions, in order to better fit with stronger groups. Thus joiners tend to let themselves have more opinions and more extreme opinions on many topics.

The opposite group are “middlers,” who prefer to get along mildly well with most everyone, instead of bonding more tightly with a smaller group. Middlers have fewer opinions, fewer extreme opinions, and tend not to join groups that are clearly distinguished by being associated with unusual opinions.

The opinion habits of both joiners and middlers come mainly from social preferences, rather than from a preference for belief accuracy. While it isn’t obvious which group is more wrong on average, it is more obviously wrong, from an accuracy standpoint, to embody the opinion correlations described above.


Debate Is Now Book

Back in 2008 Eliezer Yudkowsky blogged here with me, and over several months we debated his concept of “AI foom.” In 2011 we debated the subject in person. Yudkowsky’s research institute has now put those blog posts and a transcript of that debate together in a free book: The Hanson-Yudkowsky AI-Foom Debate.

Added 6Sept: Bryan Caplan weighs in.


Why Do Bets Look Bad?

Most social worlds lack a norm of giving much extra respect to claims supported by offers to bet. This is a shame, because such norms would discourage insincere, untruthful claims, and so make for more accurate beliefs in listeners. But instead of advocating for change, in this post I wonder: why are such norms rare?

Yes there are random elements in which groups have which norms, and yes given a local norm that doesn’t respect bets it looks weird to offer bets there. But in this post I’m looking more to explain which norms appear where, and less to explain who follows which norms.

Bets have been around for a long time, and by now most intellectuals understand them, and know that all else equal those who really believe more strongly are willing to bet more. So you might think it wouldn’t be that hard for a betting norm to get added on to all other local norms and cultural factors; all else equal respect bets as showing confidence. But if this happens it must be counter-balanced by other effects, or bets wouldn’t be so rare. What are these other effects?
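To make the confidence signal concrete, here is a minimal sketch (in Python; the stakes and odds are illustrative assumptions, not from the post) of why, all else equal, a stronger believer is willing to stake more:

```python
# Minimal sketch: expected profit from a bet as a function of belief.
# The stakes and odds below are illustrative assumptions.

def expected_profit(p_win: float, stake: float, decimal_odds: float) -> float:
    """Expected profit of risking `stake` at `decimal_odds`,
    given subjective probability `p_win` of winning."""
    return p_win * stake * (decimal_odds - 1) - (1 - p_win) * stake

for p in (0.55, 0.70, 0.90):
    # At even odds (decimal 2.0), only beliefs above 0.5 expect a gain,
    # and the expected gain grows with the strength of the belief.
    print(f"belief={p:.2f}  expected profit per $1: {expected_profit(p, 1.0, 2.0):+.2f}")
```

Under this toy model, willingness to accept a larger stake at given odds is evidence of a stronger belief, which is exactly the inference a betting norm would license.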

While info often gets overtly shared in casual conversation, most of that info doesn’t seem very useful.  I thus conclude that casual conversation isn’t mainly about overtly sharing info. So I assume the obvious alternative: casual conversation is mostly about signaling (which is covert or indirect info sharing). But still the puzzle remains: whatever else we signal via conversation, why don’t we typically expect a betting offer to signal overall-admirable confidence in a claim?

One obvious general hypothesis to consider here is that betting signals typically conflict with or interact with other signals. But which other signals, and how? In the rest of this post I explore a few bad-looking features that bets might signal:

  • Sincerity – In many subcultures it looks bad to care a lot about most any topic of casual conversation. Such passion suggests that you just don’t get the usual social functions of such conversations. Conversationalists ideally skip from topic to topic, showing off their wits, smarts, loyalties, and social connections, but otherwise caring little about the truth on particular topics. Most academic communities seem to have related norms. Offers to bet, in contrast, suggest you care too much about the truth on a particular topic. Most listeners don’t care if your claim is true, so aren’t interested in your confidence. Of course on some topics people are expected to care a lot, so this doesn’t explain fewer bets there.
  • Conflict – Many actions we take are seen as signals of cooperation or conflict. That is, our actions are seen as indicating that certain folks are our allies, and that certain other folks are our rivals or opponents. A bet offer can be seen as an overt declaration of conflict, and thus can make one look overly confrontational, especially within a group that sees itself as mainly made of allies. We often try to portray any apparent conflict in casual conversations as just misunderstandings or sharing useful info, but bets are harder to portray that way.
  • Provinciality – Bets are most common today in sports, and sport arguments and bets seem to be mostly about showing loyalty to particular teams. In sports, confrontation over such loyalties is more ok, even expected. Offering to bet on a team is seen as much like offering to have a fist fight to defend your team’s honor. Because of this association with regional loyalties in sports, offers to bet outside of sports are also seen as affirmations of loyalties, and thus as conflicting with norms of a universal intellectual community.
  • Imprudence – Some folks are impulsive and spend available resources on whatever suits their temporary fancy, until they just run out. Others are careful to limit their spending via various simple self-control rules on how much they may spend how often on what kinds of things. Unless one is in the habit of betting often from a standard limited betting budget, bets look like unusual impulsive spending. Bettors seem insufficiently able to control their impulsive urges to show sincerity, make conflict, or signal loyalties.
  • Disloyalty – In many conversations it is only ok to quote as sources or supports people outside the conversation who are “one of us.” Since betting markets must have participants on both sides of a question, they will have participants who are not part of “us”. Thus quoting betting market odds in support of a claim inappropriately brings “them” in to “our” conversation. Inviting insiders to go bet in those markets also invites some of “us” to interact more with “them”, which also seems disloyal.
  • Dominance – In conversation we often pretend to support an egalitarian norm where the wealth and social status of speakers is irrelevant to which claims are accepted or rejected by the audience. Offers to bet conflict with that norm, by seeming to favor those with more money to bet. Somehow, being smarter, more articulate, or having more free time to read are considered acceptable bases for conversational inequities. While richer folks could be expected to bet more, the conversation would have to explicitly acknowledge that they are richer, which is rude.
  • Greed – We often try to give the impression that we talk mainly to benefit our listeners. This is a sacred activity. Offering to bet money makes it explicit that we seek personal gains, which is profane. This is why folks sometimes offer to bet for charity, where the money goes to the winner’s favorite charity. But that looks suspiciously like bringing profane money-lenders into a sacred temple.

Last week I said bets can function much like arguments that offer reasons for a conclusion. If so, how do arguments avoid looking bad in these ways? Since the cost to offer an argument is much less than the cost to offer a bet, arguments seem less imprudent and signal sincerity less strongly. Since the benefits from winning arguments aren’t explicit, one can pretend to be altruistic in giving them. Also, you can pretend an argument is not directed at any particular listener, and so is not a bid for conflict. Since most arguments today are not about sports, arguments evoke less the image of a sports-regional loyalty signal. And as long as you don’t quote outsiders, arguments seem less an invitation to invoke or interact with outsiders.

If we are to find a way to make bets more popular, we’ll need to find ways to let people make bets without sending these bad-looking signals.

Added: It is suspicious that I didn’t do this analysis much earlier. This is plausibly due to the usual corrupting effect of advocacy on analysis; because I advocated betting, I analyzed it insufficiently.


Drexler Responds

Three weeks ago I critiqued Eric Drexler’s book Radical Abundance. Below the fold is his reply, and my response:


Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth we don’t want to hear actually is.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth stay hidden, from the middle of a conversation where we know much accuracy has already been sacrificed for unknown reasons?


Bets Argue

Imagine a challenge:

You claim you strongly believe X and suggest that we should as well; what supporting arguments can you offer?

Imagine this response:

I won’t offer arguments, because the arguments I might offer now would not necessarily reveal my beliefs. Even all of the arguments I have ever expressed on the subject wouldn’t reveal my beliefs on that subject. Here’s why.

I might not believe the arguments I express, and I might know of many other arguments on the subject, both positive and negative, that I have not expressed. Arguments on other topics might be relevant for this topic, and I might have changed my mind since I expressed arguments. There are so many random and local frictions that influence which particular arguments people express on which particular subjects, and you agree I should retain enough privacy to not have to express all the arguments I know. Also, if I gave arguments now I’d probably feel more locked into that belief and be less willing to change it, and we agree that would be bad.

How therefore could you possibly be so naive as to think the arguments I might now express would reveal what I believe? And that is why I offer no supporting arguments for my claim.

Wouldn’t you feel this person was being unreasonably evasive? Wouldn’t this response suggest at least that he doesn’t in fact know of good supporting arguments for this belief? After all, even if many random factors influence what arguments you express when, and even if you may know of many more arguments than you express, still typically on average the more good supporting arguments you can offer, the more good supporting arguments you know, and the better supported your belief.

This is how I feel about folks like Tyler Cowen who say they feel little obligation to make or accept offers to bet in support of beliefs they express, or to think less of others who similarly refuse to bet on beliefs they express. (Adam Gurri links to ten posts on the subject here.)

Yes of course, due to limited options and large transaction costs most financial portfolios have only a crude relation to holder beliefs. And any one part of a portfolio can be misleading since it could be cancelled by other hidden parts. Even so, typically on average observers can reasonably infer that someone unwilling to publicly bet in support of their beliefs probably doesn’t really believe what they say as much as someone who does, and doesn’t know of as many good reasons to believe it.

It would be reasonable to point to other bets or investments and say “I’ve already made as many bets on this subject as I can handle.” It is also reasonable to say you are willing to bet if a clear verifiable claim can be worked out, but that you don’t see such a claim yet. It would further be reasonable to say that you don’t have strong beliefs on the subject, or that you aren’t interested in persuading others on it. But to just refuse to bet in general, even though you do express strong beliefs you try to persuade others to share, that does and should look bad.

Added 4July: In honor of Ashok Rao, more possible responses to the challenge:

A norm of thinking less of claims by those who offer fewer good supporting arguments is biased against people who talk slow, are shy of speaking, or have bad memory or low intelligence. Also, by discouraging false claims we’d discourage innovation, and surely we don’t want that.


Imagine Farmer Rights

Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.

One possible counterargument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come from limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?

To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:

  1. Work in the open with fresh air and sun.
  2. See how all food is grown and prepared.
  3. Nights outside are usually quiet and dark.
  4. Quickly get to a mile-long all-nature walk.
  5. All one meets are folks one knows, or folks known by them.
  6. Easily take apart devices, to see materials, mechanisms.
  7. Authorities with clear answers on cosmology, morality.
  8. Severe punishment of heretics who contradict authorities.
  9. Prior generations quickly make room for new generations.
  10. Rule by a king of our ethnicity, with clear inheritance.
  11. Visible deference from nearby authority-declared inferiors.
  12. More?

Would our lives today be better or worse because of such rights?

Added: I expect to hear this response:

Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.


What Do We Know That We Can’t Say?

I’ve been vacationing with family this week, and (re-)noticed a few things. When we played miniature golf, the winners were treated as if shown to be better skilled than previously thought, even though score differences were statistically insignificant. Same for arcade shooter contests. We also picked which Mexican place to eat at based on one person saying they had eaten there once and it was ok, even though, given how random who likes what when is, that one report was unlikely to be a statistically significant basis for estimating what the rest of us would like.
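As a rough illustration of the miniature golf point, here is a hypothetical simulation (all numbers are my assumptions) in which every player is equally skilled, yet each round still produces a clear-looking “winner”:

```python
# Hypothetical simulation: equally skilled mini-golf players still
# produce big-looking score spreads from noise alone.
import random

random.seed(0)
N_PLAYERS, N_HOLES, N_ROUNDS = 4, 18, 10_000

spreads = []
for _ in range(N_ROUNDS):
    # Every player's per-hole score is drawn from the same distribution.
    totals = [sum(random.randint(2, 6) for _ in range(N_HOLES))
              for _ in range(N_PLAYERS)]
    spreads.append(max(totals) - min(totals))

print("average winner-to-loser spread:", sum(spreads) / N_ROUNDS)
print("share of rounds with a spread of 5+ strokes:",
      sum(s >= 5 for s in spreads) / N_ROUNDS)
```

Since skill differences here are zero by construction, any extra respect the “winner” earns is a response to pure noise.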

The general point is that we quite often collect and act on rather weak info clues. This could make good info sense. We might be slowly collecting small clues that eventually add up to big clues. Or if we know well which parameters matter the most, it can make sense to act on weak clues; over a lifetime this can add up to net good decisions. When this is what is going on, then people will tend to know many things they cannot explicitly justify. They might have seen a long history of related situations, and have slowly accumulated enough relevant clues to form useful judgments, but not be able to explicitly point to most of those weak clues which were the basis of that judgment.
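The “weak clues add up” idea has a standard Bayesian form: each independent clue contributes its log likelihood ratio, and many small contributions can sum to a confident judgment. A minimal sketch, with clue strengths that are assumptions for illustration:

```python
# Sketch: many weak, independent clues accumulating into a strong belief.
# The likelihood ratios below are illustrative assumptions.
import math

def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior through independent clues, each given as a
    likelihood ratio P(clue | H) / P(clue | not H)."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    return 1 / (1 + math.exp(-log_odds))

# Thirty clues, each only weakly favoring the hypothesis (ratio 1.2),
# move a 50/50 prior to near certainty.
print(f"posterior: {posterior(0.5, [1.2] * 30):.3f}")  # ~0.996
```

The catch is that the accumulation is only this clean if the clues are roughly independent and honestly weighed, which is exactly what the intuitive version of the process cannot verify.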

Another thing I noticed on vacation is that a large fraction of my relatives age 50 or older think that they know that their lives were personally saved by medicine. They can tell of one or more specific episodes where a good doctor did the right thing, and they’d otherwise be dead. But people just can’t on average have this much evidence, since we usually find it hard to see effects of medicine on health even when we have datasets with thousands of people. (I didn’t point this out to them – these beliefs seemed among the ones they held most deeply and passionately.) So clearly this intuitive collection of weak clues stuff goes very wrong sometimes, even on topics where people suffer large personal consequences. It is not just that random errors can show up; there are topics on which our minds are systematically structured, probably on purpose, to greatly mislead us in big ways.

One of the biggest questions we face is thus: when are judgments trustworthy? When can we guess that the intuitive slow accumulation of weak clues by others or ourselves embodies sufficient evidence to be a useful guide to action? At one extreme, one could try to never act on anything less than explicitly well-founded reasoning, but this would usually go very badly; we mostly have no choice but to rely heavily on such intuition. At the other extreme, many people go their whole lives relying almost entirely on their intuitions, and they seem to mostly do okay.

In between, people often act like they rely on intuition except when good solid evidence is presented to the contrary, but they usually rely on their intuition to judge when to look for explicit evidence, and when that is solid enough. So when those intuitions fail the whole process fails.

Prediction markets seem a robust way to cut through this fog; just believe the market prices when available. But people are usually resistant to creating such market prices, probably exactly because doing so might force them to drop treasured intuitive judgments.

On this blog I often present weak clues, relevant to important topics, but by themselves not sufficient to draw strong conclusions. Usually commenters are eager to indignantly point out this fact. Each and every time. But on many topics we have little other choice; until many weak clues are systematically collected into strong clues, weak clues are what we have. And the topics of where our intuitive conclusions are most likely to be systematically biased tend to be those sort of topics. So I’ll continue to struggle to collect whatever clues I can find there.


Best To Mix Odd, Ordinary

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories.” … Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview. (more; HT Tyler)

Some people just like to be odd. I’ve noticed that those who tend to accept unusual conclusions in one area tend to accept unusual conclusions in other areas too. They also tend to choose odd topics on which to have opinions, and to base their odd conclusions on odd methods, assumptions, and sources. So opinions on odd topics tend to be unusually diverse, and tend to be defended with an unusually wide range of methods and assumptions.

These correlations are mostly mistakes, for the purpose of estimating truth, if they are mainly due to differing personalities. Thus relative to the typical pattern of opinion, you should guess that the truth varies less on unusual topics, and more on usual topics. You should guess that odd methods, sources, and assumptions are neglected on ordinary topics, but overused on odd topics. And you should guess that while on ordinary topics odd conclusions are neglected, on odd topics it is ordinary conclusions that are neglected.
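One way to cash out these corrections is a toy signal-extraction model: treat an expressed opinion as truth plus a personality-driven taste for oddness, with the personality term contributing more variance on odd topics, so the best estimate shrinks odd-topic opinions harder toward the ordinary view. A sketch under those assumed variances:

```python
# Toy signal-extraction model of the correction described above.
# All variances are illustrative assumptions.

def shrunk_estimate(opinion: float, var_truth: float, var_personality: float) -> float:
    """Best linear estimate of the truth from an expressed opinion,
    modeling opinion = truth + personality noise (both mean zero,
    measured as distance from the conventional view)."""
    weight = var_truth / (var_truth + var_personality)
    return weight * opinion

opinion = 1.0  # same expressed deviation from the conventional view
print("ordinary topic estimate:", shrunk_estimate(opinion, 1.0, 0.5))  # 0.67
print("odd topic estimate:     ", shrunk_estimate(opinion, 1.0, 3.0))  # 0.25
```

The same deviant opinion is thus taken more seriously on an ordinary topic than on an odd one, matching the advice above.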

As an example of the methods point, the way to establish a new method or source is to show that it usually gives the same conclusions as old methods and sources. Once established, one can take it seriously in the rare cases where it disagrees with the old ones.

A related point is that if you create a project or organization to pursue a risky unusual goal, as in a startup firm, you should try to be ordinary on most of your project design dimensions. By being conservative on all those other dimensions, you give your risky idea its best possible chance of success.

My recent work has been on a very unusual topic: the social implications of brain emulations. To avoid the above mentioned biases, I thus try to make ordinary assumptions, and to use ordinary methods and sources.


In Praise Of Ads

As Katja and I discussed in our podcast on ads, most people we know talk as if they hate, revile, and despise ads. They say ads are an evil destructive manipulative force that exists only because big bad firms run the world, and use ads to control us all.

Yet most such folks accept the usual argument that praises news and research for creating under-provided info which is often socially valuable. And a very similar argument applies to ads. By creating more informed consumers, ads induce producers to offer better prices and quality, which benefits other consumers.

This argument can work even if ads are not optimally designed to cram a maximal amount of relevant info into each second or square inch of ads. After all, news and research can be good overall even if most of it isn’t optimally targeted toward info density or social value. Critics note that the style of most ads differs greatly from the terse no-nonsense textbook, business memo, or government report that many see as the ideal way to efficiently communicate info. But the idea that such styles are the most effective ways to inform most people seems pretty laughable.

While ad critics often argue that ads only rarely convey useful info, academic studies of ads usually find the sort of correlations that you’d expect if ads often conveyed useful product info. For example, there tend to be more ads when ads are more believable, and more ads for new products, for changed products, and for higher quality products.

Many see ads as unwelcome persuasion, changing our beliefs and behaviors contrary to how we want these to change. But given a choice between ad-based and ad-free channels, most usually choose ad-based channels, suggesting that they consider the price and convenience savings of such channels to more than compensate for any lost time or distorted behaviors. Thus most folks mostly approve (relative to their options) of how ads change their behavior.

Many complain that ads inform consumers more about the images and identities associated with products than about intrinsic physical features. We buy identities when we buy products. But what is wrong with this if identities are in fact what consumers want from products? As Katja points out, buying identities is probably greener than buying physical objects.

So why do so many say they hate ads if most accept ad influence and ads add socially-valuable info? One plausible reason is that ads expose our hypocrisies – to admit we like ads is to admit we care a lot about the kinds of things that ads tend to focus on, like sex appeal, and we’d rather think we care more about other things.

Another plausible reason is that we resent our core identities being formed via options offered by big greedy firms who care little for the ideals we espouse. According to our still deeply-embedded forager sensibilities, identities are supposed to be formed via informal interactions between apparently equal allies who share basic values.

But if we accept that people want what they want, and just seek to get them more of that, we should praise ads. Ads inform consumers, which disciplines firms to better get consumers what they want. And if you don’t like what people want, then blame those people, not the ads. Your inability to persuade people to want what you think they should want is mostly your fault. If you can’t get people to like your product, blame them or yourself, not your competition.

Added 10a: Matt at Blunt Object offers more explanations.
