Tag Archives: Disagreement

Take Origins Seriously

We have a strong tendency to believe what we were taught to believe. This is a serious problem when we were taught different things. How can we rationally have much confidence in the beliefs we were taught, if we know that others were taught to believe other things? In order to overcome this bias, we either need to find a way to later question our initial teachings so well that we eliminate this correlation between our beliefs and our early teachings, or we need to find strong arguments for why one should expect more accurate beliefs to come from the source of our personal teaching, arguments that should persuade people regardless of their teaching. These are both hard standards to meet.

We also have strong tendencies to acquire tastes. Many of the things we like we didn’t like initially, but came to like after a time. In foods, kids don’t initially like spice or bitterness, or meat, especially raw. Kids don’t initially like jogging or structured exercise, or cold showers, or fist fights, but many claim later to love such things. People find they love the kinds of music they grew up with more than other kinds. People who grow up with arranged marriages generally like them, while those who don’t are horrified. Many kids find the very idea of sex repellent, but later come to love it. Particular sex practices seem repellent or not depending on how one is exposed to them.

Now some change in tastes over time could be due to new expressions of hormones at different ages, and some can be the honest discovery of a long-term compatibility between one’s genetic nature and particular practices. But honestly, these just aren’t very plausible explanations for most of our acquired tastes. Instead, it seems that we are designed to acquire tastes according to which things seem high status, make us look good, are endorsed by our community, etc.

Now one doesn’t need to doubt culturally-acquired tastes in the same way one should doubt culturally-acquired beliefs. Once you’ve gone through the early acquiring process your tastes may really be genuine, in the sense of really making you happy when satisfied. But you do have to wonder if you could come to acquire new tastes. And even if you are too old for that, you have to wonder what kind of tastes new kids could acquire. There seem to be huge gains from choosing the kinds of tastes to have new kids acquire. If they’d be just as happy with such tastes later, why not get kids to acquire tastes for hard work, for well-paid work, or for products that are easier to make? For example, why not encourage a taste for common products, instead of for massive product variety?

The points I’m making are old, and often go under the label “cultural relativity.” This is sometimes summarized as saying that nothing is true or good, except relative to a culture. Which is of course just wrong. But that doesn’t mean there aren’t huge important issues here. The strong ability of cultures to influence our beliefs and tastes does force us to question our beliefs and tastes. But on the flip side, this strong effect offers the promise of big gains in both belief accuracy and happiness efficiency, if only we can think through this culture stuff well.


Disciplines As Contrarian Correlators

I’m often interested in subjects that fall between disciplines, or more accurately that intersect multiple disciplines. I’ve noticed that it tends to be harder to persuade people of claims in these areas, even when one is similarly conservative in basing arguments on standard accepted claims from relevant fields.

One explanation is that people realize that they can’t gain as much prestige from thinking about claims outside their main discipline, so they just don’t bother to think much about such claims. Instead they default to rejecting claims if they see any reason whatsoever to doubt them.

Another explanation is that people in field X more often accept the standard claims from field X than they accept the standard claims from any other field Y. And the further away in disciplinary space is Y, or the further down in the academic status hierarchy is Y, the less likely they are to accept a standard Y claim. So an argument based on claims from both X and Y is less likely to be accepted by X folks than a claim based only on claims from X.

A third explanation is that people in field X tend to learn and believe a newspaper version of field Y that differs from the expert version of field Y. So X folks tend to reject claims that are based on expert versions of Y claims, since they instead believe the differing newspaper versions. Thus a claim based on expert versions of both X and Y claims will be rejected by both X and Y folks.

These explanations all have a place. But a fourth explanation just occurred to me. Imagine that smart people who are interested in many topics tend to be contrarian. If they hear a standard claim of any sort, perhaps 1/8 to 1/3 of the time they will think of a reason why that claim might not be true, and decide to disagree with this standard claim.

So far, this contrarianism is a barrier to getting people to accept any claims based on more than a handful of other claims. If you present an argument based on five claims, and your audience tends to randomly reject more than one fifth of claims, then most of your audience will reject your claim. But let’s add one more element: correlations within disciplines.
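The rejection arithmetic in the paragraph above can be sketched quickly. Here is a minimal illustration (in Python; the function name is mine, and the rejection rates are the post's illustrative figures) of how per-claim rejection compounds across a multi-claim argument:

```python
# Sketch: if a listener independently rejects each supporting claim
# with probability p, an argument resting on n claims is accepted in
# full with probability (1 - p) ** n.

def acceptance_prob(p_reject: float, n_claims: int) -> float:
    """Chance that a listener accepts all n independent claims."""
    return (1.0 - p_reject) ** n_claims

# Five claims, each rejected one fifth of the time: only about a
# third of the audience accepts the overall argument.
print(acceptance_prob(0.2, 5))    # 0.8 ** 5 ~= 0.33
print(acceptance_prob(1 / 8, 5))  # ~= 0.51
print(acceptance_prob(1 / 3, 5))  # ~= 0.13
```

Even a modest per-claim rejection rate, compounded over five claims, leaves most of the audience rejecting the conclusion.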

Assume that the process of educating someone to become a member of discipline X tends to induce a correlation in contrarian tendencies. Instead of independently accepting or rejecting the claims that they hear, they see claims in their discipline X as coming in packages to be accepted or rejected together. Some of them reject those packages and leave X for other places. But the ones who haven’t rejected them accept them as packages, and so are open to arguments that depend on many parts of those packages.

If people who learn area X accept X claims as packages, but evaluate Y claims individually, then they will be less willing to accept claims based on many Y claims. To a lesser extent, they also reject claims based on some Y claims and some X claims.
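As a hedged illustration of this package effect, here is a small Monte Carlo sketch. All of the numbers (claim counts, a 25% per-claim rejection rate) are made-up assumptions for illustration, not figures from the post:

```python
import random

def accepts_argument(n_x: int, n_y: int, p_reject: float,
                     x_packaged: bool, rng: random.Random) -> bool:
    """One listener from field X judging an argument that rests on
    n_x home-field claims and n_y outside-field claims."""
    if x_packaged:
        # X claims come as a package; anyone still in field X has
        # already accepted the whole bundle.
        accept_x = True
    else:
        accept_x = all(rng.random() > p_reject for _ in range(n_x))
    # Outside-field Y claims are always judged one by one.
    accept_y = all(rng.random() > p_reject for _ in range(n_y))
    return accept_x and accept_y

rng = random.Random(0)
trials = 100_000
for packaged in (False, True):
    wins = sum(accepts_argument(3, 2, 0.25, packaged, rng)
               for _ in range(trials))
    print(f"packaged={packaged}: acceptance rate {wins / trials:.3f}")
```

With these assumed numbers, packaging the home-field claims roughly doubles the acceptance rate, since only the two outside-field claims still face independent rejection.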

Note that none of these explanations suggest that these claims are actually false more often; they are just rejected more.


Show Outside Critics

Worried that you might be wrong? That you might be wrong because you are biased? You might think that your best response is to study different kinds of biases, so that you can try to correct your own biases. And yes, that can help sometimes. But overall, I don’t think it helps much. The vast depths of your mind are quite capable of tricking you into thinking you are overcoming biases, when you are doing no such thing.

A more robust solution is to seek motivated and capable critics. Real humans who have incentives to find and explain flaws in your analysis. They can more reliably find your biases, and force you to hear about them. This is of course an ancient idea. The Vatican has long had “devil’s advocates”, and many other organizations regularly assign critics to evaluate presented arguments. For example, academic conferences often assign “discussants” tasked with finding flaws in talks, and journals assign referees to criticize submitted papers.

Since this idea is so ancient, you might think that the people who talk the most about trying to overcome bias would apply this principle far more often than others do. But from what I’ve seen, you’d be wrong.

Oh, almost everyone circulates drafts among close associates for friendly criticism. But that criticism is mostly directed toward avoiding looking bad when they present to a wider audience. Which isn’t at all the same as making sure they are right. That is, friendly local criticism isn’t usually directed at trying to show a wider audience flaws in your arguments. If your audience won’t notice a flaw, your friendly local critics have little incentive to point it out.

If your audience cared about flaws in your arguments, they’d prefer to hear you in a context where they can expect to hear motivated capable outside critics point out flaws. Not your close associates or friends, or people from shared institutions via which you could punish them for overly effective criticism. Then when the flaws your audience hears about are weak, they can have more confidence that your arguments are strong.

And even if your audience only cared about the appearance of caring about flaws in your argument, they’d still want to hear you matched with apparently motivated capable critics. Or at least have their associates hear that such matching happens. Critics would likely be less motivated and capable in this case, but at least there’d be a fig leaf that looked like good outside critics matched with your presented arguments.

So when you see people presenting arguments without even a fig leaf of the appearance of outside critics being matched with presented arguments, you can reasonably conclude that this audience doesn’t really care much about appearing to care about hidden flaws in the arguments it hears. And if you are the one presenting arguments, and if you didn’t try to ensure available critics, then others can reasonably conclude that you don’t care much about persuading your audience that your argument lacks hidden flaws.

Now this criticism approach is often muddled by the question of which kinds of critics are in fact motivated and capable. Often “critics” are used who don’t in fact have much relevant expertise, or who have incentives that are opaque to the audience. And prediction markets can be seen as a robust solution to this problem. Every bet is an interaction between two sides who each implicitly criticize the other. Both are clearly motivated to be accurate, and have clear incentives to only participate if they are capable. Of course prediction market critics typically don’t give as much detail to explain the flaws they see. But they do make clear that they see a flaw.


Me At NIPS Workshop

Tomorrow I’ll present on prediction markets and disagreement, in Montreal at the NIPS Workshop on Transactional Machine Learning and E-Commerce. A video will be available later.


The Puzzle Of Persistent Praise

We often praise and criticize people for the things they do. And while we have many kinds of praise, one very common type (which I focus on in this post) seems to send the message “what you did was good, and it would be good if more of that sort of thing were done.” (Substitute “bad” for “good” to get the matching critical message.)

Now if it would be good to have more of some act, then that act is a good candidate for something to subsidize more. And if most people agreed that this sort of act deserved more subsidy, then politicians should be tempted to run for office on the platform that they will increase the actual subsidy given to that kind of act. After all, if we want more of some kind of acts, why don’t we try to better reward those acts? And so good acts shouldn’t long remain with an insufficient subsidy. Or bad acts with an insufficient tax.

But in fact we seem to have big categories of acts which we consistently praise for being good, and where this situation persists for decades or centuries. Think charity, innovation, or artistic or sport achievement. Our political systems do not generate much political pressure to increase the subsidies for such things. Subsidy-increasing proposals are not even common issues in elections. Similarly, large categories of acts are consistently criticized, yet few politicians run on platforms proposing to increase taxes on such acts.

My best interpretation of this situation is that while our words of praise give the impression that we think that most people would agree that the acts we praise are good, and should be more common, we don’t really believe this. Either we think that the acts signal impressive or praise-worthy features, but shouldn’t be more common, or we think such acts should be more common, but we also see large opposing political coalitions who disagree with our assessment.

That is, my best guess is that when we look like we are praising acts for promoting a commonly accepted good, we are usually really praising impressiveness, or we are joining in a partisan battle on what should be seen as good.

Because my explanation is cynical, many people count it as “extraordinary”, and think powerful extraordinary evidence must be mustered before one can reasonably suggest that it is plausible. In contrast, the usual self-serving idealistic explanations people give for their behavior are ordinary, and therefore can be accepted at face value without much evidence at all being offered in their defense. People get mad at me for even suggesting cynical theories in short blog posts, where large masses of extraordinary evidence have not been mustered. I greatly disagree with this common stacking of the deck against cynical theories.

Even so, let us consider some seven other possible explanations of this puzzle of persistent praise (and criticism). And in the process make what could have been a short blog post considerably longer.


Beware Status Arrogance

Imagine that you are expert in field A, and a subject in field B comes up at a party. You know that there may be others at the party who are expert in field B. How reluctant does this make you to openly speculate about this topic? Do you clam up and only cautiously express safe opinions, or do you toss out the thoughts that pop into your head as if you knew as much about the subject as anyone?

If you are like most people, the relative status of fields A and B will likely influence your choice. If the other field has higher status than yours, you are more likely to be cautious, while if the other field has lower status than yours, you are more likely to speculate freely. In both cases your subconscious will have made good guesses about the likely status consequences to you if an expert in B were to speak up and challenge your speculations. At some level you would know that others at the party are likely to back whoever has the higher status, even if the subject is within the other person’s area of expertise.

But while you are likely to be relatively safe from status losses, you should know that you are not safe from being wrong. When people from different fields argue about something within one of their areas of expertise, that expert is usually right, even when the other field has higher status. Yes people from your field may on average be smarter and harder-working, and your field may have contributed more to human progress. Even so, people who’ve studied more about the details of something usually know more about it.


Conflicting Abstractions

My last post seems an example of an interesting general situation: when abstractions from different fields conflict on certain topics. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, mostly from economics, and abstractions drawn from computer practice and elsewhere used by Bostrom, Yudkowsky, and many other futurists.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since both X and not X can’t be true, each field would likely see this as a threat to its good reputation. If they were forced to accept the existence of the conflict, then they’d likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so would be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other and misunderstanding what X and not X mean to the other side. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings on what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, often patrons are interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige, they usually prefer people with the most prestige in each field over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within each field.

Anticipating this whole scenario, people will usually avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status conflict with the other field. And even then you’d have to waste some time studying a field that your field doesn’t respect. Even if you win the fight you might lose prestige in your field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore relevant details. Usually that is useful, but sometimes that leads to errors when the dropped details are especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions from different fields.


Bias Is A Red Queen Game

It takes all the running you can do, to keep in the same place. The Red Queen.

In my last post I said that as “you must allocate a very limited budget of rationality”, we “must choose where to focus our efforts to attend carefully to avoiding possible biases.” Some objected, seeing the task of overcoming bias as like lifting weights to build muscles. Scott Alexander compared it to developing habits of good posture and lucid dreaming:

If I can train myself to use proper aikido styles of movement even when I’m doing something stupid like opening a door, my body will become so used to them that they will be the style I default to when my mind is otherwise occupied. .. Lucid dreamers offer some techniques for realizing you’re in a dream, and suggest you practice them even when you are awake, especially when you are awake. The goal is to make them so natural that you could (and literally will) do them in your sleep. (more)

One might also compare with habits like brushing your teeth regularly, or checking that your fly isn’t unzipped. There are indeed many possible good habits, and some related to rationality. And I encourage you all to develop good habits.

What I object to is letting yourself think that you have sufficiently overcome bias by collecting a few good mental habits. My reason: the task of overcoming bias is a Red Queen game, i.e., one against a smart, capable, and determined rival, not a simple dumb obstacle.

There are few smart determined enemies trying to dirty your teeth, pull your fly down, mess your posture, weaken your muscles, or keep you unaware that you are dreaming. Nature sometimes happens to block your way in such cases, but because it isn’t trying hard to do so, it takes only modest effort to overcome such obstacles. And as these problems are relatively simple and easy, an effective strategy to deal with them doesn’t have to take much context into account.

For a contrast, consider the example of trying to invest to beat the stock market. In that case, it isn’t enough to just be reasonably smart and attentive, and avoid simple biases like not deciding when very emotional. When you speculate in stocks, you are betting against other speculators, and so can only expect to win if you are better than others. If you can’t reasonably expect to have better info and analysis than the average person on the other side of your trades, you shouldn’t bet at all, but instead just take the average stock return, by investing in index funds.

Trying to beat the stock market is a Red Queen game against a smart determined opponent who is quite plausibly more capable than you. Other examples of Red Queen games are poker, and most competitive contests like trying to win at sports, music, etc. The more competitive a contest, the more energy and attention you have to put in to have a chance at winning, and the more you have to expect to specialize to have a decent chance. You can’t just develop good general athletic habits to win at all sports, you have to pick the focus sport where you are going to try to win. And for all the non-focus sports, you might play them for fun sometimes, but you shouldn’t expect to win against the best.

Overcoming bias is also a Red Queen game. Your mind was built to be hypocritical, with more conscious parts of your mind sincerely believing that they are unbiased, and other less conscious parts systematically distorting those beliefs, in order to achieve the many functional benefits of hypocrisy. This capacity for hypocrisy evolved in the context of conscious minds being aware of bias in others, suspecting it in themselves, and often sincerely trying to overcome such bias. Unconscious minds evolved many effective strategies to thwart such attempts, and they usually handily win such conflicts.

Given this legacy, it is hard to see how your particular conscious mind has much of a chance at all. So if you are going to create a fighting chance, you will need to try very hard. And this trying hard should include focusing a lot, so you can realize gains from specialization. Just as you’d need to pay close attention and focus well to have much of a chance at beating the hedge funds and well-informed expert speculators who you compete with in stock markets.

In stock markets, the reference point for “good enough” is set by the option to just take the average via an index fund. If using your own judgement will do worse than an index fund, you might as well just take that fund. In overcoming bias, a reference point is set by the option to just accept the estimates of others who are also trying to overcome bias, but who focus on that particular topic.

Yes you might do better than you otherwise would have if you use a few good habits of rationality. But doing a bit better in a Red Queen game is like bringing a knife to a gunfight. If those good habits make you think “I’m a rationalist,” you might think too highly of yourself, and be reluctant to just take the simple option of relying on the estimates of others who try to overcome their biases and focus on those particular topics. After all, refusing to defer to others is one of our most common biases.

Remember that the processes inside you that bias your beliefs are many, varied, subtle, and complex. They express themselves in different ways on different topics. It is far from sufficient to learn a few simple generic tricks that avoid a few simple symptoms of bias. Your opponent is putting a lot more work into it than that, and you will need to do so as well if you are to have much of a chance. When you play a Red Queen game, go hard or go home.


Disagreement Is Far

Yet more evidence that it is far mental modes that cause disagreement:

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. (more; paper; HT Elliot Olds)

The question “why” evokes a far mode, while the question “how” evokes a near mode.


Reason, Stories Tuned for Contests

Humans have a capacity to reason, i.e., to find and weigh reasons for and against conclusions. While one might expect this capacity to be designed to work well for a wide variety of types of conclusions and situations, our actual capacity seems to be tuned for more specific cases. Mercier and Sperber:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. … Poor performance in standard reasoning tasks is explained by the lack of argumentative context. … People turn out to be skilled arguers (more)

That is, our reasoning abilities are focused on contests where we already have conclusions that we want to support or oppose, and where particular rivals give conflicting reasons. I’d add that such abilities also seem tuned to win over contest audiences by impressing them, and by making them identify more with us than with our rivals. We also seem eager to visibly hear argument contests, in addition to participating in such contests, perhaps to gain exemplars to improve our own abilities, to signal our embrace of social norms, and to exert social influence as part of the audience who decides which arguments win.

Humans also have a capacity to tell stories, i.e., to summarize sets of related events. Such events might be real and past, or possible and future. One might expect this capacity to be designed to summarize well a wide variety of event sets. But, as with reasoning, we might similarly find that our actual story abilities are tuned for the more specific case of contests, where the stories are about ourselves or our rivals, especially where either we or they are suspected of violating social norms. We might also be good at winning over audiences by impressing them and making them identify more with us, and we may also be eager to listen to gain exemplars, signal norms, and exert influence.

Consider some forager examples. You go out to find fire wood, and return two hours later, much later than your spouse expected. During a hunt someone shot an arrow that nearly killed you. You don’t want the band to move to new hunting grounds quite yet, as your mother is sick and hard to move. Someone says something that indirectly suggests that they are a better lover than you.

In such examples, you might want to present an interpretation of related events that persuades others to adopt your favored views, including that you are able and virtuous, and that your rivals are unable and ill-motivated. You might try to do this via direct arguments, or more indirectly via telling a story that includes those events. You might even work more indirectly, by telling a fantasy story where the hero and his rival have suspicious similarities to you and your rival.

This view may help explain some (though hardly all) puzzling features of fiction:

  • Most of our real life events, even the most important ones like marriages, funerals, and choices of jobs or spouses, seem too boring to be told as stories.
  • Compared to real events, even important ones, stories focus far more on direct conscious conflicts between people, and on violations of social norms.
  • Compared to real people, fictional characters have more extreme features, and stronger correlations between their good features.
  • Compared to real events, fictional events are far more easily predicted by character motivations, and by assuming a just world.