Why Be Contrarian?

While I’m a contrarian in many ways, I think it fair to call my ex-co-blogger Eliezer Yudkowsky even more contrarian than I. And he has just published a book, Inadequate Equilibria, defending his contrarian stance against what he calls “modesty”, illustrated in these three quotes:

  1. I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall; … to be mistaken about issues on which there is expert disagreement about half of the time. …
  2. On most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment. …
  3. We all ought to [avoid disagreeing with] each other as a matter of course. … You can’t trust the reasoning you use to think you’re more meta-rational than average.

In contrast, Yudkowsky claims that his book’s readers can realistically hope to become successfully contrarian in these three ways:

  1. 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” …
  2. Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself.” …
  3. Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify. … [This] is where you get the fuel for many small day-to-day decisions, and much of your ability to do larger things.

Few would disagree with his claim #1 as stated, and it is claim #3 that applies most often to readers’ lives. Yet most of the book focuses on claim #2, that “for just myself” one might annually improve on the recommendations of our best official experts.

The main reason to accept #2 is that there exist what we economists call “agency costs” and other “market failures” that result in “inefficient equilibria” (which can also be called “inadequate”). Our best experts don’t try with their full efforts to solve your personal problems, but instead try to win the world’s somewhat arbitrary games, games that individuals just cannot change. Yudkowsky may not be saying anything especially original here about how broken the world can be, but his discussion is excellent, and I hope it will be widely read.

Yudkowsky gives some dramatic personal examples, but simpler examples can also make the point. For example, one can often use maps or a GPS to improve on official road signs saying which highway exits to use for particular destinations, as sign officials often placate local residents seeking less through-traffic. Similarly, official medical advisors tend to advise medical treatment too often relative to doing nothing, official education experts tend to advise education too often as a career strategy, official investment advisors suggest active investment too often relative to index funds, and official religion experts advise religion too often relative to non-religion. In many cases, one can see plausible system-level problems that could lower the quality of official advice, inducing these experts to try harder to impress and help each other than to help clients.

To explain how inadequate are many of our equilibria, Yudkowsky contrasts them with our most adequate institution: competitive speculative financial markets, where it is kind of crazy to expect your beliefs to be much more accurate than are market prices. He also highlights the crucial importance of competitive meta-institutions, for example lamenting that there is no place on Earth where one can pay to try out arbitrary new social institutions. (Alas he doesn’t endorse my call to fix much of the general problem of disagreement via speculative markets, especially on meta topics. Like many others, he seems more interested in bets as methods of personal virtue than as institutional solutions.)

However, while understanding that systems are often broken can lead us to accept Yudkowsky’s claim #2 above, that doesn’t obviously support his claim #3, nor undercut the modesty that he disputes. After all, reasonable people could just agree that, by acting directly and avoiding broken institutions, individuals can often beat the best institutionally-embedded experts. For example, individuals can gain by investing more in index funds, and by choosing less medicine, school, and religion than experts advise. So the existence of broken institutions can’t by itself explain why disagreement exists, nor why readers of Yudkowsky’s book should reasonably expect to consistently pick who is right among disagreeing experts.

Thus Yudkowsky needs more to argue against modesty, and for his claim #3. Even if it is crazy to disagree with very adequate financial institutions, and not quite so crazy to disagree with less adequate institutions, that doesn’t imply that it is actually reasonable to disagree with anyone about anything.

His book says less on this topic, but it does say some. First, Yudkowsky accepts my summary of the rationality of disagreement, which says that agents who are mutually aware of being meta-rational (i.e., trying to be accurate and getting how disagreement works) should not be able to foresee their disagreements. Even when they have very different concepts, info, analysis, and reasoning errors.

If you and a trusted peer don’t converge on identical beliefs once you have a full understanding of one another’s positions, at least one of you must be making some kind of mistake.
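
To make this claim concrete, here is a minimal sketch of my own (not from the book, with made-up numbers): two Bayesian agents who share a common prior and fully disclose their private evidence must end up with identical posteriors, so any disagreement that survives full mutual understanding points to a mistake somewhere.

# Illustrative sketch only, not from the book. Two Bayesian agents share a
# common prior over a binary hypothesis H, and each holds private evidence
# summarized as likelihood ratios P(e | H) / P(e | not H), assumed
# conditionally independent. Once each fully understands the other's
# evidence, their posteriors are identical.

def posterior(prior, likelihood_ratios):
    """Return P(H | evidence) from P(H) and a list of likelihood ratios."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

common_prior = 0.5
alice_evidence = [3.0, 0.8]  # hypothetical likelihood ratios Alice has seen
bob_evidence = [0.5, 4.0]    # hypothetical likelihood ratios Bob has seen

print(posterior(common_prior, alice_evidence))  # ~0.71: they disagree at first
print(posterior(common_prior, bob_evidence))    # ~0.67

pooled = alice_evidence + bob_evidence          # full mutual disclosure
print(posterior(common_prior, pooled))          # ~0.83, now identical for both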

Yudkowsky says he has applied this result, in the sense that he’s learned to avoid disagreeing with two particular associates that he greatly respects. But he isn’t much inclined to apply this toward the other seven billion humans on Earth; his opinion of their meta-rationality seems low. After all, if they were as meta-rational as he and his two great associates, then “the world would look extremely different from how it actually does.” (It would disagree a lot less, for example.)

Furthermore, Yudkowsky thinks that he can infer his own high meta-rationality from his details:

I learned about processes for producing good judgments, like Bayes’s Rule, and this let me observe when other people violated Bayes’s Rule, and try to keep to it myself. Or I read about sunk cost effects, and developed techniques for avoiding sunk costs so I can abandon bad beliefs faster. After having made observations about people’s real-world performance and invested a lot of time and effort into getting better, I expect some degree of outperformance relative to people who haven’t made similar investments. … [Clues to individual meta-rationality include] using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning.

That those with low meta-rationality may not know their condition, while those with high meta-rationality can know theirs, is illustrated by these examples:

Those who dream do not know they dream, but when you are awake, you know you are awake. … If a rock wouldn’t be able to use Bayesian inference to learn that it is a rock, still I can use Bayesian inference to learn that I’m not.

Now yes, the meta-rationality of some might be low, that of others might be high, and the high might see real clues allowing them to correctly infer their different condition, clues that the low also have available to them but for some reason neglect to apply, even though the fact of disagreement should call the issue to their attention. And yes, those clues might in principle include knowing about Bayes’ rule, sunk costs, debiasing, experiments, or math. (They might also include many other clues that Yudkowsky lacks, such as relevant experience.)

Alas, Yudkowsky doesn’t offer empirical evidence that these possible clues of meta-rationality are in fact clues in practice, that some correctly apply these clues much more reliably than others, or that the magnitudes of these effects are large enough to justify the size of disagreements that Yudkowsky suggests as reasonable. Remember, to justifiably disagree on which experts are right in some dispute, you’ll have to be more meta-rational than those disputing experts, not just than the general population. So to me, these all remain open questions on disagreement.

In an accompanying essay, Yudkowsky notes that while he might seem to be overconfident, in many lab tests of cognitive bias,

around 10% of undergraduates fail to exhibit this or that bias … So the question is whether I can, with some practice, make myself as non-overconfident as the top 10% of college undergrads. This… does not strike me as a particularly harrowing challenge. It does require effort.

Though perhaps Yudkowsky isn’t claiming as much as he seems. He admits that allowing yourself to disagree because you think you see clues of your own superior meta-rationality goes badly for many, perhaps most, people:

For many people, yes, an attempt to identify contrarian experts ends with them trusting faith healers over traditional medicine. But it’s still in the range of things that amateurs can do with a reasonable effort, if they’ve picked up on unusually good epistemology from one source or another.

Even so, Yudkowsky endorses anti-modesty for his book’s readers, whom he sees as better than average, and also as too underconfident on average (even though most people are overconfident). His advice is especially targeted at those who aspire to his claim #1:

If you’re trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You’ll have to make use of information that isn’t widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground.

This seems to me to be a good reason to take a big anti-modest stance. If you are serious about trying hard to make a big advance somewhere, then you must get into the habit of questioning the usual accounts, and thinking through arguments for yourself in detail. Since your chance of making a big advance is much higher if you are in fact more meta-rational than average, you have a better chance of achieving one if you assume your own high meta-rationality within the thinking devoted to that attempt. Perhaps you could do even better if you limited this habit to the topic areas near where you have a chance of making a big advance. But maybe that sort of mental separation is just too hard.

So far this discussion of disagreement and meta-rationality has drawn nothing from the previous discussion of inefficient institutions in a broken world. And without such a connection, this book is really two separate books, tied perhaps by a mood affiliation.

Yudkowsky doesn’t directly make a connection, but I can make some guesses. One possible connection applies if official experts tend to deny that they sit in inadequate equilibria, or that their claims and advice are compromised by such inadequacy. When these experts are high status, others might avoid contradicting their claims. In this situation, those who are more willing to make cynical claims about a broken world, or more willing to disagree with high-status people, can be on average more correct, relative to those who insist on taking more idealistic stances toward the world and toward those of high status.

In particular, such cynical contrarians can be correct about when individuals can do better via acting directly than indirectly via institution-embedded experts, and they can be correct when siding with low-status against high-status experts. This doesn’t seem sufficient to me to justify Yudkowsky’s more general anti-modesty, which for example seems to support often picking high-status experts over low-status ones. But it can at least go part of the way.

We have a few other clues to Yudkowsky’s position. First, while he explains the impulse toward modesty via status effects, he claims to personally care little about status:

Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself) then you might still see that people with status get respect, and hunger for that respect.

Second, note that if the reason you can beat our best experts is that you can act directly, while they must win via social institutions, then this shouldn’t help much when you must also act via social institutions. So it is telling that in two examples, Yudkowsky thinks he can do substantially better than the rest of the world, even when he must act via social institutions.

First, he claims that the MIRI research institute he helped found “can do better than academia” because “We were a small research institute that sustains itself on individual donors. … we had deliberately organized ourselves to steer clear of [bad] incentives.” Second, he finds it “conceivable” that the world’s rate of innovation might increase noticeably if, at another small organization that he helped to found, the “annual budget grew 20x, and then they spent four years iterating experimentally on techniques, and then a group of promising biotechnology grad students went through a year of CFAR training.”

Putting this all together, my best guess is that Yudkowsky sees himself, his associates, and his best readers as only moderately smarter and more knowledgeable than others; what really distinguishes them is that they care much more about the world and truth. So much so that they are willing to make cynical claims, disagree with high-status people, and sacrifice their careers. This is the key element of meta-rationality they see as lacking in the others with whom they feel free to disagree. Those others are mainly trying to win the usual status games, while he and his associates are after truth.

Alas, this is a familiar story from a great many sides in a great many disputes. Each says they are right because the others are less sincere and more selfish. While most such sides must be wrong in these claims, no doubt some people do care more about the world and truth than others. Furthermore, those special people may see detailed signs telling them this fact, while others lack those signs but fail to sufficiently attend to that lack.

And we again come back to the core hard question in the rationality of disagreement: how can you tell if you are neglecting key signs about your (lack of) meta-rationality? But alas, other than just claiming that such clues exist, Yudkowsky doesn’t offer much analysis to help us advance on this hard problem.

Eliezer Yudkowsky’s new book Inadequate Equilibria is really two disconnected books, one (larger) book that does an excellent job of explaining how individuals acting directly can often improve on the best advice of experts embedded in broken institutions, and another (smaller) book that largely fails to explain why one can realistically hope to consistently pick the correct side among disputing experts. I highly recommend the first book, even if one has to sometimes skim through the second book to get to it.

Of course, if you are trying hard to make a big advance somewhere, then it can make sense to just assume you are better, at least within the scope of the topic areas where you might make your big advance. But for other topic areas, and for everyone else, you should still wonder how sure you can reasonably be that you have in fact not neglected clues showing that you are less meta-rational than those with whom you feel free to disagree. This remains the big open question in the rationality of disagreement. It is a question to which I hope to return someday.

  • Michael Vassar

    If you care about the truth of whether you care about the truth more than most others do, you almost certainly care about the truth more than most others do. If not, you should claim to anyway when it’s convenient, and also, why should others care if what you do works out well?

    • https://entirelyuseless.wordpress.com/ entirelyuseless

      One problem is that it’s easy to confuse caring more about whether you care about the truth with caring more about whether people think you care about the truth.

      You only have so much care to go around. So if you’re excited about saving the world, you probably don’t care much about truth.

      • Michael Vassar

        Silly nihilist. Truth is for reality.
        If you don’t care about truth then as such you don’t care about anything, but I guess it must hurt less, and who am I to judge?

      • https://entirelyuseless.wordpress.com/ entirelyuseless

        I care about truth. I am saying that is different from caring about accomplishing things. If you think they are the same, you are mistaken.

      • Michael Vassar

        Said every ideology ever.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        My recollection is the opposite: most ideologies identify the True and the Good. (Don’t get me wrong. In a dispute between the True and the Good, my conscious values tell me to side with the True. But their nonidentity is inconvenient – not convenient – for ideologies.)

      • Michael Vassar

        If you understood what ideology means, you wouldn’t care what beliefs ideologies claim to assert.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Yeah, right.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Can you not imagine, if not perceive, situations where knowing the truth conflicts with realizing some altruistic value?

      • Michael Vassar

        Not if you’re using the right decision theory, e.g. one that doesn’t negotiate with terrorists.

      • Tobi Alafin

        Truth != Good. If you can’t distinguish them, it’s a failure of *your* imagination.

  • efalken

    I haven’t read the entire book, just sketches, but it seems immoderately moderate, deferring too much. Just as Feynman noted that science starts with a ‘guess’ that is then worked out, many good ideas start with an objectively improbable truth, but one that comes from experience and intuition, and that one finds fun and fruitful to pursue. Hopefully one will be aware of inconsistencies and refutations, but you can’t be too harsh with these because every good theory has them (e.g., efficient markets, downward sloping demand curves, the multiverse).

    As children, we should generally defer to experts because we are so ignorant, but eventually most of us glom onto a core set of beliefs we find especially important, ones that most others find either wrong or not that important. This leads us to various ‘tribes’ where others share these core beliefs, and to the extent we are correct, our tribe prospers, collectively and individually. Unfortunately, the benefits of having a tribe (emotional, professional) are sufficiently large that switching becomes costly as we get older, which makes it harder to reject these core beliefs over time. But that just emphasizes the importance of making good choices, and having good parents and friends.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Personal concern for truth is over-rated. Over-concern (fear of error) can be as intellectually debilitating as lack of concern.

    Truth is a societal product. (See the argumentative theory of reason in The Enigma of Reason.) Individuals need only be concerned with being honest and smart.

  • Paul Rain

    “I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall”

    Perhaps if these ‘things’ are things that require knowledge. But that assumes that someone who is above average in IQ would not attempt to learn anything about a subject, and be able to catch up faster than most, before engaging with it. Obviously this is the case for most Hollyweird celebrities and TV scientists. But I would hope not, for Yudkowsky.

  • Tim Josling

    > competitive speculative financial markets, where it is kind of crazy to expect your beliefs to be much more accurate than are market prices.

    It is very common in the rationality world to overstate the efficient markets hypothesis. I think it is important to understand its limits.

    Efficient markets depend on the ability of smarter players to use arbitrage to correct wrong prices. As an example, if IBM’s prospects relative to MSFT’s are worse in the short term than the market thinks, one can easily ‘short’ IBM and go ‘long’ MSFT. So you should not expect a naive retail investor to be able to win by exploiting relative mispricings in large, highly liquid stocks.

    But arbitrage is often not possible, and so prices are often far from rational values. For an excellent academic discussion see “The Limits of Arbitrage” by Andrei Shleifer and Robert W. Vishny, The Journal of Finance, Vol. 52, No. 1 (Mar. 1997), pp. 35–55, readily available online.

    Some areas where arbitrage fails:

    1. Real estate and housing markets because shorting is not possible and there are too many properties and not enough experts. Note that a high proportion of all investable wealth is in the form of real estate.

    2. Small illiquid stocks because shorting is too expensive and risky (stocks borrowed to put on the short position can be taken away, usually at the worst possible moment). Also because of high trading costs, especially for large fund managers. Note that the vast majority of stocks are small stocks.

    3. Long-term mispricings. Because financial intermediaries are judged by short-term performance, long-term mispricings are hard to fix. Mispricings can get worse in the short term, and the fund manager who bets against them can get fired. Many did in the lead-up to the dot-com crash.

    So markets as a whole can go far from fundamental values and the market cannot correct it in the short term.

    Keynes put it roughly this way: markets can stay crazy longer than you can stay solvent.

    I am not suggesting it is easy to beat the market but one should not overstate the rationality of financial markets.

    • http://overcomingbias.com RobinHanson

      Eliezer doesn’t overstate this at all. And my short comment couldn’t (and shouldn’t) make so many disclaimers.

      • Tim Josling

        You are right that Eliezer’s statement was more nuanced.

        I do very often see in the rationality community the meme of perfectly efficient markets, and my comment was intended as an antidote to that, for example if people took your statement literally.

      • http://twitter.com/jordangray Jordan Gray

        Appreciated—I took the statement literally and benefited from your caveating of it.

      • Peter David Jones

        EY does keep quoting “a tautology that for every loaf of bread bought there must be a loaf of bread sold, and therefore supply is always and everywhere equal to demand”, even though it doesn’t demonstrate that all markets are efficient, and he doesn’t believe that all markets are efficient. Why? Is this some kind of standing joke amongst economists that the rest of us are not party to?

  • Pingback: You Have the Right to Think | Don't Worry About the Vase

  • Silent Cal

    Would I be accurate to summarize your objection as, “You can expect to outperform the experts with respect to your own life, because you have better incentives to be right; but you don’t have better incentives with respect to questions not about your own life”?

    • http://overcomingbias.com RobinHanson

      Close. If you don’t obviously have better incentives, I’ll wonder what reason you have to say your opinion is more reliable.

  • David Simmons

    When you write “Similarly, official medical advisors tend to advise medical treatment too often relative to doing nothing”, you seem to be saying that if a doctor says you should get medical treatment, then your confidence in this claim should be less (ceteris paribus) than what is expressed by the doctor’s statement. But doesn’t this violate Aumann’s theorem? How do you reconcile this? Are you saying that the doctor has a high chance of being dishonest because of his incentives? Is that enough to explain the disagreement?

    To put it another way, a doctor seems to be a person rather than an institution, so saying that you should disagree with a doctor does seem to “imply that it is actually reasonable to disagree with [someone] about [something].”

    • http://overcomingbias.com RobinHanson

      The doctor is trying to win in a broken system, which often induces him to give inaccurate advice. So you can be justified in disagreeing with him, if you think your incentives are less broken.

  • Pingback: The Right to Be Wrong | Otium

  • Steve Z

    1. E.Y. has a belief, unsupported by empirical evidence, that he’s a member of an “elect” better-able to divine the truth of the world than the average Joe.
    2. E.Y. comes from a Rabbinical line, all of whom had a belief set similar to (1).
    3a. E.Y. attributes HIS membership in the elect to _mind tools_, and various commitments to something he calls “rationality.”
    3b. E.Y.’s Rabbinical forbears presumably attributed THEIR membership in the elect to being the chosen people, and to their course of Talmudic studies.
    4. E.Y. rejects his forbears’ claim to being in the elect, because they’ve not studied the right things; yet his Rabbinical forbears would likely have a similar view of E.Y.
    5a. Given that E.Y.’s hypothesis of ascendance through rationality training and meta-commitments isn’t based on empirical evidence, and given reasonable priors, it’s at least as likely that both the belief, and any sustaining ‘evidence’ for it, are based on genetic propensities mildly mediated by culture, and not on rationality training or anything of the sort.
    5b. The fact E.Y. comes from a Rabbinical line with similar beliefs, in broad outline, supports this hypothesis.

  • Pingback: More Dakka | Don't Worry About the Vase

  • Pingback: December 2017 Newsletter - Machine Intelligence Research Institute