Tag Archives: Disagreement

Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates on how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two-page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn?


Economists Rarely Say “Nothing But”

Imagine someone said:

Those physicists go too far. They say conservation of momentum applies exactly at all times to absolutely everything in the universe. And yet they can’t predict whether I will raise my right or left hand next. Clearly there is more going on than their theories can explain. They should talk less and read more literature. Maybe then they’d stop saying immoral things like Earth’s energy is finite.

Sounds silly, right? But many literary types really don’t like economics (in part due to politics), and they often try to justify their dislike via a similar critique. They say that we economists claim that complex human behavior is “nothing but” simple economic patterns. For example, in the latest New Yorker magazine, journalist and novelist John Lanchester tries to make such a case in an article titled:

Can Economists and Humanists Ever Be Friends? One discipline reduces behavior to elegantly simple rules; the other wallows in our full, complex particularity. What can they learn from each other?

He starts by focusing on our book Elephant in the Brain. He says we make reasonable points, but then go too far:

The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. … Erving Goffman’s “The Presentation of Self in Everyday Life,” or … Pierre Bourdieu’s masterpiece “Distinction” … are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.

This intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). … Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve … or mb=mc. … These are powerful tools, which can be taken too far.

You might think that Lanchester would support his claim that we overreach by pointing to particular large claims and then offering evidence that they are false in particular ways. Oddly, you’d be wrong. (Our book mentions no math or rules of any sort.) He actually seems to accept most specific claims we make, even pretty big ones:

Many of the details of Hanson and Simler’s thesis are persuasive, and the idea of an “introspective taboo” that prevents us from telling the truth to ourselves about our motives is worth contemplating. … The writers argue that the purpose of medicine is as often to signal concern as it is to cure disease. They propose that the purpose of religion is as often to enhance feelings of community as it is to enact transcendental beliefs. … Some of their most provocative ideas are in the area of education, which they believe is a form of domestication. … Having watched one son go all the way through secondary school, and with another who still has three years to go, I found that account painfully close to the reality of what modern schooling is like.

While Lanchester does argue against some specific claims, these are not claims that we actually made. For example:

“The Elephant in the Brain”… has moments of laughable wrongness. We’re told, “Maya Angelou … managed not to woo Bill Clinton with her poetry but rather to impress him—so much so that he invited her to perform at his presidential inauguration in 1993.” The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.

But we said nothing like “Angelou’s career amounts to nothing more than.” Saying that she impressed Clinton with her poetry is not remotely to imply there was “nothing more” to her career. Also:

More generally, Hanson and Simler’s emphasis on signalling and unconscious motives suggests that the most important part of our actions is the motives themselves, rather than the things we achieve. … The last sentence of the book makes the point that “we may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” With that one observation, acknowledging that the consequences of our actions are more important than our motives, the argument of the book implodes.

We emphasize “signalling and unconscious motives” because that is the topic of our book. We don’t ever say motives are the most important part of our actions, and as he notes, in our conclusion we suggest the opposite. Just as a book on auto repair doesn’t automatically claim auto repair to be the most important thing in the world, a book on hidden motives needn’t claim motives are the most important aspect of our lives. And we don’t.

In attributing “overreach” to us, Lanchester seems to rely most heavily on a quick answer I gave in an interview, where Tyler Cowen asked me to respond “in as crude or blunt terms as possible”:

Wait, though—surely signalling doesn’t account for everything? Hanson … was asked to give a “short, quick and dirty” answer to the question of how much human behavior “ultimately can be traced back to some kind of signalling.” His answer: “In a rich society like ours, well over ninety per cent.” … That made me laugh, and also shake my head. … There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.

That quote is not from our book, and it comes from a context where you shouldn’t expect it to be easy to see exactly what was meant. And saying that a signaling motive is on average one of the strongest (if often unconscious) motives in an area of life is to say that this motive importantly shapes some key patterns of behavior in this area of life; it is not remotely to claim that this fact explains most of the details of human behavior in this area! So shaping key patterns in 90% of areas explains far less than 90% of all behavior details. Saying that signaling is an important motive doesn’t at all say that human behavior is “nothing more” than signaling. Other motives contribute, we vary in how honest and conscious we are of each motive, there are usually a great many ways to signal any given thing in any given context, and many different cultural equilibria can coordinate individual behavior. There remains plenty of room for complexity, as people like Goffman and Bourdieu illustrate.
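
To make the arithmetic behind that distinction concrete, here is a toy calculation; the within-area figure is invented purely for illustration, and is not an estimate from the book or the interview:

# Toy calculation: "a top motive in 90% of areas" vs. "explains 90% of details".
# The 0.15 figure below is a made-up placeholder, not a claim from the book.
share_of_areas = 0.90          # areas of life where signaling is a top motive
detail_share_within = 0.15     # hypothetical share of behavioral detail explained there
print(share_of_areas * detail_share_within)  # 0.135, i.e. about 13% of details, far less than 90%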

Saying that an abstraction is important doesn’t say that the things to which it applies are “nothing but” that abstraction. For example, conservation of momentum applies to all physical behavior, yet it explains only a tiny fraction of the variance in the behavior of physical objects. Natural selection applies to all species, yet most species details must be explained in other ways. If most roads try to help people get from points A to B, that simple fact is far from sufficient to predict where all the roads are. The fact that a piece of computer code is designed to help people navigate roads explains only a tiny fraction of which characters are where in the code. Financial accounting applies to nearly 100% of firms, yet it explains only a small fraction of firm behavior. All people need air and food to survive, and will have a finite lifespan, and yet these facts explain only a tiny fraction of their behavior.

Look, averaging over many people and contexts there must be some strongest motive overall. Economists might be wrong about what that is, and our book might be wrong. But it isn’t overreach or oversimplification to make a tentative guess about it, and knowing that strongest motive won’t let you explain most details of human behavior. As an analogy, consider that every nation has a largest export commodity. Knowing this commodity will help you understand something about this nation, but it isn’t remotely reasonable to say that a nation is “nothing more” than its largest export commodity, nor to think this fact will explain most details of behavior in this nation.

There are many reasonable complaints one can make about economics. I’ve made many myself. But this complaint that we “overreach” by “reducing complexity to simple rules” seems to me mostly rhetorical flourish without substance. For example, most models we fit to data have error terms to accommodate everything else that we’ve left out of that particular model. We economists are surely wrong about many things, but to argue that we are wrong about a particular thing you’ll actually need to talk about details related to that thing, instead of waving your hands in the general direction of “complexity.”


How Deviant Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen this in computer science (CS) and AI as well, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor seeing one big earthquake change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending. If your theory is wrong do we get to find out about that at all before the world does.

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
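
As a minimal sketch of the comparison I have in mind (the citation counts and field labels below are fabricated placeholders; real per-paper citation data would be needed), one could rescale each field’s counts by its mean, as in the quoted paper, and then compare a simple lumpiness measure such as the share of citations going to the top few percent of papers:

# Sketch of the proposed test. All numbers below are fabricated placeholders.

def rescaled(citations):
    # Divide by the field's mean citation count, as in the Science paper quoted
    # above, so that distributions from different fields can be overlaid.
    mean = sum(citations) / len(citations)
    return [c / mean for c in citations]

def top_share(citations, frac=0.1):
    # Crude lumpiness measure: share of all citations going to the top `frac`
    # of papers. (This particular measure happens to be unchanged by rescaling.)
    ranked = sorted(citations, reverse=True)
    k = max(1, int(round(len(ranked) * frac)))
    return sum(ranked[:k]) / sum(ranked)

field_a = [0, 1, 1, 2, 3, 5, 8, 40, 300]   # hypothetical ML papers
field_b = [0, 1, 2, 2, 4, 6, 9, 12, 20]    # hypothetical comparison field

print(top_share(rescaled(field_a)), top_share(rescaled(field_b)))
# A much larger value for one field would indicate a lumpier citation
# distribution there; my expectation is that real data shows no such gap.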

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.
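
For what it’s worth, here is a toy simulation of that saving hypothesis (the distributions and the citation rule are arbitrary choices, made only for illustration): two fields whose underlying advance “sizes” differ greatly in lumpiness, but whose citations depend only on within-field rank, end up with identical citation lumpiness.

# Toy simulation of the rank-only hypothesis above; all choices are arbitrary.
import random

random.seed(0)
n = 10_000

sizes_a = [random.lognormvariate(0, 0.5) for _ in range(n)]  # mildly lumpy advance sizes
sizes_b = [random.paretovariate(1.1) for _ in range(n)]      # very lumpy advance sizes

def citations_by_rank(sizes):
    # Assign citations purely by within-field rank of size: the r-th largest
    # advance gets n/r citations (a Zipf-like rule, chosen arbitrarily).
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i])
    cites = [0.0] * len(sizes)
    for rank, i in enumerate(order, start=1):
        cites[i] = len(sizes) / rank
    return cites

def top_share(xs, frac=0.01):
    # Share of the total going to the top `frac` of items.
    ranked = sorted(xs, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

print(top_share(sizes_a), top_share(sizes_b))            # size lumpiness differs a lot
print(top_share(citations_by_rank(sizes_a)),
      top_share(citations_by_rank(sizes_b)))             # citation lumpiness is identical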

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?


Our Book’s New Ground

In today’s Wall Street Journal, Matthew Hutson, author of The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane, reviews our new book The Elephant in the Brain. He starts and ends with obligatory but irrelevant references to Trump. Quotes from the rest:

The book builds on centuries of writing about self-deception. … I can’t say that the book covers new ground, but it is a smart synthesis and offers several original metaphors. People self-deceive about lots of things. We overestimate our ability to drive. We conveniently forget who started an argument. … Much of what we do, including our most generous behavior, the authors say, is not meant to be helpful. We are, like many other members of the animal kingdom, competitively altruistic—helpful in large part to earn status. … Casual conversations, for instance, often trade in random information. But the point is not to trade facts for facts; what you are actually doing, the book argues, is showing off so people can evaluate your intellectual versatility. …

The authors take particular interest in large-scale social issues and institutions, showing how systems of collective self-deception help explain the odd behavior we see in art, charity, education, medicine, religion and politics. Why do people vote? Not to strengthen the republic. …. Instead, we cheer for our team and participate as a signal of loyalty, hoping for the benefits of inclusion. In education, as many economists have argued, learning is ancillary to accreditation and status. … In many areas of medicine, they note, increased care does not improve outcomes. People offer it to broadcast helpfulness, or demand it to demonstrate how much support they have from others.

“The Elephant in the Brain” is refreshingly frank and penetrating, leaving no stone of presumed human virtue unturned. The authors do not even spare themselves. … It is accessibly erudite, deftly deploying essential technical concepts. … Still, the authors urge hope. … There are ways to leverage our hidden motives in the pursuit of our ideals. The authors offer a few suggestions. … Unfortunately, the book devotes only a few pages to such solutions. “The Elephant in the Brain” does not judge us for hiding selfish motives from ourselves. And to my mind, given that we will always have selfish motives, keeping them concealed might even provide a buffer against naked strife. (more)

All reasonable, except maybe for “can’t say that the book covers new ground.” Yes, scholars of self-deception like Hutson will find plausible both our general thesis and most of our claims about particular areas of life. And yes, those specific claims have almost all been published before. Even so, I bet most policy experts will call our claims on their particular area “surprising” and even “extraordinary”, and judge that we have not offered sufficiently extraordinary evidence in support. I’ve heard education policy experts say this about Bryan Caplan’s new book, The Case Against Education. And I’ve heard medicine policy experts say this about our medicine claims, and political system experts say this about our politics claims.

In my view, the key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable. That is, there’s an intellectual contribution to make by arguing together for a large set of related contrarian-to-experts claims. This is what I suggest is original about our book.

I expect that experts in each policy area X will be much more skeptical about our claims on X than about our claims on the other areas. You might explain this by saying that our arguments are misleading, and only experts can see the holes. But I instead suggest that policy experts in each X are biased because clients prefer them to assume the usual stories. Those who hire education policy experts expect them to talk about better learning the material, and so on. Such biases are weaker for those who study motives and self-deception in general.

Hutson has one specific criticism:

The case for medicine as a hidden act of selfishness may have some truth, but it also has holes. For example, the book does not address why medical spending is so much higher in the U.S. than elsewhere—do Americans care more than others about health care as a status symbol?

We do not offer our thesis as an explanation for all possible variations in these activities! We say that our favored motive is under-acknowledged, but we don’t claim that it is the only motive, nor that motive variations are the only way to explain behavioral variation. The world is far too big and complex for one simple story to explain it all.

Finally, I must point out one error:

“The Elephant in the Brain,” a book about unconscious motives. (The titular pachyderm refers not to the Republican Party but to a metaphor used in 2006 by the social psychologist Jonathan Haidt, in which reason is the rider on the elephant of emotion.)

Actually it is a reference to the common idea of “the elephant in the room”, a thing we can all easily see but refuse to admit is there. We say there’s a big one regarding how our brains work.


When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace most all human workers. While the assumption of brain-emulation-based AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worse failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.


Automatic Norms

Some new ideas I want to explain start with a 2000 paper on Taboo Tradeoffs. (See also newer stuff.) So I’ll review that paper in this post, and then I’ll explain my new ideas in the next post.

In Experiment 2 of the 2000 paper, each of 228 subjects was asked to respond to one of 8 scenarios, created by crossing three binary alternatives. All the scenarios involved:

Robert, the key decision maker, was described as the Director of Health Care Management at a major hospital who confronted a “resource allocation decision.”

Robert was either asked to make a tragic tradeoff, where two sacred values conflicted, or a taboo tradeoff, where a sacred value was in conflict with a non-sacred value. The tragic tradeoff:

Robert can either save the life of Johnny, a five year old boy who needs a liver transplant, or he can save the life of an equally sick six year old boy who needs a liver transplant. Both boys are desperately ill and have been on the waiting list for a transplant but because of the shortage of local organ donors, only one liver is available. Robert will only be able to save one child.

The taboo tradeoff:

Robert can save the life of Johnny, a five year old who needs a liver transplant, but the transplant procedure will cost the hospital $1,000,000 that could be spent in other ways, such as purchasing better equipment and enhancing salaries to recruit talented doctors to the hospital. Johnny is very ill and has been on the waiting list for a transplant but because of dire shortage of local organ donors, obtaining a liver will be expensive. Robert could save Johnny’s life, or he could use the $1,000,000 for other hospital needs.

Robert was said to either find this decision easy or difficult:

“Robert sees his decision as an easy one, and is able to decide quickly,” or “Robert finds this decision very difficult, and is only able to make it after much time, thought, and contemplation.”

Finally, Robert was said to have chosen to save Johnny, or to have chosen otherwise. Subjects were asked to rate Robert’s decision and describe their feelings about it in 8 ways. They were also asked to make 3 decisions on actions regarding Robert: whether to dismiss him from his job, to punish him, and to end a friendship with him. Using factor analysis, all these responses were combined into an outrage factor, mainly weighted on 6 of the ratings and feelings, and a punish factor, mainly weighted on the 3 actions. These factors were on a 1-7 point scale. The average factor values for the eight possible scenarios show the following patterns.

In the case of a taboo tradeoff, Robert is less likely to be punished for saving Johnny than for not. We have a strong social norm against trading sacred things for non-sacred things, and Robert is to be punished if he violates this taboo. When Robert makes a tragic tradeoff, it is as if he must violate a norm no matter what he does. In this case, he is punished much more if he treats this as an easy choice; norm violation must be done in a serious, thoughtful manner.

However, when Robert makes a taboo tradeoff, he is punished much more if he treats this as a difficult choice. In fact, he is punished almost as much for saving Johnny after much thought as he is for not saving Johnny after little thought! It is worse to do the wrong thing after careful thought than after little thought.
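
For concreteness, here is a minimal sketch of how the eight cell averages behind those comparisons would be computed from raw responses; the column names are hypothetical, the rows are fabricated, and simple averaging stands in for the paper’s factor-analysis scoring:

# Sketch only: fabricated rows, hypothetical column names, crude averaging in
# place of the paper's factor-analysis loadings.
import pandas as pd

responses = pd.DataFrame({
    "tradeoff":     ["taboo", "taboo", "tragic", "tragic"],  # taboo vs. tragic tradeoff
    "difficulty":   ["easy", "hard", "easy", "hard"],        # easy vs. difficult decision
    "saved_johnny": [True, False, True, False],              # did Robert save Johnny?
    "outrage":      [2.1, 5.3, 3.0, 2.4],                    # 1-7 outrage factor score
    "punish":       [1.8, 4.9, 2.7, 2.2],                    # 1-7 punish factor score
})

# Average factor scores for each of the 2 x 2 x 2 = 8 scenario cells
# (with real data, all eight cells would be populated).
cell_means = (
    responses
    .groupby(["tradeoff", "difficulty", "saved_johnny"])[["outrage", "punish"]]
    .mean()
)
print(cell_means)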

Years ago, this result helped me to understand the political reaction when in 2003 my Policy Analysis Market (PAM) was accused of trying to let people bet on terrorist deaths.

PAM appeared to some to cross a moral boundary, which can be paraphrased roughly as “none of us should intend to benefit when some of them hurt some of us.” (While many of us do in fact benefit from terrorist attacks, we can plausibly argue that we did not intend to do so.) So, by the taboo tradeoff effect, it was morally unacceptable for anyone in Congress or the administration to take a few days to think about the accusation. The moral calculus required an immediate response.

Of course, no one at high decision-making levels knew much about a $1 million research project within a $1 trillion government budget. If PAM had been a $1 billion project, representatives from districts where that money was spent might have considered defending the project. But there was no such incentive for a $1 million project (spent mostly in California and London); the safe political response was obvious: repudiate PAM, and everyone associated with it. (more)

Today, however, my interest is in what these results imply for our awareness of where our norm feelings come from, and how much they are shared by others. These results suggest that when we face a choice, the categorization of some of the options as norm violating is supposed to come to us fast, and with little thought or doubt. Unless we notice that all of the options violate similarly important norms, we are supposed to be sure of which options to reject, without needing to consult with other people, and without needing to try to frame the choice in multiple ways, to see if the relevant norms are subject to framing effects. We are to presume that framing effects are unimportant, and that everyone agrees on the relevant norms and how they are to be applied.

Apparently the legal principle of “ignorance of the law is no excuse” isn’t just a convenient way to avoid incentives not to know the law, and to avoid having to inquire about who knows what laws. Regarding norms more generally, including legal norms, we seem to think “ignorance of the norms isn’t plausible; you must have known.”

If this description is correct, it seems to me to have remarkable implications. Which I’ll discuss in my next post. (Unless of course you figure them all out in the comments now.)


Why Be Contrarian?

While I’m a contrarian in many ways, I think it fair to call my ex-co-blogger Eliezer Yudkowsky even more contrarian than I. And he has just published a book, Inadequate Equilibria, defending his contrarian stance against what he calls “modesty”, illustrated in these three quotes:

  1. I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall; … to be mistaken about issues on which there is expert disagreement about half of the time. …
  2. On most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment. …
  3. We all ought to [avoid disagreeing with] each other as a matter of course. … You can’t trust the reasoning you use to think you’re more meta-rational than average.

In contrast, Yudkowsky claims that his book readers can realistically hope to become successfully contrarian in these 3 ways:

  1. 0-2 lifetime instances of answering “Yes” to “Can I substantially improve on my civilization’s current knowledge if I put years into the attempt?” …
  2. Once per year or thereabouts, an answer of “Yes” to “Can I generate a synthesis of existing correct contrarianism which will beat my current civilization’s next-best alternative, for just myself. …
  3. Many cases of trying to pick a previously existing side in a running dispute between experts, if you think that you can follow the object-level arguments reasonably well and there are strong meta-level cues that you can identify. … [This] is where you get the fuel for many small day-to-day decisions, and much of your ability to do larger things.

Few would disagree with his claim #1 as stated, and it is claim #3 that applies most often to readers’ lives. Yet most of the book focuses on claim #2, that “for just myself” one might annually improve on the recommendation of our best official experts.


When to Parrot, Pander, or Think for Yourself

Humans are built to argue and persuade. We tend to win when we endorse arguments that others accept, and win even more when we can generate new arguments that others will accept. This is both because people notice who originated the arguments that they accept, and because this ability helps us to move others toward opinions that favor our policies and people.

All of this is of course relative to some community who evaluates our arguments. Sometimes the larger world defers to a community of experts, and then it is that community who you must persuade. In other cases, people insist on deciding for themselves, and then you have to persuade them directly.

Consider three prototypical discussions:

  1. Peers in a car, talking about which route to drive to reach an event for which they are late.
  2. Ordinary people, talking about whether and how black holes leak information.
  3. Parents, talking about how Santa Claus plans to deliver presents on Christmas Eve.

In case #1, it can be reasonable for peers to think sincerely, in the sense of looking for arguments to persuade themselves, and then offering those same arguments to each other. It can be reasonable here to speak clearly and directly, to find and point out flaws in others’ arguments, and to believe that the net result is to find better approximations to truth.

In case #2, most people are wise to mostly parrot what they hear experts say on the topic. The more they try to make up their own arguments, or even to adapt arguments they’ve heard to particular contexts, the more they risk looking stupid. Especially if experts respond. On such topics, it can pay to be abstract and somewhat unclear, so that one can never be clearly shown to be wrong.

In case #3, parents gain little from offering complex new arguments, or even finding flaws in the usual kid arguments, at least when only parents can understand these. Parents instead gain from finding variations on the usual kid arguments that kids can understand, variations that get kids to do what parents want. Parents can also gain from talking at two levels at once, one discussion at a surface visible to kids, and another at a level visible only to other parents.

These three discussions illustrate the three general cases, where your main audience is 1) about as capable, 2) more capable, or 3) less capable than you in generating and evaluating arguments on the topic. Your optimal argumentation strategy depends on which of these cases you find yourself in.

When your audience is about the same as you, you can most usefully “think for yourself”, in the sense that if an argument persuades you it will probably persuade your audience as well, at least if it uses popular premises. So you can be more comfortable in thinking sincerely, searching for arguments that will persuade you. You can be eager to find fault with arguments and criticize them, and to listen to such criticisms to see if they persuade you. And you can more trust the final consensus after your discussion.

The main exception here is where you tend to accept premises that are unpopular with your audience. In this case, you can either disconnect with that audience, not caring to try to persuade them, or you can focus less on sincerity and more on persuasion, seeking arguments that will convince them given their different premises.

When your audience is much more capable than you, then you can’t trust your own argument generation mechanism. You must instead mostly look to what persuades your superiors and try to parrot that. You may well fail if you try to adapt standard arguments to particular new situations, or if you try to evaluate detailed criticisms of those arguments. So you try to avoid such things. You instead seek generic positions that don’t depend as much on context, expressed in not entirely clear language that lets you decide at the last minute what exactly you meant.

When your audience is much less capable than you, then arguments that persuade you tend to be too complex to persuade them. So you must instead search for arguments that will persuade them, even if they seem wrong to you. That is, you must pander. You are less interested in rebuttals or flaws that are too complex to explain to your audience, though you are plenty interested in finding flaws that your audience can understand. You are also not interested in finding complex fixes and solutions to such flaws.

You must attend not only to the internal coherence of your arguments, but also to the many particular confusions and mistakes to which your audience is inclined. You must usually try arguments out to see how well they work on your audience. You may also gain by using extra layers of meaning to talk more indirectly to impress your more capable sub-audience.

What if, in addition to persuading best, you want to signal that you are more capable? To show that you are not less capable than your audience, you might go out of your way to show that you can sincerely, on the fly and without assistance, and without studying or practicing on your audience, construct new arguments that plausibly apply to your particular context, and identify flaws with new arguments offered by others. You’d be sincerely argumentative.

To suggest that you are more capable than your audience, you might instead show that you pay attention to the detailed mistakes and beliefs of your audience, and that you first try arguments out on them. You might try to show that you are able to find arguments by which you could persuade that audience of a wide range of conclusions, not just the conclusions you privately find the most believable. You might also show that you can simultaneously make persuasive arguments to your general audience, while also discreetly making impressive comments to a sub-audience that is much more capable. Sincerely “thinking for yourself” can look bad here.

In a world where people follow the strategies I’ve outlined above, the quality of general opinion on each topic probably depends most strongly on something near the typical capability of the relevant audience that evaluates arguments on that topic. (I’d guess roughly the 80th percentile matters most on average.) The less capable mostly parrot up, and the more capable mostly pander down. Thus firms tend to be run in ways that make sense to that rank employee or investor. Nations are run in ways that make sense to that rank citizen. Stories make sense to that rank reader/viewer. And so on. Competition between elites pandering down may on net improve opinion, as may selective parroting from below, though neither seems clear to me.

If we used better institutions for key decisions (e.g., prediction/decision markets), then the audience that matters might become much more capable, to our general benefit. Alas, that initial worse audience usually decides not to use better institutions. And in a world of ems, typical audiences also become much more capable, to their benefit.


Steven Levy’s Generic Skepticism

Steven Levy praises TED to the heavens:

Not every talk is one for the ages, but the TED News Feed is in sync with Ezra Pound’s insufficiently famous quote that “literature is news that stays news.” In TED’s world, at least when it’s working well, the news that stays news is science — as well as the recognizable truths of who we are as a species, and what we are capable of, good or evil. .. Much of the TED News Feed was an implicit rebuke of the politics of the day. Generally, TED speakers are believers in the scientific method. There were even a couple of talks this year whose very point was that there is a thing called truth.

Well, except for my talk:

Still, the TED News Feed was not free of potentially fake news, albeit of the scientific kind. A speaker named Robin Hanson (a George Mason professor and a guru of prediction markets) gave what he described as a data-driven set of predictions of a world where super-intelligent robots would rule the earth after forcing humans to “retire.” It seemed to me that he simply labeled his sci-fi fantasy as non-fiction. Plus, when I checked his website later, I learned he “invented a new form of government called futarchy,” and that his favorite musician was Vangelis. (When I later asked Anderson about that talk, he explained, without necessarily endorsing my criticism, that it was “a roll of the dice,” and that generally it was a good thing when talks took risks.)

That is all of Steven Levy’s critique; there is no more. He actually came up to me after my talk, saying something generically skeptical. I pointed out that I’d written a whole book full of analysis detail, and I asked him to pick out anything specific I had said that he doubted, offering to explain my reasoning on that. But he instead just walked away.

Maybe Mr. Levy comes from a part of science I’m not familiar with, but in the parts of science I know, a critic of a purported scientific analysis is expected to offer specific criticisms, in addition to any general negative rating. The 130 words he devoted here was enough space to at least hint at which of my claims he doubted. And for the record, in my books and talks I’m very clear that my analysis is theory-driven, not data-driven, and that it is conditional on my key technology assumptions.


A Book Response Prediction

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. Schopenhauer, 1788-1860.

My next book won’t come out until January, and reviews of it will appear in the weeks and months after that. But now, a year in advance, I want to make a prediction about the main objections that will be voiced. In particular, I predict that two of the most common responses will be a particular opposing pair.

If you recall, our book is about hidden motives (a.k.a., “X is not about Y”):

We’re afraid to acknowledge the extent of our own selfishness. .. The Elephant in the Brain aims to .. blast floodlights into the dark corners of our minds. .. Why do humans laugh? Why are artists sexy? Why do people brag about travel? Why do we so often prefer to speak rather than listen?

Like all psychology books, The Elephant in the Brain examines many quirks of human cognition. But this book also ventures where others fear to tread: into social critique. The authors show how hidden selfish motives lie at the very heart of venerated institutions like Art, Education, Charity, Medicine, Politics, and Religion.

I predict that one of the most common responses will be something like “extraordinary claims require extraordinary evidence.” While the evidence we offer is suggestive, for claims as counterintuitive as ours on topics as important as these, evidence should be held to a higher standard than the one our book meets. We should shut up until we can prove our claims.

I predict that another of the most common responses will be something like “this is all well known.” Wise observers have known and mentioned such things for centuries. Perhaps foolish technocrats who only read in their narrow literatures are ignorant of such things, but our book doesn’t add much to what true scholars and thinkers have long known.

These responses are opposing in the sense that it is hard to find a set of positions from which one could endorse both responses.

I have not phrased this prediction so as to make it very easy to check later if it’s right. I have also not offered a specific probability. Given the many ambiguities here, this seems right to me.
