Tag Archives: Disagreement

We Agree On So Much

In a standard Bayesian model of beliefs, an agent starts out with a prior distribution over a set of possible states, and then updates to a new distribution, in principle using all the info that agent has ever acquired. Using this new distribution over possible states, this agent can in principle calculate new beliefs on any desired topic. 

Regarding their belief on a particular topic then, an agent’s current belief is the result of applying their info to update their prior belief on that topic. And using standard info theory, one can count the (non-negative) number of info bits that it took to create this new belief, relative to the prior belief. (The exact formula is ∑_i p_i log2(p_i/q_i), where p_i is the new belief, q_i is the prior, and i ranges over possible answers to this topic question.)
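
As a concrete illustration (not from the original post), here is a minimal Python sketch of that bit count, using made-up belief numbers for a hypothetical three-answer topic:

```python
import math

def info_bits(new_belief, prior):
    """Relative entropy, in bits, of a new belief vs. a prior:
    sum_i p_i * log2(p_i / q_i)."""
    return sum(p * math.log2(p / q) for p, q in zip(new_belief, prior) if p > 0)

# Hypothetical three-answer topic: moving from an even prior to 90%
# confidence in one answer embodies about one bit of info.
prior = [1/3, 1/3, 1/3]
posterior = [0.90, 0.05, 0.05]
print(round(info_bits(posterior, prior), 2))  # ~1.02
```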

How much info an agent acquires on a topic is closely related to how confident they become on that topic. Unless a prior starts out very confident, high confidence later can only come via updating on a great many info bits. 

Humans typically acquire vast numbers of info bits over their lifetime. By one estimate, we are exposed to 34GB per day. Yes, as a practical matter we can’t remotely make full use of all this info, but we do use a lot of it, and so our beliefs do over time embody a lot of info. And even if our beliefs don’t reflect all our available info, we can still talk about the number of bits that are embodied in any given level of confidence an agent has on a particular topic.

On many topics of great interest to us, we acquire a huge volume of info, and so become very confident. For example, consider how confident you are at the moment about whether you are alive, whether the sun is shining, whether you have ten fingers, etc. You are typically VERY confident about such things, because you have access to a great many relevant bits.

On a great many other topics, however, we hardly know anything. Consider, for example, many details about the nearest alien species. Or even about the life of your ancestors ten generations back. On such topics, if we put in sufficient effort we may be able to muster many very weak clues, clues that can push our beliefs in one direction or another. But being weak, these clues don’t add up to much; our beliefs after considering such info aren’t that different from our previous beliefs. That is, on these topics we have less than one bit of info. 

Let us now collect a large broad set of such topics, and ask: what distribution should we expect to see over the number of bits per topic? This number must be positive; for many familiar topics it is much, much larger than one, while for other large sets of topics it is less than one.

The distribution most commonly observed for numbers that must be positive yet range over many orders of magnitude is: lognormal. And so I suggest that we tentatively assume a (large-sigma) lognormal distribution over the number of info bits that an agent learns per topic. This may not be exactly right, but it should be qualitatively in the ballpark.  

One obvious implication of this assumption is: few topics have nearly one bit of info. That is, most topics are ones where either we hardly know anything, or where we know so much that we are very confident. 
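
To see why, here is a small simulation sketch. The parameters are assumed for illustration (a median of one bit and a large sigma), not estimated from any data; with such a spread, only a small sliver of topics lands within a factor of two of one bit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: median of 1 bit (mean 0 in log2 units) and a large
# spread (sigma of 10 doublings), so bits per topic span many orders of magnitude.
sigma_log2 = 10.0
bits = 2.0 ** rng.normal(loc=0.0, scale=sigma_log2, size=1_000_000)

near_one = np.mean((bits > 0.5) & (bits < 2.0))  # within a factor of 2 of one bit
tiny     = np.mean(bits < 0.1)                   # hardly know anything
huge     = np.mean(bits > 10.0)                  # very confident
print(f"near 1 bit: {near_one:.1%}, <0.1 bits: {tiny:.1%}, >10 bits: {huge:.1%}")
# With these assumed parameters, roughly 8% of topics land near one bit,
# while most sit far below or far above it.
```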

Note that these typical topics are not worth much thought, discussion, or work to cut biases. For example, when making decisions to maximize expected utility, or when refining the contribution that probabilities on one topic make to other topic probabilities, getting 10% of one’s bits wrong just won’t make much of a difference here. Changing 10% of 0.01 bits still leaves one’s probabilities very close to one’s prior. And changing 10% of a million bits still leaves one with very confident probabilities.

Only when the number of bits on a topic is of order unity do one’s probabilities vary substantially with 10% of one’s bits. These are the topics where it can be worth paying a fixed cost per topic to refine one’s probabilities, either to help make a decision or to help update other probability estimates. And these are the topics where we tend to think, talk, argue, and worry about our biases.

It makes sense that we tend to focus on pondering such “talkable topics”, where such thought can most improve our estimates and decisions. But don’t let this fool you into thinking we hardly agree on anything. For the vast majority of topics, we agree either that we hardly know anything, or that we quite confidently know the answer. We only meaningfully disagree on the narrow range of topics where our info is on the order of one bit, topics where it is in fact worth the bother to explore our disagreements. 

Note also that for these key talkable topics, making an analysis mistake on just one bit of relevant info is typically sufficient to induce large probability changes, and thus large apparent disagreements. And for most topics it is quite hard to think and talk without making at least one bit’s worth of error. Especially if we consume 34GB per day! So it’s completely to be expected that we will often find ourselves disagreeing on talkable topics at the level of a few bits.
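
One hedged way to see the size of a one-bit slip is to treat a bit of evidence on a yes/no topic as a factor of two on the odds; the probabilities below are just illustrative numbers, not from the post:

```python
def shift_by_bits(p, bits):
    """Shift a yes/no probability by `bits` bits of evidence:
    each bit multiplies the odds p/(1-p) by a factor of two."""
    odds = (p / (1 - p)) * (2.0 ** bits)
    return odds / (1 + odds)

# A one-bit analysis error barely dents extreme confidence, but it moves a
# middling "talkable topic" estimate a lot.
for p in (0.999999, 0.67, 0.55):
    print(f"{p} -> {shift_by_bits(p, -1):.6f}")
# 0.999999 -> 0.999998,  0.67 -> 0.503759,  0.55 -> 0.379310
```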

So maybe cut yourself and others a bit more slack about your disagreements? And maybe you should be more okay with our using mechanisms like betting markets to average out these errors. You really can’t be that confident that it is you who has made the fewest analysis errors. 

Huemer On Disagreement

Mike Huemer on disagreement:

I participated in a panel discussion on “Peer Disagreement”. … the other person is about equally well positioned for forming an opinion about that issue — e.g., about as well informed, intelligent, and diligent as you. … discussion fails to produce agreement … Should you just stick with your own intuitions/judgments? Should you compromise by moving your credences toward the other person’s credences? …

about the problem specifically of philosophical disagreement among experts (that is, professional philosophers): it seems initially that there is something weird going on, … look how much disagreement there is, … I think it’s not so hard to understand a lot of philosophical disagreement. … we often suck as truth-seekers: Bad motives: We feel that we have to defend a view, because it’s what we’ve said in print in the past. … We lack knowledge (esp. empirical evidence) relevant to our beliefs, when that knowledge is outside the narrow confines of our academic discipline. … We often just ignore major objections to our view, even though those objections have been published long ago. … Differing intuitions. Sometimes, there are just two or more ways to “see” something. …

You might think: “But I’m a philosopher too [if you are], so does that mean I should discount my own judgments too?” Answer: it depends on whether you’re doing the things I just described. If you’re doing most of those things, it’s not that hard to tell.

Philosophy isn’t really that different from most other topic areas; disagreement is endemic most everywhere. The main ways that it is avoided are via extreme restrictions on topic, or via strong authorities who can force others to adopt their views.

Huemer is reasonable here right up until his last five words. Sure we can find lots of weak indicators of who might be more informed and careful in general, and also on particular topics. Especially important are clues about whether a person listens well to others, and updates on the likely info value of others’ opinions.

But most everyone already knows this, and so typically tries to justify their disagreement by pointing to positive indicators about themselves, and negative indicators about those who disagree with them. If we could agree on the relative weight of these indicators, and act on them, then we wouldn’t actually disagree much. (Formally we wouldn’t foresee to disagree.)

But clearly we are severely biased in our estimates of these relative indicator weights, to favor ourselves. These estimates come to us quite intuitively, without needing much thought, and are typically quite confident, making us not very anxious about their errors. And we mostly seem to be quite sincere; we aren’t usually much aware that we might be favoring ourselves. Or if we are somewhat aware, we tend to feel especially confident that those others with whom we disagree are at least as biased as we. I see no easy introspective fix here.

The main way I know to deal with this problem is to give yourself much stronger incentives to be right: bet on it. As soon as you start to think about how much you’d be willing to bet, and at what odds, you’ll find yourself suddenly much more aware of the many ways you might be wrong. Yes, people who bet still disagree more than is accuracy-rational, but they are much closer to the ideal. And they get even closer as they start to lose bets and update their estimates re how good they are on what topics.

To Oppose Polarization, Tug Sideways

Just over 42% of the people in each party view the opposition as “downright evil.” … nearly one out of five Republicans and Democrats agree with the statement that their political adversaries “lack the traits to be considered fully human — they behave like animals.” … “Do you ever think: ‘we’d be better off as a country if large numbers of the opposing party in the public today just died’?” Some 20% of Democrats and 16% of Republicans do think [so]. … “What if the opposing party wins the 2020 presidential election. How much do you feel violence would be justified then?” 18.3% of Democrats and 13.8% of Republicans said [between] “a little” to “a lot.” (more)

Pundits keep lamenting our increasing political polarization. And their preferred fix seems to be to write more tsk-tsk op-eds. But I can suggest a stronger fix: pull policy ropes sideways. Let me explain.

Pundit writings typically recommend some policies relative to others. In polarized times such as ours, these policy positions tend to be relatively predictable given a pundit’s political value positions, i.e., the positions they share with their political allies relative to their political enemies. And much of the content of their writings works to clarify any remaining ambiguities, i.e., to explain why their policy position is in fact a natural result of political positions they share with their allies. So only people with evil values would oppose it. So readers can say “yay us, boo them”.

Twelve years ago I described this as a huge tug-o-war:

The policy world can be thought of as consisting of a few Tug-O-War “ropes” set up in [a] high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are “one of them,” you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it. (more)

To oppose this tendency, one idea is to encourage pundits to sometimes recommend policies that are surprising or the opposite of what their political positions might suggest. That is, go pull on the opposite side of a rope sometimes, to show us that you think for yourself, and aren’t driven only by political loyalty. And yes, doing this may help. But as the space of political values that we fight over is multi-dimensional, surprising pundit positions can often be framed as a choice to prioritize some values over others, i.e., as a bid to realign the existing political coalitions in value space. Yes, this may weaken the existing dominant political axis, but it may not do much to make our overall conversation less political.

Instead, I suggest that we encourage pundits to grab a policy tug-o-war rope and pull it sideways. That is, take positions that are perpendicular to the usual political value axes, in areas where one has not yet taken explicit value-oriented positions. For example, a pundit who has not yet taken a position on whether we should have more or less military spending might argue for more navy relative to army, and then insist that this is not a covert way to push a larger or smaller military. Most credibly by continuing to not take a position on overall military spending. (And by not coming from a navy family, for whom navy is a key value.)

Similarly, someone with no position on if we should punish crime more or less than we currently do might argue for replacing jail-based punishments with fines, torture, or exile. Or, given no position on more or less immigration, argue for a particular new system to decide which candidates are more worthy of admission. Or given no position on how hard we should work to compensate for past racism, argue for cash reparations relative to affirmative action.

Tugging policy ropes sideways will frustrate and infuriate loyalists who seek mainly to praise their political allies and criticize their enemies. Such loyalists will be tempted to assume the worst about you, and claim that you are trying to covertly promote enemy positions. And so they may impose a price on you for this stance. But to the extent that observers respect you, loyalists will pay a price for attacking you in this way, raising their overall costs of making everything political. And so on average by paying this price you can buy an overall intellectual conversation that’s a bit less political. Which is the goal here.

In addition, pulling ropes sideways is on average just a better way to improve policy. As I said twelve years ago:

If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy. On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled. (more)

Yes, there is a sense in which arguments for “sideways” choices do typically appeal to a shared value: “efficiency”. For example, one would typically argue for navy over army spending in terms of cost-effectiveness in military conflicts and deterrence. Or might argue for punishment via fines in terms of cost-effectiveness for the goals of deterrence or rehabilitation. But all else equal we all like cost-effectiveness; political coalitions rarely want to embrace blatant anti-efficiency positions. So the more our policy debates emphasize efficiency, the less politically polarized they should be.

Of course my suggestion here isn’t especially novel; most pundits are aware that they have the option to take the sort of sideways positions that I’ve recommended. Most are also aware that by doing so, they’d less enflame the usual political battles. Yet how often have you heard pundits protest that others falsely attributed larger value positions to them, when they really just tried to argue for cost-effectiveness of A over B using widely shared effectiveness concepts? That scenario seems quite rare to me.

So the main hope I can see here is of a new signaling equilibrium where people tug sideways and brag about it, or have others brag on their behalf, to show their support for cutting political polarization. And thereby gain support from an audience who wants to reward cutters. Which of course only works if enough pundits actually believe a substantial such audience exists. So what do you say, is there much of an audience who wants to cut political polarization?

Response to Weyl

To my surprise, thrice in his recent 80,000 hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me to represent a view that he dislikes. Yet, in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g. here) for exploring new institution ideas, but via working our way up from smaller to larger scale trials, and then only after we’ve seen success at smaller scales. Theory models are often among the smallest possible trials. 

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view.  Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to more favor “anti-democratic” views on thinking eliteness.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully, this is just a case of a possible misreading of what Weyl said, and he didn’t intend to relate futarchy or myself to such views.

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.

Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, who could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way so that they are likely to remember you, and be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.

Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views that are just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, or that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, whether to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec:  I did 4 follow up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention `Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking if the world would have been better off had Nazis won justifies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage on the Star is Born episode confirms this account to some extent.

Rationality Requires Common Priors

Late in November 2006 I started this blog, and a month later on Christmas eve I reported briefly on the official publication (after 8 rejections) of my paper Uncommon Priors Require Origin Disputes. That was twelve years ago, and now Google Scholar tells me that this paper has 17 cites, which is about 0.4% of my 3933 total cites, which I’d say greatly under-estimates its value.

Recently I had the good fortune to be invited to speak at the Rutgers Seminar on Foundations of Probability, and I took that opportunity to raise awareness about my old paper. Only about ten folks attended (a famous philosopher spoke nearby at the same time), but this video was taken:

In the video my slides are at times dim, but they can be seen sharp here. Let me now try to explain why my topic is important, and what is my result. Continue reading "Rationality Requires Common Priors" »

Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn? Continue reading "Sanctimonious Econ Critics" »

Economists Rarely Say “Nothing But”

Imagine someone said:

Those physicists go too far. They say conservation of momentum applies exactly at all times to absolutely everything in the universe. And yet they can’t predict whether I will raise my right or left hand next. Clearly there is more going on than their theories can explain. They should talk less and read more literature. Maybe then they’d stop saying immoral things like Earth’s energy is finite.

Sounds silly, right? But many literary types really don’t like economics (in part due to politics), and they often try to justify their dislike via a similar critique. They say that we economists claim that complex human behavior is “nothing but” simple economic patterns. For example, in the latest New Yorker magazine, journalist and novelist John Lanchester tries to make such a case in an article titled:

Can Economists and Humanists Ever Be Friends? One discipline reduces behavior to elegantly simple rules; the other wallows in our full, complex particularity. What can they learn from each other?

He starts by focusing on our book Elephant in the Brain. He says we make reasonable points, but then go too far:

The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. … Erving Goffman’s “The Presentation of Self in Everyday Life,” or … Pierre Bourdieu’s masterpiece “Distinction” … are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.

This intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). … Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve … or mb=mc. … These are powerful tools, which can be taken too far.

You might think that Lanchester would support his claim that we overreach by pointing to particular large claims and then offering evidence that they are false in particular ways. Oddly, you’d be wrong. (Our book mentions no math nor rules of any sort.) He actually seems to accept most specific claims we make, even pretty big ones:

Many of the details of Hanson and Simler’s thesis are persuasive, and the idea of an “introspective taboo” that prevents us from telling the truth to ourselves about our motives is worth contemplating. … The writers argue that the purpose of medicine is as often to signal concern as it is to cure disease. They propose that the purpose of religion is as often to enhance feelings of community as it is to enact transcendental beliefs. … Some of their most provocative ideas are in the area of education, which they believe is a form of domestication. … Having watched one son go all the way through secondary school, and with another who still has three years to go, I found that account painfully close to the reality of what modern schooling is like.

While Lanchester does argue against some specific claims, these are not claims that we actually made. For example:

“The Elephant in the Brain”… has moments of laughable wrongness. We’re told, “Maya Angelou … managed not to woo Bill Clinton with her poetry but rather to impress him—so much so that he invited her to perform at his presidential inauguration in 1993.” The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.

But we said nothing like “Angelou’s career amounts to nothing more than.” Saying that she impressed Clinton with her poetry is not remotely to imply there was “nothing more” to her career. Also:

More generally, Hanson and Simler’s emphasis on signalling and unconscious motives suggests that the most important part of our actions is the motives themselves, rather than the things we achieve. … The last sentence of the book makes the point that “we may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” With that one observation, acknowledging that the consequences of our actions are more important than our motives, the argument of the book implodes.

We emphasize “signalling and unconscious motives” because that is the topic of our book. We don’t ever say motives are the most important part of our actions, and as he notes, in our conclusion we suggest the opposite. Just as a book on auto repair doesn’t automatically claim auto repair to be the most important thing in the world, a book on hidden motives needn’t claim motives are the most important aspect of our lives. And we don’t.

In attributing “overreach” to us, Lanchester seems to rely most heavily on a quick answer I gave in an interview, where Tyler Cowen asked me to respond “in as crude or blunt terms as possible”:

Wait, though—surely signalling doesn’t account for everything? Hanson … was asked to give a “short, quick and dirty” answer to the question of how much human behavior “ultimately can be traced back to some kind of signalling.” His answer: “In a rich society like ours, well over ninety per cent.” … That made me laugh, and also shake my head. … There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.

That quote is not from our book, and is from a context where you shouldn’t expect it to be easy to see exactly what was meant. And saying that a signaling motive is on average one of the strongest (if often unconscious) motives in an area of life is to say that this motive importantly shapes some key patterns of behavior in this area of life; it is not remotely to claim that this fact explains most of the details of human behavior in this area! So shaping key patterns in 90% of areas explains far less than 90% of all behavior details. Saying that signaling is an important motive doesn’t at all say that human behavior is “nothing more” than signaling. Other motives contribute, we vary in how honest and conscious we are of each motive, there are usually a great many ways to signal any given thing in any given context, and many different cultural equilibria can coordinate individual behavior. There remains plenty of room for complexity, as people like Goffman and Bourdieu illustrate.

Saying that an abstraction is important doesn’t say that the things to which it applies are “nothing but” that abstraction. For example, conservation of momentum applies to all physical behavior, yet it explains only a tiny fraction of the variance in behavior of physical objects. Natural selection applies to all species, yet most species details must be explained in other ways. If most roads try to help people get from points A to B, that simple fact is far from sufficient to predict where all the roads are. The fact that a piece of computer code is designed help people navigate roads explains only a tiny fraction of which characters are where in the code. Financial accounting applies to nearly 100% of firms, yet it explains only a small fraction of firm behavior. All people need air and food to survive, and will have a finite lifespan, and yet these facts explain only a tiny fraction of their behavior.

Look, averaging over many people and contexts there must be some strongest motive overall. Economists might be wrong about what that is, and our book might be wrong. But it isn’t overreach or oversimplification to make a tentative guess about it, and knowing that strongest motive won’t let you explain most details of human behavior. As an analogy, consider that every nation has a largest export commodity. Knowing this commodity will help you understand something about this nation, but it isn’t remotely reasonable to say that a nation is “nothing more” than its largest export commodity, nor to think this fact will explain most details of behavior in this nation.

There are many reasonable complaints one can make about economics. I’ve made many myself. But this complaint that we “overreach” by “reducing complexity to simple rules” seems to me mostly rhetorical flourish without substance. For example, most models we fit to data have error terms to accommodate everything else that we’ve left out of that particular model. We economists are surely wrong about many things, but to argue that we are wrong about a particular thing you’ll actually need to talk about details related to that thing, instead of waving your hands in the general direction of “complexity.”

How Deviant Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor seeing one big earthquake change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending. If your theory is wrong do we get to find out about that at all before the world does.

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
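
As a sketch of the kind of comparison proposed here (not the Science paper’s exact method), one could rescale each paper’s citations by its field-and-year average and compare the spread of the rescaled counts across fields. The field labels and data source below are hypothetical placeholders:

```python
import numpy as np

def rescaled_log_spread(citations):
    """Divide each paper's citation count by the field-year average, then
    return the standard deviation of log10 rescaled counts. A markedly
    fatter spread in one field would suggest lumpier progress there."""
    c = np.asarray(citations, dtype=float)
    c = c[c > 0]              # drop uncited papers before taking logs
    return float(np.std(np.log10(c / c.mean())))

# Hypothetical usage: citations_by_field maps labels like "ML 2016" or
# "physics 2016" to lists of per-paper citation counts.
# for field, cites in citations_by_field.items():
#     print(field, round(rescaled_log_spread(cites), 2))
```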

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?
