Tag Archives: Disagreement

To Oppose Polarization, Tug Sideways

Just over 42% of the people in each party view the opposition as “downright evil.” … nearly one out of five Republicans and Democrats agree with the statement that their political adversaries “lack the traits to be considered fully human — they behave like animals.” … “Do you ever think: ‘we’d be better off as a country if large numbers of the opposing party in the public today just died’?” Some 20% of Democrats and 16% of Republicans do think [so]. … “What if the opposing party wins the 2020 presidential election. How much do you feel violence would be justified then?” 18.3% of Democrats and 13.8% of Republicans said [between] “a little” to “a lot.” (more)

Pundits keep lamenting our increasing political polarization. And their preferred fix seems to be to write more tsk-tsk op-eds. But I can suggest a stronger fix: pull policy ropes sideways. Let me explain.

Pundit writings typically recommend some policies relative to others. In polarized times such as ours, these policy positions tend to be relatively predictable given a pundit’s political value positions, i.e., the positions they share with their political allies relative to their political enemies. And much of the content of their writings works to clarify any remaining ambiguities, i.e., to explain why their policy position is in fact a natural result of political positions they share with their allies. So only people with evil values would oppose it, and readers can say “yay us, boo them”.

Twelve years ago I described this as a huge tug-o-war:

The policy world can [be] thought of as consisting of a few Tug-O-War “ropes” set up in [a] high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are “one of them,” you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it. (more)

To oppose this tendency, one idea is to encourage pundits to sometimes recommend policies that are surprising, or the opposite of what their political positions might suggest. That is, go pull on the opposite side of a rope sometimes, to show us that you think for yourself, and aren’t driven only by political loyalty. And yes, doing this may help. But as the space of political values that we fight over is multi-dimensional, surprising pundit positions can often be framed as a choice to prioritize some values over others, i.e., as a bid to realign the existing political coalitions in value space. Yes, this may weaken the existing dominant political axis, but it may not do much to make our overall conversation less political.

Instead, I suggest that we encourage pundits to grab a policy tug-o-war rope and pull it sideways. That is, take positions that are perpendicular to the usual political value axes, in areas where one has not yet taken explicit value-oriented positions. For example, a pundit who has not yet taken a position on whether we should have more or less military spending might argue for more navy relative to army, and then insist that this is not a covert way to push a larger or smaller military. Most credibly by continuing to not take a position on overall military spending. (And by not coming from a navy family, for whom navy is a key value.)

Similarly, someone with no position on whether we should punish crime more or less than we currently do might argue for replacing jail-based punishments with fines, torture, or exile. Or, given no position on more or less immigration, argue for a particular new system to decide which candidates are more worthy of admission. Or, given no position on how hard we should work to compensate for past racism, argue for cash reparations relative to affirmative action.
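To make the geometry here concrete, below is a minimal sketch, in Python with invented axes and numbers, of splitting a policy proposal into its pull along the dominant political axis and its "sideways" remainder:

```python
import numpy as np

# Hypothetical 3-D policy space; the axis and the proposal are made up.
# The dominant political axis is the direction the big coalitions tug along.
political_axis = np.array([1.0, 1.0, 0.0])
political_axis /= np.linalg.norm(political_axis)

# A pundit's proposed policy change, as a move in the same space.
proposal = np.array([0.2, -0.1, 0.9])

# Component of the proposal along the main tug-o-war rope ...
along = proposal.dot(political_axis) * political_axis
# ... and the "sideways" component, perpendicular to that rope.
sideways = proposal - along

print("along-axis pull:", np.round(along, 2))     # [0.05 0.05 0.  ]
print("sideways pull:  ", np.round(sideways, 2))  # [ 0.15 -0.15  0.9 ]
# A mostly-sideways proposal has a tiny along-axis component, so neither
# coalition can easily read it as a pull for the other side.
```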

Tugging policy ropes sideways will frustrate and infuriate loyalists who seek mainly to praise their political allies and criticize their enemies. Such loyalists will be tempted to assume the worst about you, and claim that you are trying to covertly promote enemy positions. And so they may impose a price on you for this stance. But to the extent that observers respect you, loyalists will pay a price for attacking you in this way, raising their overall costs of making everything political. And so on average, by paying this price you can buy an overall intellectual conversation that’s a bit less political. Which is the goal here.

In addition, pulling ropes sideways is on average just a better way to improve policy. As I said twelve years ago:

If, however, you actually want to improve policy, if you have a secure enough position to say what you like, and if you can find a relevant audience, then prefer to pull policy ropes sideways. Few will bother to resist such pulls, and since few will have considered such moves, you have a much better chance of identifying a move that improves policy. On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled. (more)

Yes, there is a sense in which arguments for “sideways” choices do typically appeal to a shared value: “efficiency”. For example, one would typically argue for navy over army spending in terms of cost-effectiveness in military conflicts and deterrence. Or might argue for punishment via fines in terms of cost-effectiveness for the goals of deterrence or rehabilitation. But all else equal we all like cost-effectiveness; political coalitions rarely want to embrace blatant anti-efficiency positions. So the more our policy debates emphasize efficiency, the less politically polarized they should be.

Of course my suggestion here isn’t especially novel; most pundits are aware that they have the option to take the sort of sideways positions that I’ve recommended. Most are also aware that by doing so, they’d less inflame the usual political battles. Yet how often have you heard pundits protest that others falsely attributed larger value positions to them, when they really just tried to argue for the cost-effectiveness of A over B using widely shared effectiveness concepts? That scenario seems quite rare to me.

So the main hope I can see here is of a new signaling equilibrium where people tug sideways and brag about it, or have others brag on their behalf, to show their support for cutting political polarization. And thereby gain support from an audience who wants to reward cutters. Which of course only works if enough pundits actually believe a substantial such audience exists. So what do you say, is there much of an audience who wants to cut political polarization?


Response to Weyl

To my surprise, thrice in his recent 80,000 Hours podcast interview with Robert Wiblin, Glen Weyl seems to point to me to represent a view that he dislikes. Yet in all three cases, these disliked views aren’t remotely close to views that I hold.

Weyl: The Vickrey Auction, … problem is he had this very general solution, but which doesn’t really make any sense like in any practical case. And he pointed out that that was true. But everybody was so enamored of the fact that his was generally correct, that they didn’t try to find like versions of it that might actually make sense. They basically just said, “Oh, that’s correct in general,” and then either you were like Tyler and you’re like … just dismiss that whole thing and you’re like, “Ah, too abstract.” Or you were like, you know, Robin Hanson and you just said, “Let’s just do it! Let’s just do it!” You know? And like neither of those was really convincing.

The Vickrey auction was taught to me in grad school, but I’ve never been a big fan, because it looked vulnerable to collusion (also a concern re Weyl’s quadratic voting proposals), and because I’d heard of problems in related lab experiments. I’ve long argued (e.g., here) for exploring new institution ideas, but via working our way up from smaller to larger scale trials, moving up only after we’ve seen success at smaller scales. Theory models are often among the smallest possible trials.
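For readers who don’t know the mechanism under discussion, here is a minimal sketch of a single-item Vickrey (second-price) auction; the bidder names and values are invented. The collusion worry is visible in the code: the winner’s price is set entirely by the second-highest bid, so a winner who induces rivals to shade their bids down cuts her own price.

```python
def vickrey_winner(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price paid).

    The highest bidder wins but pays only the second-highest bid,
    which is why bidding one's true value is a dominant strategy.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid sets the price
    return winner, price

print(vickrey_winner({"alice": 120, "bob": 100, "carol": 90}))
# ('alice', 100): alice wins, but pays bob's bid, not her own.
```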

Weyl: What I definitely am against … is something which builds a politics that only wants to speak or only respects nerdy and mathematically inclined ways of approaching issues. I think that’s a huge mistake. … the rationalist community … has … obsessive focus on communicating primarily with and relating socially primarily to people who also agree that whatever set of practices they think defined rationality are the way to think about everything. And I think that, that is extremely dangerous … because I think A, it’s not actually true that most useful knowledge that we have comes from those methods. … And B, it’s fundamentally anti-democratic as an attitude … because if you think that the only people who have access to the truth are philosopher kings, it becomes hard to escape the conclusion that philosopher kings should rule. …

Weyl: So, Robin Hanson has this book, Elephant In The Brain, which has some interesting things in it, but I think ultimately is a long complaint that people aren’t interested in talking about politics in the way that I am interested in talking about politics. And that really annoys me. I would submit that, to someone that has that attitude, you should say, “Perhaps consider talking about politics in a different way. You might find that other people might find it easier to speak to you that way.” 

Weyl: There’s something called neo-reaction, … a politics that is built around the notion that basically there should be a small elite of people who own property and control power through that property. … Even though most people in this rationalist community would reject that kind of politics, I think there’s a natural tendency, if you have that set of social attitudes, to have your politics drift in that direction.

Our book, The Elephant in the Brain, has ten application chapters, only one of which is on politics, and that chapter compares key patterns of political behavior to two theories of why we are political: to change policy outcomes or to show loyalty to political allies. Neither theory is about being nerdy, mathematical, or “rational”, and most of the evidence we point to is not on styles of talking, nor do we recommend any style of talking.

Furthermore, every style of thinking or talking is compatible with the view that some people think much better than others, and also with the opposite view. Nerdy or math styles are not different in this regard, so I see no reason to expect people with those styles of thinking to be more inclined toward “anti-democratic” views on eliteness in thinking.

And of course, it remains possible that some people actually are much better at thinking than others. (See also two posts on my responses to other critics of econ style thinking.)

Wiblin: I guess in that case it seems like Futarchy, like Robin Hanson’s idea where people vote for what they want, but then bet on what the outcomes will be, might work quite well because you would avoid exploitation by having distributed voting power, but then you would have these superhuman minds would predict what the outcomes of different policies or different actions would be. Then they would be able to achieve whatever outcome was specified by a broad population. …

Weyl: I have issues with Futarchy, but I think what I really object to, it’s less even the worldview I’m talking about. I think really, the problem I have is that there is a rhetoric out there of trying to convince people that they’re insufficient and that everything should be the private property of a small number of people for this reason when in fact, if it was really the case that those few people were so important, and great, and powerful, they wouldn’t need to have all this rhetoric to convince other people of it. People would just see it, they would get it. 

Futarchy has nothing to do with the claim that everything should be the private property of a small number of people, nor have I ever made any such claim. Hopefully this is just a case of my misreading what Weyl said, and he didn’t intend to relate futarchy or myself to such views.
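For context, here is a minimal sketch of the futarchy decision rule (“vote on values, bet on beliefs”); the market prices below are invented, and “welfare” stands in for whatever measure voters have fixed:

```python
# Conditional prediction markets price expected welfare under each option;
# trades conditional on the option not adopted are called off, so traders
# are only on the hook in the world that actually occurs.
market_forecasts = {        # E[welfare | option adopted], per the markets
    "adopt_proposal": 0.62,
    "keep_status_quo": 0.55,
}

# The futarchy rule: adopt whichever option the markets forecast
# to produce higher welfare.
chosen = max(market_forecasts, key=market_forecasts.get)
print("Adopted:", chosen)   # Adopted: adopt_proposal
```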

Added 3p: Weyl & I have been having a Twitter conversation on this, which you can find from here.


Have A Thing

I’m not into small talk; I prefer to talk to people about big ideas. I want to talk big ideas to people who are smart, knowledgeable, and passionate about big ideas, and where it seems that convincing them about something on a big idea has a decent chance of changing their behavior in important ways.

Because of this, I prefer to talk to people who “have a thing.” That is, who have some sort of abstract claim (or question) which they consider important and neglected, for which they often argue, and which intersects somehow with their life hopes/plans. When they argue, they are open to and will engage counter-arguments. They might push this thing by themselves, or as part of a group, but either way it matters to them, they represent it personally, and they have some reason to think that their personal efforts can make a difference to it.

People with a thing allow me to engage a big idea that matters to someone, via someone who has taken the time to learn a lot about it, and who is willing to answer many questions about it. Such a person creates the hope that I might change their actions by changing their mind, or that they might convince me to change my life hopes/plans. I may convince them that some variation is more promising, or that some other thing fits better with the reasons they give. Or I might know of a resource, such as a technique or a person, that could help them with their thing.

Yes, in part this is all because I’m a person with many things. So I can relate better to such people. And after I engage their thing, there’s a good chance that they will listen to and engage one of my things. Even so, having a thing is handy for many people who are different from me. It lets you immediately engage many people in conversation in a way that makes them likely to remember you, and to be impressed by you if you are in fact impressive.

Yes, having a thing can be off-putting to the sort of people who like to keep everything mild and low-key, and make sure that their talk has little risk of convincing them to do something that might seem weird or passionate. But I consider this off-putting effect to be largely a gain, in sorting out the sort of people I’m less interested in.

Now having a thing won’t save you if you are a fool or an idiot. In fact, it might make that status more visible. But if you doubt you are either, consider having a thing.

Added 11p: Beware of two common failure modes for people with things: 1) not noticing how much others want to hear about your thing, 2) getting so attached to your thing that you don’t listen enough to criticism of it.

Note also that having things promotes an intellectual division of labor, which helps the world to better think through everything.

Added 11Jan: Beware a third failure mode: being more serious or preachy than your audience wants. You can be focused and interesting without making people feel judged.


Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views that sit just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, the increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style, combined with my adding few disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views.

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criterion for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer the distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, whether to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did 4 follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention ‘Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one of <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my Twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking if the world would have been better off had Nazis won implies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage over the A Star Is Born episode confirms this account to some extent.


Rationality Requires Common Priors

Late in November 2006 I started this blog, and a month later, on Christmas Eve, I reported briefly on the official publication (after 8 rejections) of my paper Uncommon Priors Require Origin Disputes. That was twelve years ago, and now Google Scholar tells me that this paper has 17 cites, which is about 0.4% of my 3933 total cites; I’d say that greatly underestimates its value.

Recently I had the good fortune to be invited to speak at the Rutgers Seminar on Foundations of Probability, and I took that opportunity to raise awareness about my old paper. Only about ten folks attended (a famous philosopher spoke nearby at the same time), but a video was taken.

In the video my slides are at times dim, but a sharp version can be seen here. Let me now try to explain why my topic is important, and what my result is. Continue reading "Rationality Requires Common Priors" »


Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Saul Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates on how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two-page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn? Continue reading "Sanctimonious Econ Critics" »


Economists Rarely Say “Nothing But”

Imagine someone said:

Those physicists go too far. They say conservation of momentum applies exactly at all times to absolutely everything in the universe. And yet they can’t predict whether I will raise my right or left hand next. Clearly there is more going on than their theories can explain. They should talk less and read more literature. Maybe then they’d stop saying immoral things like Earth’s energy is finite.

Sounds silly, right? But many literary types really don’t like economics (in part due to politics), and they often try to justify their dislike via a similar critique. They say that we economists claim that complex human behavior is “nothing but” simple economic patterns. For example, in the latest New Yorker magazine, journalist and novelist John Lanchester tries to make such a case in an article titled:

Can Economists and Humanists Ever Be Friends? One discipline reduces behavior to elegantly simple rules; the other wallows in our full, complex particularity. What can they learn from each other?

He starts by focusing on our book Elephant in the Brain. He says we make reasonable points, but then go too far:

The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. … Erving Goffman’s “The Presentation of Self in Everyday Life,” or … Pierre Bourdieu’s masterpiece “Distinction” … are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.

This intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). … Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve … or mb=mc. … These are powerful tools, which can be taken too far.

You might think that Lanchester would support his claim that we overreach by pointing to particular large claims and then offering evidence that they are false in particular ways. Oddly, you’d be wrong. (Our book mentions no math or rules of any sort.) He actually seems to accept most specific claims we make, even pretty big ones:

Many of the details of Hanson and Simler’s thesis are persuasive, and the idea of an “introspective taboo” that prevents us from telling the truth to ourselves about our motives is worth contemplating. … The writers argue that the purpose of medicine is as often to signal concern as it is to cure disease. They propose that the purpose of religion is as often to enhance feelings of community as it is to enact transcendental beliefs. … Some of their most provocative ideas are in the area of education, which they believe is a form of domestication. … Having watched one son go all the way through secondary school, and with another who still has three years to go, I found that account painfully close to the reality of what modern schooling is like.

While Lanchester does argue against some specific claims, these are not claims that we actually made. For example:

“The Elephant in the Brain”… has moments of laughable wrongness. We’re told, “Maya Angelou … managed not to woo Bill Clinton with her poetry but rather to impress him—so much so that he invited her to perform at his presidential inauguration in 1993.” The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.

But we said nothing like “Angelou’s career amounts to nothing more than.” Saying that she impressed Clinton with her poetry is not remotely to imply there was “nothing more” to her career. Also:

More generally, Hanson and Simler’s emphasis on signalling and unconscious motives suggests that the most important part of our actions is the motives themselves, rather than the things we achieve. … The last sentence of the book makes the point that “we may be competitive social animals, self-interested and self-deceived, but we cooperated our way to the god-damned moon.” With that one observation, acknowledging that the consequences of our actions are more important than our motives, the argument of the book implodes.

We emphasize “signalling and unconscious motives” because that is the topic of our book. We don’t ever say motives are the most important part of our actions, and as he notes, in our conclusion we suggest the opposite. Just as a book on auto repair doesn’t automatically claim auto repair to be the most important thing in the world, a book on hidden motives needn’t claim motives are the most important aspect of our lives. And we don’t.

In attributing “overreach” to us, Lanchester seems to rely most heavily on a quick answer I gave in an interview, where Tyler Cowen asked me to respond “in as crude or blunt terms as possible”:

Wait, though—surely signalling doesn’t account for everything? Hanson … was asked to give a “short, quick and dirty” answer to the question of how much human behavior “ultimately can be traced back to some kind of signalling.” His answer: “In a rich society like ours, well over ninety per cent.” … That made me laugh, and also shake my head. … There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.

That quote is not from our book, and is from a context where you shouldn’t expect it to be easy to see exactly what was meant. And saying that a signaling motive is on average one of the strongest (if often unconscious) motives in an area of life is to say that this motive importantly shapes some key patterns of behavior in this area of life; it is not remotely to claim that this fact explains most of the details of human behavior in this area! So shaping key patterns in 90% of areas explains far less than 90% of all behavior details. Saying that signaling is an important motive doesn’t at all say that human behavior is “nothing more” than signaling. Other motives contribute, we vary in how honest and conscious we are of each motive, there are usually a great many ways to signal any given thing in any given context, and many different cultural equilibria can coordinate individual behavior. There remains plenty of room for complexity, as people like Goffman and Bourdieu illustrate.

Saying that an abstraction is important doesn’t say that the things to which it applies are “nothing but” that abstraction. For example, conservation of momentum applies to all physical behavior, yet it explains only a tiny fraction of the variance in behavior of physical objects. Natural selection applies to all species, yet most species details must be explained in other ways. If most roads try to help people get from points A to B, that simple fact is far from sufficient to predict where all the roads are. The fact that a piece of computer code is designed to help people navigate roads explains only a tiny fraction of which characters are where in the code. Financial accounting applies to nearly 100% of firms, yet it explains only a small fraction of firm behavior. All people need air and food to survive, and will have a finite lifespan, and yet these facts explain only a tiny fraction of their behavior.

Look, averaging over many people and contexts there must be some strongest motive overall. Economists might be wrong about what that is, and our book might be wrong. But it isn’t overreach or oversimplification to make a tentative guess about it, and knowing that strongest motive won’t let you explain most details of human behavior. As an analogy, consider that every nation has a largest export commodity. Knowing this commodity will help you understand something about this nation, but it isn’t remotely reasonable to say that a nation is “nothing more” than its largest export commodity, nor to think this fact will explain most details of behavior in this nation.

There are many reasonable complaints one can make about economics. I’ve made many myself. But this complaint that we “overreach” by “reducing complexity to simple rules” seems to me mostly rhetorical flourish without substance. For example, most models we fit to data have error terms to accommodate everything else that we’ve left out of that particular model. We economists are surely wrong about many things, but to argue that we are wrong about a particular thing you’ll actually need to talk about details related to that thing, instead of waving your hands in the general direction of “complexity.”
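To illustrate that last point, here is a minimal sketch, with invented data, of the error term in an ordinary least-squares fit: the residuals hold everything the simple model deliberately leaves out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: an outcome driven by one modeled factor x plus
# many unmodeled influences, which the model lumps into an error term.
n = 200
x = rng.normal(size=n)
unmodeled = rng.normal(scale=0.5, size=n)   # everything left out
y = 2.0 + 1.5 * x + unmodeled

# Fit the simple model y = a + b*x by least squares.
X = np.column_stack([np.ones(n), x])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

residuals = y - (a + b * x)
print(f"a={a:.2f}, b={b:.2f}, residual std={residuals.std():.2f}")
# The fitted rule is simple, but nobody claims it is the whole story:
# the residuals explicitly accommodate what the model omits.
```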


How Deviant Recent AI Progress Lumpiness?

I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason I know of to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few large packages rather than in the usual many small packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

I emphasize how all the mighty human edifice of Go knowledge … was entirely discarded by AlphaGo Zero with a subsequent performance improvement. … Sheer speed of capability gain should also be highlighted here. … you don’t even need self-improvement to get things that look like FOOM. … the situation with AlphaGo Zero looks nothing like the Hansonian hypothesis and a heck of a lot more like the Yudkowskian one.

I replied that, just as seeing an unusually large terror attack like 9-11 shouldn’t much change your estimate of the overall distribution of terror attacks, nor seeing one big earthquake change your estimate of the overall distribution of earthquakes, seeing one big AI research gain like AlphaGo Zero shouldn’t much change your estimate of the overall distribution of AI progress. (Seeing two big lumps in a row, however, would be stronger evidence.) In his recent podcast with Sam Harris, Eliezer said:

Y: I have claimed recently on facebook that now that we have seen Alpha Zero, Alpha Zero seems like strong evidence against Hanson’s thesis for how these things necessarily go very slow because they have to duplicate all the work done by human civilization and that’s hard. …

H: What’s the best version of his argument, and then why is he wrong?

Y: Nothing can prepare you for Robin Hanson! Ha ha ha. Well, the argument that Robin Hanson has given is that these systems are still immature and narrow, and things will change when they get general. And my reply has been something like, okay, what changes your mind short of the world actually ending. If your theory is wrong do we get to find out about that at all before the world does.

(Sam didn’t raise the subject in his recent podcast with me.)

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

The citation distributions of papers published in the same discipline and year lie on the same curve for most disciplines, if the raw number of citations c of each paper is divided by the average number of citations c0 over all papers in that discipline and year. The dashed line is a lognormal fit. …

The probability of citing a paper grows with the number of citations that it has already collected. Such a model can be augmented with … decreasing the citation probability with the age of the paper, and a fitness parameter, unique to each paper, capturing the appeal of the work to the scientific community. Only a tiny fraction of papers deviate from the pattern described by such a model.

It seems to me quite reasonable to expect that fields where real research progress is lumpier would also display a lumpier distribution of citations. So if CS, AI, or ML research is much lumpier than research in other areas, we should expect to see that in citation data. Even if your hypothesis is that only ML research is lumpier, and only in the last 5 years, we should still have enough citation data to see that. My expectation, of course, is that recent ML citation lumpiness is not much bigger than in most research fields through history.
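As a sketch of the kind of test I have in mind, here is some Python using synthetic, lognormally distributed citation counts (the parameters are assumptions, not fitted values) to compare a simple lumpiness statistic across two hypothetical fields:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(citations, frac=0.01):
    """Share of all citations held by the top `frac` of papers:
    a simple, scale-free measure of lumpiness."""
    c = np.sort(citations)[::-1]
    k = max(1, int(frac * len(c)))
    return c[:k].sum() / c.sum()

# Two hypothetical fields whose citation counts share a lognormal shape
# but differ in average citation level, as in the Science paper's finding
# that distributions collapse onto one curve after dividing by the mean.
field_a = rng.lognormal(mean=1.0, sigma=1.2, size=10_000)
field_b = rng.lognormal(mean=3.0, sigma=1.2, size=10_000)

for name, c in [("field A", field_a), ("field B", field_b)]:
    print(name, "top-1% citation share:", round(top_share(c), 3))
# Both fields show nearly the same top-1% share. A field whose research
# advances were genuinely lumpier (a fatter tail) would stand out here.
```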

Added 24Mar: You might save the hypothesis that research areas vary greatly in lumpiness by postulating that the number of citations of each research advance goes as the rank of the “size” of that advance, relative to its research area. The distribution of ranks is always the same, after all. But this would be a surprising outcome, and hence seems unlikely; I’d want to see clear evidence that the distribution of lumpiness of advances varies greatly across fields.

Added 27Mar: More directly relevant might be data on distributions of patent value and citations. Do these distributions vary by topic? Are CS/AI/ML distributed more unequally?


Our Book’s New Ground

In today’s Wall Street Journal, Matthew Hutson, author of The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane, reviews our new book The Elephant in the Brain. He starts and ends with obligatory but irrelevant references to Trump. Quotes from the rest:

The book builds on centuries of writing about self-deception. … I can’t say that the book covers new ground, but it is a smart synthesis and offers several original metaphors. People self-deceive about lots of things. We overestimate our ability to drive. We conveniently forget who started an argument. … Much of what we do, including our most generous behavior, the authors say, is not meant to be helpful. We are, like many other members of the animal kingdom, competitively altruistic—helpful in large part to earn status. … Casual conversations, for instance, often trade in random information. But the point is not to trade facts for facts; what you are actually doing, the book argues, is showing off so people can evaluate your intellectual versatility. …

The authors take particular interest in large-scale social issues and institutions, showing how systems of collective self-deception help explain the odd behavior we see in art, charity, education, medicine, religion and politics. Why do people vote? Not to strengthen the republic. …. Instead, we cheer for our team and participate as a signal of loyalty, hoping for the benefits of inclusion. In education, as many economists have argued, learning is ancillary to accreditation and status. … In many areas of medicine, they note, increased care does not improve outcomes. People offer it to broadcast helpfulness, or demand it to demonstrate how much support they have from others.

“The Elephant in the Brain” is refreshingly frank and penetrating, leaving no stone of presumed human virtue unturned. The authors do not even spare themselves. … It is accessibly erudite, deftly deploying essential technical concepts. … Still, the authors urge hope. … There are ways to leverage our hidden motives in the pursuit of our ideals. The authors offer a few suggestions. … Unfortunately, the book devotes only a few pages to such solutions. “The Elephant in the Brain” does not judge us for hiding selfish motives from ourselves. And to my mind, given that we will always have selfish motives, keeping them concealed might even provide a buffer against naked strife. (more)

All reasonable, except maybe for “can’t say that the book covers new ground.” Yes, scholars of self-deception like Hutson will find plausible both our general thesis and most of our claims about particular areas of life. And yes, those specific claims have almost all been published before. Even so, I bet most policy experts will call our claims on their particular area “surprising” and even “extraordinary”, and judge that we have not offered sufficiently extraordinary evidence in support. I’ve heard education policy experts say this about Bryan Caplan’s new book, The Case Against Education. And I’ve heard medicine policy experts say this about our medicine claims, and political system experts say this about our politics claims.

In my view, the key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable. That is, there’s an intellectual contribution to make by arguing together for a large set of related contrarian-to-experts claims. This is what I suggest is original about our book.

I expect that experts in each policy area X will be much more skeptical about our claims on X than about our claims on the other areas. You might explain this by saying that our arguments are misleading, and only experts can see the holes. But I instead suggest that policy experts in each X are biased because clients prefer them to assume the usual stories. Those who hire education policy experts expect them to talk about better learning the material, and so on. Such biases are weaker for those who study motives and self-deception in general.

Hutson has one specific criticism:

The case for medicine as a hidden act of selfishness may have some truth, but it also has holes. For example, the book does not address why medical spending is so much higher in the U.S. than elsewhere—do Americans care more than others about health care as a status symbol?

We do not offer our thesis as an explanation for all possible variations in these activities! We say that our favored motive is under-acknowledged, but we don’t claim that it is the only motive, nor that motive variations are the only way to explain behavioral variation. The world is far too big and complex for one simple story to explain it all.

Finally, I must point out one error:

“The Elephant in the Brain,” a book about unconscious motives. (The titular pachyderm refers not to the Republican Party but to a metaphor used in 2006 by the social psychologist Jonathan Haidt, in which reason is the rider on the elephant of emotion.)

Actually it is a reference to the common idea of “the elephant in the room”, a thing we can all easily see but refuse to admit is there. We say there’s a big one regarding how our brains work.


When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace most all human workers. While the assumption of brain-emulation-based AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worse failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.
