Monthly Archives: January 2018

Toward Better Signals

While we tend to say and think otherwise, in fact much of what we do is oriented toward helping us to show off. (Our new book argues for this at length.) Assuming this is true, what does a better world look like?

In simple signaling models, people tend to do too much of the activities they use to signal. This suggests that a better world is one that taxes or limits such activities. Say by taxing or limiting school, hospitals, or sporting contests. However, this is hard to arrange, because signaling via political systems tends to create the opposite: subsidies and minimum required levels of such widely admired activities. (Though socializing such activities under limited government budgets is often effective.) Also, if we put most all of our life energy into signaling, then limits or taxes on particular signaling activities will mainly divert our efforts to other signals.
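
To make the "too much signaling" claim concrete, here is a minimal positional-signaling sketch (my notation, not the book's): suppose each person picks a signal effort s, pays a convex cost c(s), and gains status worth b for each unit by which their effort exceeds the average effort of others.

\[
  \max_{s}\; b\,(s - \bar{s}) - c(s)
  \quad\Longrightarrow\quad
  c'(s^{*}) = b .
\]

In the symmetric equilibrium everyone picks the same s* > 0, so no one's relative standing actually improves, yet everyone pays the cost c(s*); the social optimum here is s = 0. A tax of b per unit of effort (or a binding cap) removes the waste, but only so long as effort cannot simply flow into some other, untaxed signal, which is the diversion problem noted above.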

If some signaling activities have larger positive externalities, then it seems an obvious win to use taxes, subsidies, etc. to divert our efforts into those activities. This is plausibly why we try to praise people more for showing off via charity, innovation, or whistleblowing. Similarly, we tend to criticize activities like war and other violence with large negative externalities. We should continue to do these things, and also look for other such activities worthy of extra praise or criticism.

However, on reflection I think the biggest problem with signals today is the quality of our audience. When the audience that we want to impress knows little about how our visible actions connect to larger consequences, then we need not attend much to such connections either. For example, to show an audience that we care about someone by helping them get medicine, we need only push the sort of medicine that our audience thinks is effective. Similarly for using charity to convince an audience that we care about the poor, politics to convince them that we care about our nation, or creative activities to convince them that we promote innovation.

What if our audiences knew more about which medicines helped health, which charities helped the poor, which national policies help the nation, or which creative activities promoted innovation? That would push us to also know more, and lead us to choose more effective medicines, charities, policies, and innovations. All to the world’s benefit. So what could make the audiences that we seek to impress know more about how our activities connect to these larger consequences?

One approach is to make our audiences more elite. Today our efforts to gain more likes on social media have us pandering to a pretty broad and ignorant audience. In contrast, in many old-world rags-to-riches stories, a low person rose in rank via a series of encounters with higher persons, each of whom was suitably impressed. The more we expect to gain by impressing better-informed elites, the better informed our show-off actions will be.

But this isn’t just about who we seek to impress. It is also about whether we impress them via many small encounters, or via a few big ones. In larger encounters, our audience can take more time to judge how much we really understand about what we are doing. Yes, risk and randomness could dominate if the main encounters that mattered to us were too few in number. But we seem pretty far from that limit at the moment. For now, we’d have a better world of signals if we tried more to impress via a smaller number of more intense encounters with better-informed elites.

Of course to fill this role of a better-informed audience, it isn’t enough for “elites” to merely be richer, prettier, or more popular. They need to actually know more about how signaling actions connect to larger consequences. So there can be outsized gains from better educating elites on such things, and from selecting our elites more from those who are better educated on them. And anything that distracts elites from performing well in this crucial role can have outsized costs.

Of course there’s a lot more to figure out here; I’ve just scratched the surface. But still, I thought I should plant a flag now, and show that it is possible to think more carefully about how to make a better world, when that world is chock full of signaling.


On Unsolved Problems

Imagine a book review:

The authors present convincing evidence that since 1947 aliens from beyond Earth are here on Earth, can pass as humans, have been living among us, and increasingly influence human affairs. The authors plausibly identify the industries, professions, and geographic regions where aliens have the most influence, and the primary methods of alien influence. Furthermore, the authors have made their evidence and analysis accessible to a wide audience in a readable and entertaining book, and have published it via a respectable academic press to enable its conclusions to be believed by a wide audience.

Unfortunately, the authors only offer vague and general plans for dealing with these meddling aliens. They offer no cheap and reliable way to detect individual aliens, nor to overpower and neutralize them once detected. What good is it to know about aliens without a detailed response plan? Save your money and buy another book.

Or imagine deleting that last paragraph, and adding this instead:

The authors go further and offer plausible physical mechanisms by which we might detect individual aliens and neutralize their influence. The authors also offer a ten point plan and outline a rough budget for a project to implement this plan.

Unfortunately, they give no detailed schematics for physical devices to detect and neutralize aliens, nor do they offer a specific manufacturing process plan. In addition, they don’t say much about how to fund or staff their proposed project. This project would be international in scope and probably continue for decades. Yet the authors don’t bother to address how to guarantee gender, racial, and national equity when choosing personnel, nor how to achieve national and generational equity in funding. They don’t even give a detailed plan for managing the disruption should a war break out.

What good is it to know about aliens, physical mechanisms to detect and neutralize them, and a ten point plan for managing this, if we lack detailed device schematics, manufacturing processes, plans to ensure equitable hiring and funding, and war contingencies? Save your money and buy another book.

I could go on, but you get the idea. You should want to learn about problems you face, even if you don’t yet know how to solve them. The above snark was inspired by this review by Samuel Hammond of Elephant in the Brain. He starts with kind praise:

An entertaining and insightful book that sheds light on a diverse collection of perplexing human behaviors. …

And then he details this criticism:

The book is largely an exercise in simply convincing the reader of the elephant’s existence by hammering away with example after example. As a result of that hammering, The Elephant in the Brain ends up being light on public policy upshots — far more Theory of Moral Sentiments than Wealth of Nations. That’s unfortunate, since the ideas in the book are bursting with potential applications. Worse, however, is the scant attention paid to helping the reader pick up the pieces of their shattered psyche. Instead, Simler and Hanson simply highlight the need to better align public institutions with our hidden motives, leaving the all-important “how” question relatively untouched. …

It at least seems possible to tame the social aspects of our adaptive unconscious with the right self-help techniques, from classroom exercises to mindfulness meditation. This was essentially the strategy developed by the Cynics of ancient Greece. Through rigorous training, the Cynics managed to forgo the pursuit of wealth, sex, and fame in favor of mental clarity and rational ethics.

This is the direction I had hoped The Elephant in the Brain would lead. After all, the elephant in the brain is located squarely in what psychologists call our brain’s “System 1,” or the automatic, noncognitive, and fast mode of thinking. That still leaves our “System 2,” or analytical, cognitive, and slow mode of thinking, as a potential tool for transcending our lowly origins. By failing to give our System 2 mode a balanced consideration, The Elephant in the Brain inadvertently falls into the expanding genre of pop-psych books that simply recapitulate David Hume’s famous assertion that “reason is, and ought only to be the slave of the passions.” …

Haidt’s more recent book, The Righteous Mind, helps to illustrate the pragmatic problem. … Without denying Haidt’s empirical findings, an inviolable application of this theory raises an obvious question: How could one ever hope to hold to a rational political philosophy at all? …

It seems like Simler was ultimately able to transcend the Silicon Valley rat-race with the employ of his System Two, or cognitive, mode of thinking. That is, he was rationally persuaded to pull the elephant by the reins and steer his life towards truth-seeking.

Our book mainly identifies hidden motives by pointing to patterns of behavior that are poorly explained by our usual claimed motives. These patterns result from the usual mix of automatic and reasoned thinking, of impulse and self-control. I’ve seen no evidence that these patterns are weaker for people or places where reason or self-control matters more. This includes the example of my coauthor’s choice to write this book.

Without any concrete evidence suggesting that hidden motives matter more or less when there is more reason or self-control, I don’t see why discussing reason and self-control was a priority for our book. And I doubt that merely promoting reason or self-control is sufficient to reduce the influence of hidden motives.


Caplan Critiques Our Religion Chapter

Bryan Caplan likes our book:

My blurb calls it, “Deeply important, wide-ranging, beautifully written, and fundamentally right” – and I mean every word.

But he also has many complaints on our religion chapter. He summarizes:

[They] could have done even better. They’re so excited about their own theory that they occasionally forget to be curious about the facts. And they’re so eager to show that strange behavior could be functional that they frequently forget to ask, “Functional when?” and “Functional for whom?”

Alas, his specific complaints seem to me more like attempts to misread us to find things to criticize. But you be the judge. Bryan starts (he’s indented once, our book twice):

And yet, as we’ll see, there’s a self-serving logic to even the most humble and earnest of religious activities.

The last sentence seems like a clear case of overstatement. What about hidden religiosity? Persecuted religiosity?

If we had said “kitchen tools have practical household uses”, would Bryan say “But a burglar could stab you with a kitchen knife”? Is it really hard to find group-conflict functions of religious persecution, or of hiding your religiosity from likely persecutors?

We don’t worship simply because we believe. Instead, we worship (and believe) because it helps us as social creatures.

While this story is plausible, [they] don’t really grapple with the strongest counter-arguments. Most obviously, arcane doctrinal disputes seem to be the sparks behind several major historical events. Take the Protestant Reformation. Yes, there’s plenty of realpolitik under the surface. But it’s hard to deny that Luther, Calvin, and other key figures did put beliefs in the driver’s seat.

“The dog ate my homework” only works as an excuse because sometimes dogs do eat homework. Similarly, we say that while we give too much credit to a usual motive, and too little to a more hidden one, the usual motive is part of the mix. That’s why the usual motive can be an excuse for the hidden one.

We say religious beliefs are more the excuse, and often function as sacrifices and badges to identify groups. Assuming this, I don’t see how it is at all surprising that, when one religious group splits away from another, their leaders point to particular arcane doctrinal disagreements. How is this at all evidence against such beliefs serving in large part as group badges?

[W]e engage in a wide variety of activities that have a religious or even cult-like feel to them, but which are entirely devoid of supernatural beliefs. … The fact that these behavioral patterns are so consistent, and thrive even in the absence of supernatural beliefs, strongly suggests that the beliefs are a secondary factor.

I struggle to see the logic here. Yes, the world’s leading religions have much in common with secular movements. But how does that suggest that what distinguishes these religions from secular movements is “secondary”? Indeed, doesn’t it suggest precisely the opposite conclusion – that supernatural beliefs are what makes leading religions special?

Common features suggest common structures and functions. We don’t say the differences are unimportant.

We think people can generally intuit what’s good for them. …

This seems like a rash overstatement. For starters, if the religious order is stable and powerful, doubts are dangerous. [Their] own model suggests that the oppressed would develop pronounced Stockholm Syndrome. Why? To avoid social sanctions. The best way to convince your oppressor that you love him is to love him sincerely.

We mean “good for them” in an individual, not collective, sense. Acting religious can give a personal gain even when it is a social loss.

To lock in the benefits of cooperation, then, a community also needs robust mechanisms to keep cheaters at bay.

Strangely, though, many of the leading religions loudly proclaim that they welcome everyone. And they live up to this rather naive promise to an amazing degree. I was raised Catholic for my first sixteen years, and can’t recall any anti-cheating mechanism more “robust” than collective scolding.

But the key question is: was that scolding enough? Religious groups vary in their strength of bonding, and thus in their severity of punishment. Instead of a young Caplan guessing that he could have cheated, it would have been more persuasive to hear an example of someone actually cheating and gaining without giving enough back. Yet even then we have to expect a few successful cheaters.

People who believe they risk punishment for disobeying God are more likely to behave well, relative to nonbelievers. …

I’ve often heard economists make claims like this. But when you look at the real world, it’s far from clear that disobedience and belief in divine punishment are even negatively correlated. Luther and Calvin, the fathers of modern Protestantism, preached … our salvation is absolutely beyond your control. Nevertheless, fundamentalist Protestants have long been known for strict adherence to the rules.

As I used to be a fundamentalist Protestant myself, it seems clear to me that most practicing fundamentalist Protestants today see a connection between their behavior and divine punishment, no matter what doctrines Luther and Calvin once endorsed.

There’s also a peculiar omission in this chapter. HS barely acknowledge the massive gap between how religious people say they are and how religious they actually are.

Our chapter is short, and religion is vast. That topic is interesting, but not essential to our main point.


A LONG review of Elephant in the Brain

Artir Kel has posted a 21K-word review of our book, over 1/6 as long as the book itself! He has a few nice things to say:

What the book does is to offer deeper (ultimate) explanations for the reasons (proximate) behind behaviours that shine new light on everyday life. … It is a good book in that it offers a run through lots of theories and ways of looking at things, some of which I have noted down for further investigation. It is because of this thought-provokingness and summarisation of dozens of books into a single one that I ultimately recommend the book for purchase.

And he claims to agree with this (his) book summary:

There exist evolutionary explanations for many commonplace behaviours, and that most people are not aware of these reasons. … We suffer from all sorts of self-serving biases. Some of these biases are behind large scale social problems like the inflated costs of education and healthcare, and the inefficiencies of scientific research and charity.

But Kel also says:

Isn’t it true that education is – to a large degree – about signaling? Isn’t it true that politics is not just about making policy? Isn’t it true that charity is not just about helping others in the most efficient way? Yes, those things are true, but that’s not my point. The object-level claims of the book, the claims about how things are, are largely correct. It is the interpretation I take issue with.

If you recall, our book mainly considers behavior in ten big areas of life. In each area, people usually give a particular explanation for the main purposes they achieve there, especially when they talk very publicly. For each area, our book identifies several puzzles not well explained by this main purpose, and offers another main purpose that we suggest better explains these puzzles.

In brief, Kel’s “interpretation” issues are:

  1. Other explanations can account for each of the puzzling patterns we consider.
  2. We shouldn’t call hidden purposes “motives”, nor purposeful ignorance of them “self-deception”.

Continue reading "A LONG review of Elephant in the Brain" »


Read The Case Against Education

Yesterday was the Kindle publication date for my colleague Bryan Caplan’s new book The Case Against Education. The hardcover publication date is in nine days. It is an excellent book, on an important topic. Beyond such cheap talk, I offer the costly signal of having based an entire chapter of our new book on his book. That’s how good and important I think it is.

The most important contribution of Caplan’s book is to make very clear how inadequate “learn the material, then do a job better” is as an explanation for school. Yes, the world is complex enough that it must apply sometimes. Which is why it can work as an excuse for what’s really going on. After all, “the dog ate my homework” only works because sometimes dogs do eat homework.

So what is really going on? Caplan offers plausible evidence that school functions to let students show employers that they are smart, conscientious, and conformist. And surely this is in fact a big part of what is going on. I’ve blogged before on, and in our book we discuss, some other functions that schools may have served in history, including daycare, networking, consumption, state propaganda, and domesticating students into modern workplace habits.

But I should be clear that, to serve these other functions, students today don’t need nearly as much school as they now get; showing off to employers is likely the main reason kids get so much school today. Our world would be better off with less school, such as would happen if we cut school subsidies.

I see Caplan’s book as nicely complementing ours. As I said recently:

The key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable. That is, there’s an intellectual contribution to make by arguing together for a large set of related contrarian-to-experts claims.


Privately Enforced & Punished Crime

I’ve been teaching law & economics for many years now, and have slowly settled on the package of legal reforms for which I most strongly want to argue. I have chosen a package that seems big enough to inspire excitement and encompass synergies, and yet small enough to allow a compelling analysis of its net benefits.

My proposal concerns how we detect, prosecute, and punish crime. It is not about non-criminal law, and it is not a proposal to change how we decide what acts are crimes, when to be persuaded by a particular crime accusation, how hard to work to discourage each criminal act, or how hard to work to catch each criminal act. To start, I hold constant how we do these other things. Continue reading "Privately Enforced & Punished Crime" »


Social Innovation Disinterest Puzzle

Back in 1977, I started out college in engineering, then switched to physics, where I got a BS and MS. After that I spent nine years in computer research, at Lockheed and NASA. In physics, engineering, and software I saw that people are quite eager to find better designs, and that the world often pays a lot for them. As a result, it is usually quite hard to find even modestly better designs, at least for devices and mechanisms with modest switching costs.

Over time, I came to notice that many of our most important problems had core causes in social arrangements. So I started to study economics, and found many simple proposed social innovations that could plausibly lead to large gains. And trying my own hand at looking for innovations, I found more apparently plausible gains. So in 1993 I switched to social science, and started a PhD program at the late age of 34, then having two kids, ages 0 and 2. (For over a decade after, I didn’t have much free time.)

I naively assumed that the world was just as eager for better social designs. But in fact, the world shows far less interest in better designs for social arrangements. Which, I should have realized, is a better explanation than my unusual genius for why it seemed so easy to find better social designs. But that raises a fundamental puzzle: why does the world seem so much less interested in social innovation, relative to innovation in physical and software devices and systems?

I’ve proposed the thesis of our new book as one explanation. But as many other explanations often come to people’s minds, I thought I might go over why I find them insufficient. Here goes: Continue reading "Social Innovation Disinterest Puzzle" »


Our Book’s New Ground

In today’s Wall Street Journal, Matthew Hutson, author of The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane, reviews our new book The Elephant in the Brain. He starts and ends with obligatory but irrelevant references to Trump. Quotes from the rest:

The book builds on centuries of writing about self-deception. … I can’t say that the book covers new ground, but it is a smart synthesis and offers several original metaphors. People self-deceive about lots of things. We overestimate our ability to drive. We conveniently forget who started an argument. … Much of what we do, including our most generous behavior, the authors say, is not meant to be helpful. We are, like many other members of the animal kingdom, competitively altruistic—helpful in large part to earn status. … Casual conversations, for instance, often trade in random information. But the point is not to trade facts for facts; what you are actually doing, the book argues, is showing off so people can evaluate your intellectual versatility. …

The authors take particular interest in large-scale social issues and institutions, showing how systems of collective self-deception help explain the odd behavior we see in art, charity, education, medicine, religion and politics. Why do people vote? Not to strengthen the republic. …. Instead, we cheer for our team and participate as a signal of loyalty, hoping for the benefits of inclusion. In education, as many economists have argued, learning is ancillary to accreditation and status. … In many areas of medicine, they note, increased care does not improve outcomes. People offer it to broadcast helpfulness, or demand it to demonstrate how much support they have from others.

“The Elephant in the Brain” is refreshingly frank and penetrating, leaving no stone of presumed human virtue unturned. The authors do not even spare themselves. … It is accessibly erudite, deftly deploying essential technical concepts. … Still, the authors urge hope. … There are ways to leverage our hidden motives in the pursuit of our ideals. The authors offer a few suggestions. … Unfortunately, the book devotes only a few pages to such solutions. “The Elephant in the Brain” does not judge us for hiding selfish motives from ourselves. And to my mind, given that we will always have selfish motives, keeping them concealed might even provide a buffer against naked strife. (more)

All reasonable, except maybe for “can’t say that the book covers new ground.” Yes, scholars of self-deception like Hutson will find plausible both our general thesis and most of our claims about particular areas of life. And yes those specific claims have almost all been published before. Even so, I bet most policy experts will call our claims on their particular area “surprising” and even “extraordinary”, and judge that we have not offered sufficiently extraordinary evidence in support. I’ve heard education policy experts say this about Bryan Caplan’s new book, The Case Against Education. And I’ve heard medicine policy experts say this about our medicine claims, and political system experts say this about our politics claims.

In my view, the key problem is that, to experts in each area, no modest amount of evidence seems sufficient support for claims that sound to them so surprising and extraordinary. Our story isn’t the usual one that people tell, after all. It is only by seeing that substantial if not overwhelming evidence is available for similar claims covering a great many areas of life that each claim can become plausible enough that modest evidence can make these conclusions believable. That is, there’s an intellectual contribution to make by arguing together for a large set of related contrarian-to-experts claims. This is what I suggest is original about our book.
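
One way to formalize this "many areas, each with modest evidence" argument (my illustration, with made-up numbers, and assuming the areas supply roughly independent evidence): let H be the hidden-motives thesis and E_1, …, E_10 be the evidence from ten areas of life, each giving only a modest likelihood ratio of 2 in favor of H.

\[
  \frac{P(H \mid E_1,\dots,E_{10})}{P(\lnot H \mid E_1,\dots,E_{10})}
  \;=\;
  \frac{P(H)}{P(\lnot H)}
  \prod_{i=1}^{10} \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)}
  \;\approx\;
  \frac{1}{100} \times 2^{10}
  \;\approx\; 10 .
\]

Starting from skeptical prior odds of 1:100 against, any single area's evidence moves the odds only to about 1:50, which an area expert can reasonably shrug off; the ten areas together, though, flip the odds to roughly 10:1 in favor.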

I expect that experts in each policy area X will be much more skeptical about our claims on X than about our claims on the other areas. You might explain this by saying that our arguments are misleading, and only experts can see the holes. But I instead suggest that policy experts in each X are biased because clients prefer them to assume the usual stories. Those who hire education policy experts expect them to talk about better learning the material, and so on. Such biases are weaker for those who study motives and self-deception in general.

Hutson has one specific criticism:

The case for medicine as a hidden act of selfishness may have some truth, but it also has holes. For example, the book does not address why medical spending is so much higher in the U.S. than elsewhere—do Americans care more than others about health care as a status symbol?

We do not offer our thesis as an explanation for all possible variations in these activities! We say that our favored motive is under-acknowledged, but we don’t claim that it is the only motive, nor that motive variations are the only way to explain behavioral variation. The world is far too big and complex for one simple story to explain it all.

Finally, I must point out one error:

“The Elephant in the Brain,” a book about unconscious motives. (The titular pachyderm refers not to the Republican Party but to a metaphor used in 2006 by the social psychologist Jonathan Haidt, in which reason is the rider on the elephant of emotion.)

Actually it is a reference to the common idea of “the elephant in the room”, a thing we can all easily see but refuse to admit is there. We say there’s a big one regarding how our brains work.


When Disciplines Disagree

Our new book, The Elephant in the Brain, can be seen as taking one side in a disagreement between disciplines. On one side are psychologists (among others) who say of course people try to spin their motives as being higher than they are, especially in public forums. People on this side find our basic book thesis, and our many specific examples, so plausible that they fear our book may be too derivative and unoriginal.

On the other side, however, are most experts in concrete policy analysis. They spend their time studying ways that schools could help people to learn more material, hospitals could help people get healthier, charities could better assist people in need, and so on. They thus implicitly accept the usual claims people make about what they are trying to achieve via schools, hospitals, charities, etc. And so the practice of policy experts disagrees a lot with our claims that people actually care more about other ends, and that this is why most people show so little interest in reforms proposed by policy experts. (The world shows great interest in new kinds of physical devices and software, but far less interest in most proposed social reforms.)

My first book The Age of Em can also be seen as expressing disagreement between disciplines. In that book I try to straightforwardly apply standard economics to the scenario where brain emulations are the first kind of AI to displace most all human workers. While the assumption of brain-emulation-based AI seems completely standard and reasonable among large communities of futurists and technologists, it is seen as radical and doubtful in many other intellectual communities (including economics). And many in disciplines outside of economics are quite skeptical that economists know much of anything that can generalize outside of our particular social world.

Now if you are going to make claims with which whole disciplines of experts disagree, you should probably feel most comfortable doing so when you have at least a whole discipline supporting you. Then it isn’t just you the crazy outlier against a world of experts. Even so, this sort of situation is problematic, in part because disagreements usually don’t become debates. A book on one side of a disagreement between disciplines is usually ignored by the disciplines who disagree. And the disciplines that agree may also ignore it, if the result seems too obvious to them to be worth discussing within their discipline.

This sort of situation seems to me one of the worst failings of our intellectual world. We fail to generate a consistent consensus across the widest scope of topics. Smaller communities of experts often generate a temporary consistent consensus within each community, but these communities often disagree a lot at larger scopes. And then they mostly just ignore each other. Apparently experts and their patrons have little incentive to debate those from other disciplines who disagree.

When two disciplines disagree, you might think they would both turn especially to the people who have become experts in both disciplines. But in fact those people are usually ignored relative to the people who have the highest status within each discipline. If we generated our consensus via prediction markets, it would automatically be consistent across the widest scope of topics. But of course we don’t, and there’s little interest in moving in that direction.


Elephant in the Brain Reviews

It’s now one week after the official hardback release date, and five weeks after the ebook release, of Elephant in the Brain. So I guess it’s time to respond to the text reviews that have appeared so far. Reviews have appeared at Amazon (9), Goodreads (8), and on individual blogs (5). Most comments expressed are quite positive. But there’s a big selection effect whereby people with negative opinions say nothing, and so readers rationally attend more to explicitly negative comments. And thus so will I. This post is looong. Continue reading "Elephant in the Brain Reviews" »
