Disciplines As Contrarian Correlators

I’m often interested in subjects that fall between disciplines, or more accurately that intersect multiple disciplines. I’ve noticed that it tends to be harder to persuade people of claims in these areas, even when one is similarly conservative in basing arguments on standard accepted claims from relevant fields.

One explanation is that people realize that they can’t gain as much prestige from thinking about claims outside their main discipline, so they just don’t bother to think much about such claims. Instead they default to rejecting claims if they see any reason whatsoever to doubt them.

Another explanation is that people in field X more often accept the standard claims from field X than they accept the standard claims from any other field Y. And the further away in disciplinary space is Y, or the further down in the academic status hierarchy is Y, the less likely they are to accept a standard Y claim. So an argument based on claims from both X and Y is less likely to be accepted by X folks than a claim based only on claims from X.

A third explanation is that people in field X tend to learn and believe a newspaper version of field Y that differs from the expert version of field Y. So X folks tend to reject claims that are based on expert versions of Y claims, since they instead believe the differing newspaper versions. Thus a claim based on expert versions of both X and Y claims will be rejected by both X and Y folks.

These explanations all have a place. But a fourth explanation just occurred to me. Imagine that smart people who are interested in many topics tend to be contrarian. If they hear a standard claim of any sort, perhaps 1/8 to 1/3 of the time they will think of a reason why that claim might not be true, and decide to disagree with it.

So far, this contrarianism is a barrier to getting people to accept any claim based on more than a handful of other claims. If you present an argument based on five claims, and your audience independently rejects each claim at least one fifth of the time, then at most (4/5)^5 ≈ 1/3 of them will accept all five claims, so most of your audience will reject your overall claim. But let’s add one more element: correlations within disciplines.

Assume that the process of educating someone to become a member of discipline X tends to induce a correlation in contrarian tendencies. Instead of independently accepting or rejecting the claims that they hear, they see claims in their discipline X as coming in packages to be accepted or rejected together. Some of them reject those packages and leave X for other places. But the ones who haven’t rejected them accept them as packages, and so are open to arguments that depend on many parts of those packages.

If people who learn area X accept X claims as packages, but evaluate Y claims individually, then they will be less willing to accept claims based on many Y claims. To a lesser extent, they will also reject claims based on a mix of some X claims and some Y claims.
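
Here is a minimal sketch of this toy model in Python; the one-in-five rejection rate and the claim counts are my illustrative assumptions, not measured values:

```python
# Toy model of contrarian listeners. An argument rests on several standard
# claims; listeners either check every claim independently, or accept their
# home discipline's claims as a single package. All numbers are illustrative.

p_reject = 1 / 5  # chance a contrarian rejects any one independently judged claim

def p_accept_independent(n_claims):
    """Accept the argument only if every claim survives an independent check."""
    return (1 - p_reject) ** n_claims

def p_accept_packaged(n_y_claims):
    """X folks judge all X claims together as one package (one roll, however
    large the package), but still evaluate each Y claim separately."""
    return (1 - p_reject) * (1 - p_reject) ** n_y_claims

print(p_accept_independent(5))   # ~0.33: most of the audience rejects
print(p_accept_packaged(0))      # 0.80: a pure-X argument fares well with X folks
print(p_accept_packaged(2))      # ~0.51: an X+Y mix lands in between
```

So in this toy model a pure-X argument survives with 80% of X folks, while the same argument judged claim by claim survives with only a third.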

Note that none of these explanations suggest that these claims are actually false more often; they are just rejected more.

Open Thread

This is the place to discuss related topics that have not appeared in recent posts. And of course crazy-sounding claims are especially appreciated on this particular day of the year.

Oxford To Publish The Age Of Em

Eighteen months ago I asked here for readers to criticize my Em Econ book draft, then 62K words. (137 of you sent comments – thanks!) Today I announce that Oxford University Press will publish its descendant (now 212K words) in Spring 2016. Tentative title, summary, outline:

The Age Of Em: Envisioning Brain Emulation Societies

Author Robin Hanson takes an oft-mentioned disruptive future tech, brain emulations, and expertly analyzes its social consequences in unprecedented breadth and detail. His book is intended to prove: we can foresee our social future, not just by projecting trends, but also by analyzing the detailed social consequences of particular disruptive future technologies.

I. Basics
1. Start: Contents, Preface, Introduction, Summary
2. Modes: Precedents, Factors, Dreamtime, Limits
3. Mechanics: Emulations, Opacity, Hardware, Security
II. Physics
4. Scales: Time, Space, Reversing
5. Infrastructure: Climate, Cooling, Buildings
6. Existence: Virtuality, Views, Fakery, Copying, Darkness
7. Farewells: Fragility, Retirement, Death
III. Economics
8. Labor: Wages, Selection, Enough
9. Efficiency: Competition, Eliteness, Spurs, Power
10. Business: Institutions, Growth, Finance, Manufacturing
11. Lifecycle: Careers, Age, Preparation, Training
IV. Organization
12. Clumping: Cities, Speeds, Transport
13. Extremes: Software, Inequality, War
14. Groups: Clans, Nepotism, Firms, Teams
15. Conflict: Governance, Law, Innovation
V. Sociology
16. Connection: Mating, Signaling, Identity, Ritual
17. Collaboration: Conversation, Synchronization, Coalitions
18. Society: Profanity, Divisions, Culture, Stories
19. Minds: Humans, Unhumans, Intelligence, Psychology
VI. Implications
20. Variations: Trends, Alternatives, Transition, Aliens
21. Choices: Evaluation, Policy, Charity, Success
22. Finale: Critics, Conclusion, References, Thanks
23. Appendix: Motivation, Method, Biases

GD Star Rating
loading...
Tagged as: ,

Advice Shows Status

When we give and seek advice, we think and talk as if we mainly just want to exchange useful information on the topic at hand. But seeking someone’s advice shows them respect, especially if that advice is followed. And in fact, a lot of our advice giving and taking behavior can be better understood in such status terms:

When making decisions together, we tend to give everyone an equal chance to voice their opinion. To make the best decisions, however, each opinion must be scaled according to its reliability. Using behavioral experiments and computational modelling, we tested (in Denmark, Iran, and China) the extent to which people follow this latter, normative strategy. We found that people show a strong equality bias: they weight each other’s opinion equally regardless of differences in their reliability, even when this strategy was at odds with explicit feedback or monetary incentives. (more)
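
For concreteness, here is a minimal sketch of that normative strategy, under the common assumption that advisers’ errors are independent and Gaussian, in which case the optimal combination weights each estimate by its inverse variance; the numbers are illustrative:

```python
# Minimal sketch: reliability-weighted vs. equal-weighted opinion pooling.
# Assumes advisers' errors are independent and Gaussian, so weighting each
# estimate by 1/variance is optimal. All numbers are illustrative.

def reliability_weighted(estimates, variances):
    """Inverse-variance weighted average of noisy estimates."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

def equal_weighted(estimates):
    """The equality-bias answer: everyone's opinion counts the same."""
    return sum(estimates) / len(estimates)

# Adviser A (variance 1) is far more reliable than adviser B (variance 9).
print(reliability_weighted([10.0, 20.0], [1.0, 9.0]))  # 11.0: mostly follow A
print(equal_weighted([10.0, 20.0]))                    # 15.0: equality bias
```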

Individuals in powerful positions are the worst offenders. According to one experimental study, they feel competitive when they receive advice from experts, which inflates their confidence and leads them to dismiss what the experts are telling them. High-power participants in the study ignored almost two-thirds of the advice they received. Other participants (the control and low-power groups) ignored advice about half as often. … Research shows that they value advice more if it comes from a confident source, even though confidence doesn’t signal validity. Conversely, seekers tend to assume that advice is off-base when it veers from the norm or comes from people with whom they’ve had frequent discord. (Experimental studies show that neither indicates poor quality.) Seekers also don’t embrace advice when advisers disagree among themselves. And they fail to compensate sufficiently for distorted advice that stems from conflicts of interest, even when their advisers have acknowledged the conflicts and the potential for self-serving motives. … Though many people give unsolicited advice, it’s usually considered intrusive and seldom followed. Another way advisers overstep is to chime in when they’re not qualified to do so. … many advisers take offense when their guidance isn’t accepted wholesale, curtailing further discussion. (more)

Bowing To Elites

Imagine that you are a politically savvy forager in a band of size thirty, or a politically savvy farmer near a village of size one thousand. You have some big decisions to make, including who to put in various roles, such as son-in-law, co-hunter, employer, renter, cobbler, or healer. Many people may see your choices. How should you decide?

Well, first you meet potential candidates in person and see how much you intuitively respect them, get along with them, and can agree on relative status. It isn’t enough for you to have seen their handiwork; you want to make allies of these associates, and that won’t work without respect, chemistry, and peace. Second, you see what your closest allies think of candidates. You want to be allies together, so it is best if they also respect and get along with your new allies.

Third, if there is a strong leader in your world, you want to know what that leader thinks. Even if this leader says explicitly that you can do anything you like, that they don’t care, you’ll look closely to infer their preferences if you get any hint whatsoever that they do care. And you’ll avoid doing anything they’d dislike too much, unless your alliance is ready to mount an overt challenge.

Fourth, even if there is no strong leader, there may be a dominant coalition encompassing your band or town. This is a group of people who tend to support each other, get deference from others, and win in conflicts. We call these people “elites.” If your world has elites, you’ll want to treat their shared opinions like those of a strong leader. If elites would gossip disapproval of a choice, maybe you don’t want it.

What if someone sets up objective metrics to rate people in suitability for the roles you are choosing? Say an archery contest for picking hunters, or a cobbler contest to pick cobblers. Or public track records of how often healer patients die, or how long cobbler shoes last. Should you let it be known that such metrics weigh heavily in your choices?

You’ll first want to see what your elites or leader think of these metrics. If they are enthusiastic, then great, use them. And if elites strongly oppose, you’d best only use them when elites can’t see. But what if elites say, “Yeah, you could use those metrics, but watch out, because they can be misleading and create perverse incentives. And don’t forget that we elites have set up this whole other helpful process for rating people in such roles.”

Well in this case you should worry that elites are jealous of this alternative metric displacing their advice. They like the power and rents that come from advising on who to pick for what. So elites may undermine this metric, and punish those who use it.

When elites advise people on who to pick for what, they will favor candidates who seem loyal to elites, and punish those who seem disloyal, or who aren’t sufficiently deferential. But since most candidates are respectful enough, elites often pick those they think will actually do well in the role. All else equal, that will make them look good, and help their society. While their first priority is loyalty, looking good is often a close second.

Since humans evolved to be unconscious political savants, this is my basic model to explain the many puzzles I listed in my last post. When choosing lawyers, doctors, real estate agents, pundits, teachers, and more, elites put many obstacles in the way of objective metrics like track records, contests, or prediction markets. Elites instead suggest picking via personal impressions, personal recommendations, and school and institution prestige. We ordinary people mostly follow this elite advice. We don’t seek objective metrics, and instead use elite endorsements, such as the prestige of where someone went to school or now works. In general we favor those who elites say have the potential to do X, over those who actually did X.

This all pushes me to more favor two hypotheses:

  1. We choose people for roles mostly via evolved mental modules designed mainly to do well at coalition politics. The resulting system does often pick people roughly well for their roles, but more as a side effect than as a direct effect.
  2. In our society, academia reigns as a high elite, especially on advice for who to put in what roles. When ordinary people see another institution framed as competing directly with academia, that other institution loses. Pretty much all prestigious institutions in our society are seen as allied with academia, not as competing with it. Even religions, often disapproved of by academics, rely on academic seminary degrees, and strongly push kids to gain academic prestige.

We like to see ourselves as egalitarian, resisting any overt dominance by our supposed betters. But in fact, unconsciously, we have elites and we bow to them. We give lip service to rebelling against them, and they pretend to be beaten back. But in fact we constantly watch out for any actions of ours that might seem to threaten elites, and we avoid them like the plague. Which explains our instinctive aversion to objective metrics in people choice, when such metrics compete with elite advice.

Added 8am: I’m talking here about how we intuitively react to the possibility of elite disapproval; I’m not talking about how elites actually react. Also, our intuitive reluctance to embrace track records isn’t strong enough to prevent us from telling specific stories about our specific achievements. Stories are way too big in our lives for that. We already have norms against bragging, and yet we still manage to make ourselves look good in stories.

Dissing Track Records

Years ago I was surprised to learn that patients usually can’t pick docs based on track records of previous patient outcomes. Because, people say, that would invade privacy and create bad incentives for docs picking patients. They suggest instead relying on personal impressions, wait times, “bedside” manner, and the prestige of a doc’s med school or hospital. (Yeah, those couldn’t possibly make bad incentives.) Few ever study if such cues correlate with patient outcomes, and we actively prevent the collection of patient satisfaction track records.

For lawyers, most trials are in the public record, so privacy shouldn’t be an obstacle to getting track records. So people pick lawyers based on track records, right? Actually no. People who ask are repeatedly told: no, as a practical matter you can’t get lawyer track records, so just pick lawyers based on personal impressions or the prestige of their law firm or school. (Few study if those correlate with client outcomes.)

A new firm Premonition has been trying to change that:

Despite being public record, court data is surprisingly inaccessible in bulk, nor is there a unified system to access it, outside of the Federal Courts. Clerks of courts refused Premonition requests for case data. Resolved to go about it the hard way, Unwin … wrote a web crawler to mine courthouse web sites for the data, read it, then analyze it in a database. …

Many publications run “Top Lawyer” lists, people who are recognized by their peers as being “the best”. Premonition analyzed the win rates of these attorneys, it turned out most were average. The only way that they stood out was a disproportionate number of appealed and re-opened cases, i.e. they were good at dragging out litigation. They discovered that even the law firms themselves were poor at picking litigators. In a study of the United Kingdom Court of Appeals, it found a slight negative correlation of -0.1 between win rates and re-hiring rates, i.e. a barrister 20% better than their peers was actually 2% less likely to be re-hired! … Premonition was formed in March 2014 and expected to find a fertile market for their services amongst the big law firms. They found little appetite and much opposition. …

The system found an attorney with 22 straight wins before the judge – the next person down was 7. A bit of checking revealed the lawyer was actually a criminal defense specialist who operated out of a strip mall. … The firm claims such outliers are far from rare. Their web site … shows an example of an attorney with 32 straight wins before a judge in Orange County, Florida. (more)
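
For concreteness, here is a minimal sketch of the kind of aggregation Premonition describes: per-lawyer win rates, and win streaks before a particular judge, computed from crawled case records. The records and field names below are hypothetical, not Premonition’s actual schema:

```python
# Hypothetical case records of the kind a court-site crawler might yield.
from collections import defaultdict

cases = [  # assumed to be in chronological order; data is made up
    {"lawyer": "A. Smith", "judge": "Hon. Lee", "won": True},
    {"lawyer": "A. Smith", "judge": "Hon. Lee", "won": True},
    {"lawyer": "A. Smith", "judge": "Hon. Lee", "won": False},
    {"lawyer": "B. Jones", "judge": "Hon. Lee", "won": True},
]

def win_rates(cases):
    """Fraction of listed cases each lawyer won."""
    wins, totals = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c["lawyer"]] += 1
        wins[c["lawyer"]] += c["won"]
    return {lawyer: wins[lawyer] / totals[lawyer] for lawyer in totals}

def longest_streak(cases, lawyer, judge):
    """Longest run of consecutive wins by one lawyer before one judge."""
    best = run = 0
    for c in cases:
        if c["lawyer"] == lawyer and c["judge"] == judge:
            run = run + 1 if c["won"] else 0
            best = max(best, run)
    return best

print(win_rates(cases))                               # {'A. Smith': ~0.67, 'B. Jones': 1.0}
print(longest_streak(cases, "A. Smith", "Hon. Lee"))  # 2
```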

As a society we supposedly coordinate in many ways to make medicine and law more effective, such as via funding med research, licensing professionals, and publishing legal precedents. Yet we don’t bother to coordinate to create track records for docs or lawyers, and in fact our public representatives tend to actively block such things. And strikingly: customers don’t much care. A politician who proposed to dump professional licensing would face outrage, and lose. A politician who proposed to post public track records would instead lose by being too boring.

On reflection, these examples are part of a larger pattern. For example, I’ve mentioned before that a media firm had a project to collect track records of media pundits, but then abandoned the project once it realized that this would reduce reader demand for pundits. Readers are instead told to pick pundits based on their wit, fame, and publication prestige. If readers really wanted pundit track records, some publication would offer them, but readers don’t much care.

Attempts to publish track records of school teachers based on student outcomes have produced mostly opposition. Parents are instead encouraged to rely on personal impressions and the prestige of where the person teaches or went to school. No one even considers doing this for college teachers; at most we survey student satisfaction just after a class ends (and don’t even do that right).

Regarding student evaluations, we coordinate greatly to make standard widely accessible tests for deciding who to admit to schools. But we have almost no such measures of students when they leave school for work. Instead of showing employers a standard measure of what students have learned, we tell employers to rely on personal impressions and the prestige of the school from which the student came. Some have suggested making standard what-I-learned tests, but few are interested, including employers.

For researchers like myself, publications and job position are measures of endorsements by prestigious authorities. Citations are a better measure of the long term impact of research on intellectual progress, but citations get much less attention in evaluations of researchers. Academics don’t put their citation count on their vita (= resume), and when a reporter decides which researcher to call, or a department decides who to hire, they don’t look much at citations. (Yes, I look better by citations than by publications or jobs, and my prestige is based more on the latter.)

Related is the phenomenon of people being more interested in others said to have the potential to achieve X, than in people who have actually achieved X. Related also is the phenomenon of firms being reluctant to use formulaic measures of employee performance that aren’t mediated mostly by subjective boss evaluations.

It seems to me that there are striking common patterns here, and I have in mind a common explanation for them. But I’ll wait to explain that in my next post. Till then, how do you explain these patterns? And what other data do we have on how we treat track records elsewhere?

Added 22Mar: Real estate sales are also technically in the public record, and yet it is hard for customers to collect comparable sales track records for real estate agents, and few seem to care enough to ask for them.

Ford’s Rise of Robots

In the April issue of Reason magazine I review Martin Ford’s new book Rise of the Robots:

Basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It’s as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now. … [But] In the end, it seems that Martin Ford’s main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction. (more)

I’ll admit Ford is hardly alone, and he ably summarizes what are quite common views. Even so, I’m skeptical.

The Data We Need

Almost all research into human behavior focuses on particular behaviors. (Yes, not extremely particular, but also not extremely general.) For example, an academic journal article might focus on professional licensing of dentists, incentive contracts for teachers, how Walmart changes small towns, whether diabetes patients take their medicine, how much we spend on xmas presents, or if there are fewer modern wars between democracies. Academics become experts in such particular areas.

After people have read many articles on many particular kinds of human behavior, they often express opinions about larger aggregates of human behavior. They say that government policy tends to favor the rich, that people would be happier with less government, that the young don’t listen enough to the old, that supply and demand is a good first approximation, that people are more selfish than they claim, or that most people do most things with an eye to signaling. Yes, people often express opinions on these broader subjects before they read many articles, and their opinions change suspiciously little as a result of reading many articles. But even so, if asked to justify their more general views academics usually point to a sampling of particular articles.

Much of my intellectual life in the last decade has been spent in the mode of collecting many specific results, and trying to fit them into larger simpler pictures of human behavior. So both I and the academics described above present ourselves, in essence, as using the many results in academic papers about particular human behaviors as data to support our broader inferences about human behavior. But we do almost all of this informally, via our vague impressionistic memories of the gist of the many articles we’ve read, and our intuitions about how consistent various more general claims seem with those particulars.

Of course there is nothing especially wrong with intuitively matching data and theory; it is what we humans evolved to do, and we wouldn’t be such a successful species if we couldn’t at least do it tolerably well sometimes. It takes time and effort to turn complex experiences into precise sharable data sets, and to turn our theoretical intuitions into precise testable formal theories. Such efforts aren’t always worth the bother.

But most of these academic papers on particular human behaviors do in fact pay the bother to substantially formalize their data, their theories, or both. And if it is worth the bother to do this for all of these particular behaviors, it is hard to see why it wouldn’t be worth the bother for the broader generalizations we make from them. Thus I propose: let’s create formal data sets where the data points are particular categories of human behavior.

To make my proposal clearer, let’s for now restrict attention to explaining government regulatory policies. We could create a data set where the data points are particular kinds of products and services that governments now provide, subsidize, tax, advise, restrict, etc. For such data points we could start to collect features into a formal data set. Such features could say how long that sort of thing has been going on, how widely it is practiced around the world, how variable that practice has been over space and time, how familiar ordinary people today are with its details, what sort of justifications people offer for it, what sort of emotional associations people have with it, how much we spend on it, and so on. We might also include anything we know about how such things correlate with age, gender, wealth, latitude, etc.

Generalizing to human behavior more broadly, we could collect a data set of particular behaviors, many of which seem puzzling at least to someone. I often post on this blog about puzzling behaviors. Each such category of behaviors could be one or more data points in this data set. And relevant features to code about those behaviors could be drawn from the features we tend to invoke when we try to explain those behaviors. Such as how common is that behavior, how much repeated experience do people have with it, how much do they get to see about the behavior of others, how strong are the emotional associations, how much would it make people look bad to admit to particular motives, and so on.

Now all this is of course much easier said than done. It is a lot of work to look up various papers and summarize their key results as entries in this data set, or even just to look at real world behaviors and put them into simple categories. It is also work to think carefully about how to usefully divide up the space of actions and features. First efforts will no doubt get it wrong in part, and have to be partially redone. But this is the sort of work that usually goes into all the academic papers on particular behaviors. Yes it is work, but if those particular efforts are worth the bother, then this should be as well.

As a first cut, I’d suggest just picking some more limited category, such as perhaps government regulations, collecting some plausible data points, making some guesses about what useful features might be, and then just doing a quick survey of some social scientists where they each fill in the data table with their best guesses for data point features. If you ask enough people, you can average out a lot of individual noise, and at least have a data set about what social scientists think are features of items in this area. With this you could start to do some exploratory data analysis, and start to think about what theories might well account for the patterns you see.
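
As a minimal sketch of that first cut, suppose each surveyed social scientist fills in the same behaviors-by-features table and we average their guesses; all names and numbers below are made-up placeholders:

```python
# Minimal sketch of the first-cut survey: each respondent fills in the same
# behaviors-by-features table with guessed ratings (say 0-10), and we average
# across respondents to wash out individual noise. All data is placeholder.

from statistics import mean

features = ["years practiced", "worldwide spread", "public familiarity"]

surveys = [  # one table of guesses per surveyed social scientist
    {"medical licensing": [6, 7, 8], "alcohol taxes": [9, 8, 9], "zoning rules": [5, 6, 7]},
    {"medical licensing": [7, 6, 8], "alcohol taxes": [8, 9, 8], "zoning rules": [4, 7, 6]},
]

# Average across respondents, feature by feature.
dataset = {
    behavior: [mean(s[behavior][i] for s in surveys) for i in range(len(features))]
    for behavior in surveys[0]
}
for behavior, row in dataset.items():
    print(behavior, dict(zip(features, row)))
```

With even this crude table in hand, exploratory data analysis (correlations among features, clusters of similar policies) can begin.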

Now one obvious problem with my proposal is that while it looks time consuming and tedious, it isn’t obviously impressive. Researchers who specialize in particular areas will complain about your data entries related to their areas, and you won’t be able to satisfy them all. So you will end up with a chorus of critics saying your data is all wrong, and your efforts will look too low brow to cow them with your impressive tech. So I can see why this hasn’t been done much. Even so, I think this is the data set we need.

Life Before Earth

This paper is two years old now, but still seems big news to me:

[Figure: genetic complexity (log scale) versus time, extrapolated back to the origin of life]

Genetic complexity, roughly measured by the number of non-redundant functional nucleotides … Linear regression of genetic complexity (on a log scale) extrapolated back to just one base pair suggests the time of the origin of life = 9.7 ± 2.5 billion years ago. … There was no intelligent life in our universe at the time of the origin of Earth, because the universe was 8 billion years old at that time, whereas the development of intelligent life requires ca. 10 billion years of evolution. (source; discussion; HT Stuart LaForge)

That seems remarkably close to the age of the universe, 13.8 billion years. Yes it might be a coincidence, but we have other reasons to suspect life began before Earth. So I take this as a substantial if hardly overwhelming confirmation.
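
For intuition, here is a minimal sketch of the method the quote describes: fit a line to log genetic complexity versus time, then extrapolate back to one base pair. The data points below are rough placeholders I made up, not the paper’s curated values, so the output only loosely echoes its 9.7 billion year estimate:

```python
# Sketch of the paper's extrapolation: regress log10(genetic complexity, in
# functional base pairs) on time, then solve for when complexity was one base
# pair (log10 = 0). The points below are illustrative placeholders only.

import numpy as np

ages_gya = np.array([3.5, 2.0, 0.5, 0.0])        # billions of years ago
complexity_bp = np.array([5e5, 1e7, 3e8, 3e9])   # made-up genome complexities

t = -ages_gya                                    # 0 = today, negative = the past
slope, intercept = np.polyfit(t, np.log10(complexity_bp), 1)

origin = -intercept / slope                      # t where the fit hits log10 = 0
print(f"extrapolated origin of life: {-origin:.1f} billion years ago")  # ~8.9
```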

Student Status Puzzle

Grad students vary in their research autonomy. Some students are very willing to ask for advice and to listen to it carefully, while others put a high priority on generating and pursuing their own research ideas their own way. This varies with personality, in that more independent people pick more independent strategies. It varies over time, in that students tend to defer at first, and then later in their careers switch to making more independent choices. It also varies by topic; students defer more in more technical topics, and where topic choices need more supporting infrastructure, such as with lab experiments. It also varies by level of abstraction; students defer more on how to pursue a project than on which project ideas to pursue.

Many of these variations seem roughly explained by near-far theory, in that people defer more when near, and less when far. These variations seem at least plausibly justifiable, though doubts make sense too. Another kind of variation is more puzzling, however: students at top schools seem more deferential than those at lower rank schools.

Top students expect to get lots of advice, and they take it to heart. In contrast, students at lower ranked schools seem determined to generate their own research ideas from deep in their own “soul”. This happens not only for picking a Ph.D. thesis, but even just for picking topics of research papers assigned in classes. Students seem as averse to getting research topic advice as they would be to advice on with whom to fall in love. Not only are they wary of getting research ideas from professors, they even fear that reading academic journals will pollute the purity of their true vision. It seems a moral matter to them.

Of course any one student might be correct that they have a special insight into what topics are neglected by their local professors. But the overall pattern here seems perverse; people who hardly understand the basics of a field see themselves as better qualified to identify feasible interesting research topics than those nearby with higher status who have been in the field for decades.

One reason may be overconfidence; students think their profs deserve to be at a lower ranked school more than they themselves do, and so estimate a smaller quality gap between themselves and their profs. More data supporting this: students also seem to accept the relative status ranking of profs at their own school, and so focus most of their attention on the locally top status profs. It is as if each student thinks that they personally have so far been assigned too low a status, but thinks most others have been correctly assigned.

Another reason may be akin to our preferring potential to achievement; students try to fulfill the heroic researcher stories they’ve heard, wherein researchers get much credit for embracing ideas early that others come to respect only later. Which can make some sense. But these students are trying to do this way too early in their careers, and they go way too far with it. Being smarter and knowing more, students at top schools understand this better.
