Tag Archives: Academia

Talks Not About Info

You can often learn about your own world by first understanding some other world, and then asking if your world is more like that other world than you had realized. For example, I just attended WorldCon, the top annual science fiction convention, and patterns that I saw there more clearly also seem echoed in wider worlds.

At WorldCon, most of the speakers are science fiction authors, and the modal emotional tone of the audience is one of reverence. Attendees love science fiction, revere its authors, and seek excuses to rub elbows with them. But instead of just having social mixers, authors give speeches and sit on panels where they opine on many topics. When they opine on how to write science fiction, they are of course experts, but in fact they mostly prefer to opine on other topics. By presenting themselves as experts on a great many future, technical, cultural, and social topics, they help preserve the illusion that readers aren’t just reading science fiction for fun; they are also part of important larger conversations.

When science fiction books overlap with topics in space, physics, medicine, biology, or computer science, their authors often read up on those topics, and so can be substantially more informed than typical audience members. And on such topics actual experts will often be included on the agenda. Audiences may even be asked if any of them happen to have expertise on such a topic.

But the more that a topic leans social, and has moral or political associations, the less inclined authors are to read expert literatures on that topic, and the more they tend to just wing it and think for themselves, often on their feet. They less often add experts to the panel or seek experts in the audience. And relatively neutral analysis tends to be displaced by position taking – they find excuses to signal their social and political affiliations.

The general pattern here is: an audience has big reasons to affiliate with speakers, but prefers to pretend those speakers are experts on something, and they are just listening to learn about that thing. This is especially true on social topics. The illusion is exposed by facts like speakers not being chosen for knowing the most about a subject discussed, and those speakers not doing much homework. But enough audience members are ignorant of these facts to provide a sufficient fig leaf of cover to the others.

This same general pattern repeats all through the world of conferences and speeches. We tend to listen to talks and panels full of not just authors, but also generals, judges, politicians, CEOs, rich folks, athletes, and actors. Even when those are not the best informed, or even the most entertaining, speakers on a topic. And academic outlets tend to publish articles and books more for being impressive than for being informative. However, enough people are ignorant of these facts to let audiences pretend that they mainly listen to learn and get information, rather than to affiliate with the statusful.

Added 22Aug: We feel more strongly connected to people when we together visibly affirm our shared norms/values/morals. Which explains why speakers look for excuses to take positions.

Sycophantry Masquerading As Bargains

The Catholic Church used to sell “indulgences”; you gave them cash and they gave you the assurance that God would let you sin without punishment. If you are at all suspicious about whether this church can actually deliver on their claim, this seems a bad deal. You give them something tangible and clearly valuable, and they give you a vague promise on something you can’t see, and can’t even check if anyone has ever received.

We make similar bad “bargains” with a few kinds of workers, to whom we grant extraordinary privileges of “self-regulation.” That is, we let certain “professionals” run their own organizations which tell us how their job is to be done, and who can do it. In some areas, such as with doctors, these judgements are enforced by law: you can only buy medical services approved by doctors, and can only buy such services from those whom the official medical organizations label “doctors.” In other areas, such as with academics, these judgements are more enforced by our strong eagerness to associate with high prestige professionals: most everyone just accepts the word of key academic organizations on who is a good academic.

There is a literature which frames this as a “grand bargain”. The philosopher Donald Schön says:

In return for access to their extraordinary knowledge in matters of great human importance, society has granted them [professionals] a mandate for social control in their fields of specialization, a high degree of autonomy in their practice, and a license to determine who shall assume the mantle of professional authority.

In their book The Future of the Professions: How Technology Will Transform the Work of Human Experts, Richard and Daniel Susskind elaborate:

In acknowledgement of and in return for their expertise, experience, and judgement, which they are expected to apply in delivering affordable, accessible, up-to-date, reassuring, and reliable services, and on the understanding that they will curate and update their knowledge and methods, train their members, set and enforce standards for the quality of their work, and that they will only admit appropriately qualified individuals into their ranks, and that they will always act honestly, in good faith, putting the interests of clients ahead of their own, we (society) place our trust in the professions in granting them exclusivity over a wide range of socially significant services and activities, by paying them a fair wage, by conferring upon them independence, autonomy, rights of self-determination, and by according them respect and status.

Notice how in this supposed bargain, what we give the professionals is concrete and clearly valuable, while what they give us (over what we’d get without the deal) is vague and very hard for us to check. Like an indulgence. The Susskinds claim that while this bargain has been a good deal so far, we will soon cancel it:

We predict that increasingly capable machines, operating on their own or with non-specialist users, will take on many of the tasks that have been the historic preserve of the professions. We anticipate an ‘incremental transformation’ in the way that we produce and distribute expertise in society. This will lead eventually to a dismantling of the traditional professions.

This seems seriously mistaken to me. There is actually no bargain; there is just the rest of us submitting to professionals’ prestige. Cheaper yet outcome-effective substitutes for expensive professionals have long been physically available, and yet we have mostly not chosen those substitutes, due to our eagerness to affiliate with prestigious professionals. We don’t choose nurses who can do primary care as well as doctors, and we don’t watch videos of the best professors, from which we could learn as much as from attending typical lectures in person. And we aren’t interested in outcome track records for our lawyers. The existence of even more such future substitutes won’t change this situation much.

Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, Professor at MIT of both Aeronautics and Astronautics, and also of the History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals’ marketing of points of view also usually ignores competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.

Could Gambling Save Psychology?

A new PNAS paper:

Prediction markets set up to estimate the reproducibility of 44 studies published in prominent psychology journals and replicated in The Reproducibility Project: Psychology predict the outcomes of the replications well and outperform a survey of individual forecasts. … Hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%). … Prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. (more; see also coverage at 538, Atlantic, Science, Gelman)

We’ve had enough experiments with prediction markets over the years, both lab and field experiments, to not be at all surprised by these findings of calibration and superior accuracy. If so, you might ask: what is the intellectual contribution of this paper?

When one is trying to persuade groups to try prediction markets, one encounters consistent skepticism about experiment data that is not on topics very close to the proposed topics. So one value of this new data is to help persuade academic psychologists to use prediction markets to forecast lab experiment replications. Of course for this purpose the key question is whether enough academic psychologists were close enough to the edge of making such markets a continuing practice that it was worth the cost of a demonstration project to create closely related data, and so push them over the edge.

I expect that most ordinary academic psychologists need stronger incentives than personal curiosity to participate often enough in prediction markets on whether key psychology results will be replicated (conditional on such replication being attempted). Such additional incentives could come from:

  1. direct monetary subsidies for market trading, such as via subsidized market makers,
  2. traders with higher than average trading records bragging about it on their vitae, and getting hired etc. more because of that, or
  3. prediction market prices influencing key decisions such as what articles get published where, who gets what grants, or who gets what jobs.
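The first of these options, a subsidized automated market maker, can be made concrete. A standard choice is a logarithmic market scoring rule (LMSR) market maker; the minimal Python sketch below is illustrative only, with the liquidity parameter `b`, class name, and all numbers made up for the example, and a real deployment would add accounts, selling, and much else.

```python
import math

class LMSRMarketMaker:
    """Minimal logarithmic market scoring rule (LMSR) market maker for a
    binary claim such as "this result will replicate if attempted".
    The sponsor's worst-case subsidy is b * ln(2)."""

    def __init__(self, b=100.0):
        self.b = b        # liquidity: higher b = deeper market, bigger subsidy
        self.q_yes = 0.0  # outstanding shares paying 1 if the claim holds
        self.q_no = 0.0   # outstanding shares paying 1 otherwise

    def _cost(self):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(
            math.exp(self.q_yes / self.b) + math.exp(self.q_no / self.b))

    def price_yes(self):
        """Current market probability that the claim holds."""
        e_yes = math.exp(self.q_yes / self.b)
        return e_yes / (e_yes + math.exp(self.q_no / self.b))

    def buy_yes(self, shares):
        """Buy YES shares; returns the trader's cost (cost-function difference)."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

mm = LMSRMarketMaker(b=100.0)  # price starts at 0.5
cost = mm.buy_yes(50.0)        # a trader expecting replication buys YES
# price_yes() has now risen above 0.5, toward the trader's belief
```

Because the maker always quotes a price, anyone who disagrees with the current probability has a direct monetary incentive to trade against it; the sponsor’s bounded loss is exactly the subsidy that pays for the information.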

For example, imagine that one or more top psychology journals used prediction market chances that an empirical paper’s main result(s) would be confirmed (conditional on an attempt) as part of deciding whether to publish that paper. In this case the authors of a paper and their rivals would have incentives to trade in such markets, and others could be enticed to trade if they expected trades by insiders and rivals alone to produce biased estimates. This seems a self-reinforcing equilibrium; if good people think hard before participating in such markets, others could see those market prices as deserving of attention and deference, including in the journal review process.

However, the existing equilibrium also seems possible, where there are only a few small markets on such topics off to the side, markets that few pay much attention to and where there are few resources and little status to be won. This equilibrium arguably results in less intellectual progress for any given level of research funding, but of course progress-inefficient academic equilibria are quite common.

Bottom line: someone is going to have to pony up some substantial scarce academic resources to fund an attempt to move this part of academia to a better equilibrium. If whoever funded this study didn’t plan on funding this next step, I could have told them ahead of time that they were mostly wasting their money in funding this study. This next move won’t happen without a push.

Why Have Opinions?

I just surprised some people here at a conference by saying that I don’t have opinions on abortion or gun control. I have little use for such opinions, and so haven’t bothered to form them. Since that attitude seems to be unusual among my intellectual peers, let me explain myself.

I see four main kinds of reasons to have opinions on subjects:

  • Decisions – Sometimes I need to make concrete decisions where the best choice depends on particular key facts or values. In such cases I am forced to have opinions on those subjects, in order to make good decisions. I may well just adopt, without much reflection, the opinions of some standard expert source. I have to make a lot of decisions and don’t have much time to reflect. But even so, I must have an opinion. And my incentives here tend to be toward having true opinions.
  • Socializing – A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions. Here my incentives are to have opinions that others find interesting or loyal, which is less strongly (but not zero) correlated with truth.
  • Research – As a professional intellectual, I specialize in particular topics. On those topics I generate opinions together with detailed supporting justifications for those opinions. I am evaluated on the originality, persuasiveness, and impressiveness of these opinions and justifications. These incentives are somewhat more strongly, but still only somewhat, correlated with truth.
  • Exploration – I’m not sure what future topics to research, and so continually explore a space of related topics which seem like they might have the potential to become promising research areas for me. Part of that process of exploration involves generating tentative opinions and justifications. Here it is even less important that these opinions be true than that they help reveal interesting, neglected areas especially well-suited to my particular skills and styles.

Most topics that are appropriate for research have little in the way of personal decision impact. So intellectuals focus more on research reasons for such topics. Most intellectuals also socialize a lot, so they also generate opinions for social reasons. Alas most intellectuals generate these different types of opinions in very different ways. You can almost hear their mind gears shift when they switch from being careful on research topics to being sloppy on social topics. Most academics have a pretty narrow speciality area, which they know isn’t going to change much, so they do relatively little exploration that isn’t close to their specialty area.

Research opinions are my best contribution to the world, and so are where I should focus my altruistic efforts. (They also give my best chance for fame and glory.) So I try to put less weight on socializing reasons for my opinions, and more weight on the exploration reasons. As long as I see little prospect of my research going anywhere near the abortion or gun control topics, I won’t explore there much. Topics diagnostic of left vs. right ideological positions seem especially unlikely to be places where I could add something useful to what everyone else is saying. But I do explore a wide range of topics that seem plausibly related to areas in which I have specialized, or might specialize. I have specialized in far more different areas than have most academics. And I try to keep myself honest by looking for plausible decisions I might make related to all these topics, though that tends to be hard. If we had more prediction markets this could get much easier, but alas we do not.

Of course if you care less about research, and more about socializing, your priorities could easily differ from mine.

Learn By Doing, Not Watching

Decades ago the famous “gondola kitten” experiment demonstrated that one must actively explore if one is to learn. One littermate in the set-up was free to explore its environment while another hung passively suspended in a contraption that moved in parallel with the exploring kitten. The gondola passenger saw everything the exploring kitten did but could not initiate any action. The mobile kitten discovered the world for itself while the passive kitten was presented a fait accompli-world in the same way that screen images are passively delivered to us. The passive kitten learned nothing. Since this classic experiment we have come to appreciate how crucial self-directed exploration is to understanding the world.

This holds true for humans as well as kittens. In an update of the gondola kitten experiment, researchers recently videotaped an American child’s Chinese-speaking nanny so that a second child saw and heard exactly what the first one did. The second child learned no Chinese whatsoever, whereas the first child picked up quite a lot. (more)

This supports my suggestion to Chase Your Reading; you learn to figure things out more by trying to figure things out yourself, and less by passively listening while writers figure things out in front of you.

What Does Harvard Do Right?

Is Harvard the top rated college because it is the most clever in deciding who to admit? Not obviously. Instead, in the short run Harvard can gain plenty from a positive feedback loop: the best people apply and prefer to go there, which adds a glow to those who graduate from there, which makes the best want to apply, and so on.

While this seems an obvious and simple story, I must admit I haven’t been thinking enough in such terms, probably in part because I haven’t seen formal economic models that capture this story well. I thank venture capital (VC) titan Marc Andreessen for clarifying. Here is part of a 14 May twitter chat between him (MA) and myself (RH):

RH: VC is dominated by a few firms. What is the scale economy? Few geniuses? Info of seeing most pitches? Ability to create new fashions? Other?

MA: Core dynamic: A few firms have positive selection on their side; the other firms have adverse selection working against them.

The battle among VC firms is less “who is smarter?” than “who do the best founders approach first?”.

RH: OK, but why approach the top few first? What is more attractive about being funded by them vs others?

MA: Founders care about the VC brand halo because potential employees, potential customers, and other potential investors care.

RH: Is it just that top VC get first pick, so they are better picks, so their picks get halo by being in that pool, rinse & repeat?

MA: Yes, that’s the core positive feedback loop. How it starts is less meaningful than how it perpetuates.

The main historical driver of positive selection is prior success: a halo branding effect that new startups seek.

In essence, a new startup uses its VC’s brand as a credibility bridge until the startup establishes its own brand.

RH: Sure, but the question is why some VC brands shine brighter. Their money isn’t any more green.

MA: They have an aura of success as a consequence of having previously funded successful startups.

Arguably these dynamics are changing in real time in some interesting ways:

RH: Is there a prediction on if VC industry will become more or less concentrated as result of these changes?

MA: My belief is that VC is restructuring the same way retail stores, law firms, accounting firms, and investment banking did:

This seems to be the hallmark of a professionalizing industry being run properly. You either go big or you go specialist.

RH: I guess the key idea is that there are big scale economies with doing standard tasks, but big diseconomies for specialized tasks.

MA: Yes, but with the subtlety that the well-run scale players are also excellent at many of the specialized tasks.

RH: Many, but not most, or the specialized shops couldn’t exist long.

MA: This is exactly what happened in the talent agency business in the 1980s and 1990s. The big agencies got great at many things.

The specialized shops have to stay small and stay laser-focused on particular areas of specialized advanced competency.

But of course similarly, a scaled franchise firm that gets sloppy runs the same risk, and can degrade itself into the middle tier.

RH: Summary: long trend is to scale given tasks, but also task specialization. Overall scale rises, but falls locally when specialize.

MA: Right, exactly. And this explains the size distribution — the scaled players have to be big; the boutiques have to stay small.

You see this in investment banking. You either work with Goldman Sachs or you work with a small boutique specialist bank.

RH: This makes sense, but I’m not sure we have any formal models that predict this correlation nicely.

This same sort of story also seems to work in the short run to explain why some journals have higher prestige. It is not so much that top journal editors are more clever, or use a smarter system to review submissions. It is just that the best papers are submitted there first, which makes the average quality of their publications higher, and so on.

In the long run, we see changes in the prestige rankings of these colleges, journals, investment banks, and venture capital funds. The key question is: what determines those long run changes? Do competitors with slightly better ways to evaluate or help submissions slowly win out over others? Or do other factors dominate?
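A toy simulation of this loop (my own construction, not a model from the conversation above; all parameters are made up) shows how a tiny initial prestige edge perpetuates itself when the best candidates approach the leader first:

```python
import random

def simulate(rounds=200, candidates=100, capacity=20, seed=0):
    """Two equally skilled venues; each round the higher-prestige venue gets
    first pick of candidates, and each venue's prestige is then just the mean
    quality of those it accepted. Evaluation skill plays no role at all."""
    rng = random.Random(seed)
    prestige = [0.51, 0.50]  # venue 0 starts with a tiny, arbitrary edge
    for _ in range(rounds):
        # candidate qualities this round, best first
        pool = sorted((rng.random() for _ in range(candidates)), reverse=True)
        leader, follower = sorted(range(2), key=lambda v: -prestige[v])
        prestige[leader] = sum(pool[:capacity]) / capacity
        prestige[follower] = sum(pool[capacity:2 * capacity]) / capacity
    return prestige

p = simulate()
# p[0] ends far above p[1]: the 0.01 head start became a stable large gap.
```

Both venues evaluate identically here; the leader’s better intake alone sustains its lead, which matches the point that how the loop starts matters less than how it perpetuates.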

Disciplines As Contrarian Correlators

I’m often interested in subjects that fall between disciplines, or more accurately that intersect multiple disciplines. I’ve noticed that it tends to be harder to persuade people of claims in these areas, even when one is similarly conservative in basing arguments on standard accepted claims from relevant fields.

One explanation is that people realize that they can’t gain as much prestige from thinking about claims outside their main discipline, so they just don’t bother to think much about such claims. Instead they default to rejecting claims if they see any reason whatsoever to doubt them.

Another explanation is that people in field X more often accept the standard claims from field X than they accept the standard claims from any other field Y. And the further away in disciplinary space is Y, or the further down in the academic status hierarchy is Y, the less likely they are to accept a standard Y claim. So an argument based on claims from both X and Y is less likely to be accepted by X folks than a claim based only on claims from X.

A third explanation is that people in field X tend to learn and believe a newspaper version of field Y that differs from the expert version of field Y. So X folks tend to reject claims that are based on expert versions of Y claims, since they instead believe the differing newspaper versions. Thus a claim based on expert versions of both X and Y claims will be rejected by both X and Y folks.

These explanations all have a place. But a fourth explanation just occurred to me. Imagine that smart people who are interested in many topics tend to be contrarian. If they hear a standard claim of any sort, perhaps 1/8 to 1/3 of the time they will think of a reason why that claim might not be true, and decide to disagree with this standard claim.

So far, this contrarianism is a barrier to getting people to accept any claims based on more than a handful of other claims. If you present an argument based on five claims, and your audience tends to randomly reject more than one fifth of claims, then most of your audience will reject your claim. But let’s add one more element: correlations within disciplines.
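That arithmetic is easy to check: if each of n supporting claims is independently rejected with probability p, the whole argument is accepted with probability (1 − p)^n. A quick sketch (the function name and the numbers are mine, for illustration):

```python
def acceptance_rate(n_claims, p_reject):
    """Chance an audience member accepts an argument, assuming each of its
    n_claims supporting claims is independently rejected with probability
    p_reject, and any one rejection sinks the whole argument."""
    return (1.0 - p_reject) ** n_claims

# Five independent claims, each rejected a fifth of the time:
# only about a third of the audience accepts the argument.
five_independent = acceptance_rate(5, 1 / 5)  # ~0.328
# If a discipline bundles four of those claims into one accept-or-reject
# package, effectively only two units face rejection.
with_package = acceptance_rate(2, 1 / 5)      # ~0.64
```

So correlated, package-wise acceptance within a discipline nearly doubles the audience for a five-claim argument, which is the effect the next paragraphs describe.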

Assume that the process of educating someone to become a member of discipline X tends to induce a correlation in contrarian tendencies. Instead of independently accepting or rejecting the claims that they hear, they see claims in their discipline X as coming in packages to be accepted or rejected together. Some of them reject those packages and leave X for other places. But the ones who haven’t rejected them accept them as packages, and so are open to arguments that depend on many parts of those packages.

If people who learn area X accept X claims as packages, but evaluate Y claims individually, then they will be less willing to accept claims based on many Y claims. To a lesser extent, they also reject claims based on some Y claims and some X claims.

Note that none of these explanations suggest that these claims are actually false more often; they are just rejected more.

Bowing To Elites

Imagine that you are a politically savvy forager in a band of size thirty, or a politically savvy farmer near a village of size thousand. You have some big decisions to make, including who to put in various roles, such as son-in-law, co-hunter, employer, renter, cobbler, or healer. Many people may see your choices. How should you decide?

Well, first you meet potential candidates in person and see how much you intuitively respect them, get along with them, and can agree on relative status. It isn’t enough for you to have seen their handiwork; you want to make allies out of these associates, and that won’t work without respect, chemistry, and peace. Second, you see what your closest allies think of candidates. You want to be allies together, so it is best if they also respect and get along with your new allies.

Third, if there is a strong leader in your world, you want to know what that leader thinks. Even if this leader says explicitly that you can do anything you like, that they don’t care, then if you get any hint whatsoever that they do care, you’ll look closely to infer their preferences. And you’ll avoid doing anything they’d dislike too much, unless your alliance is ready to mount an overt challenge.

Fourth, even if there is no strong leader, there may be a dominant coalition encompassing your band or town. This is a group of people who tend to support each other, get deference from others, and win in conflicts. We call these people “elites.” If your world has elites, you’ll want to treat their shared opinions like those of a strong leader. If elites would gossip disapproval of a choice, maybe you don’t want it.

What if someone sets up objective metrics to rate people in suitability for the roles you are choosing? Say an archery contest for picking hunters, or a cobbler contest to pick cobblers. Or public track records of how often healer patients die, or how long cobbler shoes last. Should you let it be known that such metrics weigh heavily in your choices?

You’ll first want to see what your elites or leader think of these metrics. If they are enthusiastic, then great, use them. And if elites strongly oppose, you’d best only use them when elites can’t see. But what if elites say, “Yeah you could use those metrics, but watch out because they can be misleading and make perverse incentives, and don’t forget that we elites have set up this whole other helpful process for rating people in such roles.”

Well in this case you should worry that elites are jealous of this alternative metric displacing their advice. They like the power and rents that come from advising on who to pick for what. So elites may undermine this metric, and punish those who use it.

When elites advise people on who to pick for what, they will favor candidates who seem loyal to elites, and punish those who seem disloyal, or who aren’t sufficiently deferential. But since most candidates are respectful enough, elites often pick those they think will actually do well in the role. All else equal, that will make them look good, and help their society. While their first priority is loyalty, looking good is often a close second.

Since humans evolved to be unconscious political savants, this is my basic model to explain the many puzzles I listed in my last post. When choosing lawyers, doctors, real estate agents, pundits, teachers, and more, elites put many obstacles in the way of objective metrics like track records, contests, or prediction markets. Elites instead suggest picking via personal impressions, personal recommendations, and school and institution prestige. We ordinary people mostly follow this elite advice. We don’t seek objective metrics, and instead use elite endorsements, such as the prestige of where someone went to school or now works. In general we favor those who elites say have the potential to do X, over those who actually did X.

This all pushes me to more favor two hypotheses:

  1. We choose people for roles mostly via evolved mental modules designed mainly to do well at coalition politics. The resulting system does often pick people roughly well for their roles, but more as a side effect than a direct effect.
  2. In our society, academia reigns as a high elite, especially on advice for who to put in what roles. When ordinary people see another institution framed as competing directly with academia, that other institution loses. Pretty much all prestigious institutions in our society are seen as allied with academia, not as competing with it. Even religions, often disapproved by academics, rely on academic seminary degrees, and strongly push kids to gain academic prestige.

We like to see ourselves as egalitarian, resisting any overt dominance by our supposed betters. But in fact, unconsciously, we have elites and we bow to them. We give lip service to rebelling against them, and they pretend to be beaten back. But in fact we constantly watch out for any actions of ours that might seem to threaten elites, and we avoid them like the plague. Which explains our instinctive aversion to objective metrics in people choice, when such metrics compete with elite advice.

Added 8am: I’m talking here about how we intuitively react to the possibility of elite disapproval; I’m not talking about how elites actually react. Also, our intuitive reluctance to embrace track records isn’t strong enough to prevent us from telling specific stories about our specific achievements. Stories are way too big in our lives for that. We already have norms against bragging, and yet we still manage to make ourselves look good in stories.


Dissing Track Records

Years ago I was surprised to learn that patients usually can’t pick docs based on track records of previous patient outcomes. Because, people say, that would invade privacy and create bad incentives for docs in picking patients. They suggest instead relying on personal impressions, wait times, “bedside” manner, and the prestige of a doc’s med school or hospital. (Yeah, those couldn’t possibly make bad incentives.) Few ever study if such cues correlate with patient outcomes, and we actively prevent the collection of patient outcome track records.

For lawyers, most trials are in the public record, so privacy shouldn’t be an obstacle to getting track records. So people pick lawyers based on track records, right? Actually no. People who ask are repeatedly told: no, practically you can’t get lawyer track records, so just pick lawyers based on personal impressions or the prestige of their law firm or school. (Few study if those correlate with client outcomes.)

A new firm Premonition has been trying to change that:

Despite being public record, court data is surprisingly inaccessible in bulk, nor is there a unified system to access it, outside of the Federal Courts. Clerks of courts refused Premonition requests for case data. Resolved to go about it the hard way, Unwin … wrote a web crawler to mine courthouse web sites for the data, read it, then analyze it in a database. …

Many publications run “Top Lawyer” lists, people who are recognized by their peers as being “the best”. Premonition analyzed the win rates of these attorneys, it turned out most were average. The only way that they stood out was a disproportionate number of appealed and re-opened cases, i.e. they were good at dragging out litigation. They discovered that even the law firms themselves were poor at picking litigators. In a study of the United Kingdom Court of Appeals, it found a slight negative correlation of -0.1 between win rates and re-hiring rates, i.e. a barrister 20% better than their peers was actually 2% less likely to be re-hired! … Premonition was formed in March 2014 and expected to find a fertile market for their services amongst the big law firms. They found little appetite and much opposition. …

The system found an attorney with 22 straight wins before the judge – the next person down was 7. A bit of checking revealed the lawyer was actually a criminal defense specialist who operated out of a strip mall. … The firm claims such outliers are far from rare. Their web site … shows an example of an attorney with 32 straight wins before a judge in Orange County, Florida. (more)
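The win-rate aggregation Premonition describes is simple arithmetic over case records; the hard part, as the quote notes, is getting the data at all. A minimal sketch, with hypothetical attorney names and case outcomes:

```python
from collections import defaultdict

# Hypothetical case records mined from courthouse sites: (attorney, won) pairs.
cases = [
    ("A. Smith", True), ("A. Smith", True), ("A. Smith", False),
    ("B. Jones", True), ("B. Jones", False), ("B. Jones", False),
]

wins = defaultdict(int)
totals = defaultdict(int)
for attorney, won in cases:
    totals[attorney] += 1
    wins[attorney] += won  # True counts as 1

win_rate = {a: wins[a] / totals[a] for a in totals}
# win_rate["A. Smith"] == 2/3, win_rate["B. Jones"] == 1/3

# The quoted UK figure reads the -0.1 win-rate/re-hire correlation as a
# slope: a barrister 20% better than peers is about 0.20 * 0.1 = 2%
# less likely to be re-hired.
effect = 0.20 * 0.1  # 0.02, i.e. 2%
```

This treats the correlation as a regression slope on standardized measures, which is roughly what the quoted 20%-to-2% translation assumes.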

As a society we supposedly coordinate in many ways to make medicine and law more effective, such as via funding med research, licensing professionals, and publishing legal precedents. Yet we don’t bother to coordinate to create track records for docs or lawyers, and in fact our public representatives tend to actively block such things. And strikingly: customers don’t much care. A politician who proposed to dump professional licensing would face outrage, and lose. A politician who proposed to post public track records would instead lose by being too boring.

On reflection, these examples are part of a larger pattern. For example, I’ve mentioned before that a media firm had a project to collect track records of media pundits, but then abandoned the project once it realized that this would reduce reader demand for pundits. Readers are instead told to pick pundits based on their wit, fame, and publication prestige. If readers really wanted pundit track records, some publication would offer them, but readers don’t much care.

Attempts to publish track records of school teachers based on student outcomes have mostly produced opposition. Parents are instead encouraged to rely on personal impressions and the prestige of where the person teaches or went to school. No one even considers doing this for college teachers; we at most survey student satisfaction just after a class ends (and don’t even do that right).

Regarding student evaluations, we coordinate greatly to make standard widely accessible tests for deciding who to admit to schools. But we have almost no such measures of students when they leave school for work. Instead of showing employers a standard measure of what students have learned, we tell employers to rely on personal impressions and the prestige of the school from which the student came. Some have suggested making standard what-I-learned tests, but few are interested, including employers.

For researchers like myself, publications and job position are measures of endorsements by prestigious authorities. Citations are a better measure of the long term impact of research on intellectual progress, but citations get much less attention in evaluations of researchers. Academics don’t put their citation count on their vita (= resume), and when a reporter decides which researcher to call, or a department decides who to hire, they don’t look much at citations. (Yes, I look better by citations than by publications or jobs, and my prestige is based more on the latter.)

Related is the phenomenon of people being more interested in others said to have the potential to achieve X, than in people who have actually achieved X. Related also is the phenomenon of firms being reluctant to use formulaic measures of employee performance that aren’t mediated mostly by subjective boss evaluations.

It seems to me that there are striking common patterns here, and I have in mind a common explanation for them. But I’ll wait to explain that in my next post. Till then, how do you explain these patterns? And what other data do we have on how we treat track records elsewhere?

Added 22Mar: Real estate sales are also technically in the public record, and yet it is hard for customers to collect comparable sales track records for real estate agents, and few seem to care enough to ask for them.
