
School Vouchers As Pandemic Response

Politico asked me and 17 others:

If you were in charge of your school district or university, how would you design the fall semester?

My answer:

Let 1,000 vouchers bloom. Schools face very difficult choices this fall, between higher risks of infection and worse learning outcomes. We should admit we don’t know how to make these choices well collectively, and empower parents to choose instead. Take the per-student school budget and offer a big fraction of it to parents as a voucher, to pay for home schooling they run themselves, for a neighbor to set up a one-house schoolhouse, for a larger private school, or to use at a qualifying local public school. Each option would set its own learning policies and also policies on distancing and testing. Let parents weigh family infection risks against learning quality risks, using what they know about available options, and their children’s risks, learning styles and learning priorities.

Yes, schools may suffer a large initial revenue shortfall this way; maybe they could rent out some rooms to new private school ventures. Yes, some children will end up with regrettable schooling outcomes, though that seems inevitable no matter what we do. Yes, there should be some minimal standards for teaching quality, but we should be forgiving at first; after all, public schools don’t know how to ensure quality here either. And maybe let any allowed option start a month or two late, if they also end later next summer; after all, we aren’t giving them much time to get organized.


Toward A University Department of Generalists

The hard problem then is how to get specialists to credit you for advancing their field when they don’t see you as a high status one of them.

Many of my most beloved colleagues are, like me, intellectual polymaths. That is, we have published in many different areas, and usefully integrated results from diverse areas. Academia tends to neglect integration and generality, which hurts not only intellectual progress, but also me and my colleagues. Which makes me especially interested in fixing this problem.

The key problem is that academics and their research are mostly evaluated by those who work on very similar topics and methods. To the extent that these are evaluated by folks at a larger distance, it is by those who control one of the limited number of standard “disciplines” (math, physics, literature, econ, etc.).

Thus we have a poor system for evaluating work and people that sit between disciplines, or that cover many disciplines. This makes it harder to evaluate work that combines areas A and B, and maybe also C and D. You might be able to get an A person to evaluate the A parts, and then a B person for the B parts, but that is more work, and the person who knows how to pick a good A evaluator may not know how to pick a good B evaluator. Academics tend to think that interdisciplinary groups do worse work, held to lower standards, and this is a big part of why.

Furthermore, even when specialists can evaluate such things well enough, they have an incentive to say “Maybe that should be supported, but not with our resources.” That is, for people and work that combines A and B, the A folks say it should be supported by the B budget, and vice versa. Often to be accepted by people in A, you must do as much good work in A as someone who only ever works in A, regardless of how much good work you also do in B, C, etc.

Yet generality still gains substantial prestige among intellectuals, which gives me hope. For example, there are usually fights to write more general summaries, such as review articles and textbooks, fights usually won by the highest in status. And Nobel prize winners, upon winning, often famously wax philosophic and general, pontificating (usually badly) on a much wider range of topics than they did previously.

Academic disciplines and departments usually need to do two things: (1) evaluate people to say who can join and stay in them, and (2) train new candidates in a way that makes it likely that many will later be evaluated positively in part (1). I’m not sure there is a way to do part (2) well here, but I think I at least know of a way to do part (1).

I propose that one university (and eventually many) create a Department of Generalists. (Maybe there’s a better name for it.) To apply to join this department, you must first get tenure in some other department. You submit your publication record, and from that they can calculate a measure of the range of your publications. Weighted by quality of course. Folks with very high range are assumed to be shoo-ins, folks with low ranges are routinely rejected, and existing department members have discretion on borderline cases.

How could we calculate publication range? I’ve posted before on using citation data to construct maps of academia. From such maps it seems straightforward to create robust metrics describing the volume in that space encompassed by a person’s research. And something like citations could be used to weight publications in this metric. No doubt there is room for disagreement on exact metrics, and I’m not pushing to get too mechanical here. My point is that it is feasible to evaluate generality, as we know how to mechanically get a decent first cut measure of a researcher’s range.
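For concreteness, here is a minimal sketch of one such metric, assuming each publication has already been embedded as a point in some map-of-academia space; the function name and the particular dispersion measure are my own illustrative choices, not a tested proposal:

```python
import numpy as np
from scipy.spatial import ConvexHull

def publication_range(points, citations):
    """Citation-weighted dispersion of one researcher's publications,
    given coordinates in some precomputed map-of-academia embedding."""
    points = np.asarray(points, dtype=float)
    w = np.asarray(citations, dtype=float)
    w = w / w.sum()                         # normalize citation weights
    centroid = w @ points                   # citation-weighted centroid
    sq_dists = ((points - centroid) ** 2).sum(axis=1)
    dispersion = float(w @ sq_dists)        # weighted mean squared spread
    # A cruder alternative: volume of the convex hull the papers span
    # (needs more points than dimensions, and non-degenerate positions).
    volume = ConvexHull(points).volume if len(points) > points.shape[1] else 0.0
    return dispersion, volume

# A hypothetical researcher: four papers in a 2D citation map.
pts = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.5], [0.2, 0.9]]
cites = [120, 15, 40, 3]
print(publication_range(pts, cites))
```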

So what do people in a Department of Generalists do exactly? Well of course they continue with their research, and can continue to serve the departments from which they came. But they are encouraged to do more general research than do folks in other departments. They can now more easily talk with other generalists, work together on more general projects, and invite outside generalist speakers.

Maybe they experiment with training or mentoring other professors at the university to be generalists, people who hope to later apply to join this generalist department. They might be preferred candidates to write those prestigious general summaries, such as review articles and textbooks, and to teach generalist courses, like big introductory courses. And especially to review more generalist work by others.

It would of course be hard work to get such a department going. And you’d need to start it at a university where there are already many generalists who could get along. But I have high hopes, again from the fact that academics so often fight to appear general, as in fighting to write summaries and to pontificate on more general issues. Once there was a widespread perception that people in the Department of Generalists were in fact better at being generalists, as well as meeting the usual criteria of at least one regular department, they would naturally be seen as an elite. A group that others aspire to join, patrons aspire to fund, reporters aspire to interview, and students aspire to learn under.

And then academia would less neglect work on integration, synthesis, and generality, and work between existing disciplines. Oh academia would still neglect those things, don’t get me wrong, just less. And that seems a goal worth pursuing.


Our Prestige Obsession

Long ago our distant ancestors lived through both good times and bad. In bad times, they did their best to survive, while in good times they asked themselves, “What can I invest in now to help me in coming bad times?” The obvious answer was: good relations and reputations. So they had kids, worked to raise their personal status, and worked to collect and maintain good allies.

This has long been my favored explanation for why we now invest so much in medicine and education, and why those investments have risen so much over the last century. We subconsciously treat medicine as a way to show that we care about others, and to let others show they care about us. As we get richer, we devote a larger fraction of our resources to this end, and to other ways of showing off.

I’d never thought about it until yesterday, but this theory also predicts that, as we get rich, we put an increasing priority on associating with prestigious doctors and teachers. In good times, we focus more on gaining prestige via closer associations with more prestigious people. So as we get rich, we not only spend more on medicine, we increasingly want that spending to connect us to especially prestigious medical professionals.

This increasing-focus-on-prestige effect can also help us to understand some larger economic patterns. Over the last half century, rising wage inequality has been driven to a large extent by a limited number of unusual services, such as medicine, education, law, firm management, management consulting, and investment management. And these services tend to share a common pattern.

As a fraction of the economy, spending on these services has increased greatly over the last half century or so. The public face of each service tends to be key high status individuals, e.g., doctors, teachers, lawyers, managers, who are seen as driving key service choices for customers. Customers often interact directly with these faces, and develop personal relations with them. There are an increasing number of these key face individuals, their pay is high, and it has been rising faster than has average pay, contributing to rising wage inequality.

For each of these services, we see customers knowing and caring more about the prestige of key service faces, relative to their service track records. Customers seem surprisingly uninterested in big ways in which these services are inefficient and could be greatly improved, such as via tech. And these services tend to be more highly regulated.

For example, since 1960, the US has roughly doubled its number of doctors and nurses, and their pay has roughly tripled, a far larger increase than seen in median pay. As a result, the fraction of total income spent on medicine has risen greatly. Randomized trials comparing paramedics and nurse practitioners to general practice doctors find that they all produce similar results, even though doctors cost far more. While student health centers often save by having one doctor supervise many nurses who do most of the care, most people dislike this and insist on direct doctor care.

We see very little correlation between having more medicine and more health, suggesting that there is much excess care and inefficiency. Patients prefer expensive complex treatments, and are suspicious of simple cheap treatments. Patients tend to be more aware of and interested in their doctor’s prestigious schools and jobs than in their treatment track record. While medicine is highly regulated overall, the much less regulated world of animal medicine has seen spending rise at a similar rate.

In education, since 1960 we’ve seen big rises in the number of students, the number of teachers and other workers per student, and in the wages of teachers relative to workers elsewhere. Teachers make relatively high wages. While most schools are government run, spending at private schools has risen at a similar rate to spending at public schools. We see a strong push for more highly educated teachers, even though teachers with less schooling seem adequate for learning. Students don’t actually remember much of what they are taught, and most of what they do learn isn’t actually useful. Students seem to know and care more about the prestige of their teachers than about their track records at teaching. College students prefer worse teachers who have done more prestigious research.

In law, since 1960 we’ve similarly seen big increases in the number of court cases, the number of lawyers employed, and in lawyer incomes. While two centuries ago most people could go to court without a lawyer, law is now far more complex. Yet it is far from clear whether we are better off with our more complex and expensive legal system. Most customers know far more about the school and job prestige of the lawyers they consider than they do about such lawyers’ court track records.

Management consultants have greatly increased in number and wages. While it is often possible to predict at much lower cost what they would recommend, such consultants are often hired because their prestige can cow internal opponents into not resisting proposed changes. Management consultants tend to hire new graduates from top schools to impress clients with their prestige.

People who manage investment funds have greatly increased in number and pay. Once their management fees are taken into account, they tend to give lower returns than simple index funds. Investors seem willing to accept such lower expected returns in trade for a chance to brag about their association should returns happen to be high. They enjoy associating with prestigious fund managers, and tend to insist that such managers take their phone calls, which credibly shows a closer than arm’s-length relation.
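To illustrate the size of that fee drag, here is a quick compounding sketch; the numbers are illustrative, and it assumes the active fund earns the same gross return as the index:

```python
def terminal_wealth(gross_return, annual_fee, years, start=1.0):
    """Compound a constant gross return net of an annual percentage fee."""
    return start * ((1 + gross_return) * (1 - annual_fee)) ** years

index = terminal_wealth(0.07, 0.0005, 30)   # cheap index fund
active = terminal_wealth(0.07, 0.01, 30)    # active fund, same gross return
print(f"fee drag after 30 years: {1 - active / index:.0%}")  # roughly 25%
```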

Managers in general have also increased in number and in pay, relative to median pay. And a key function of managers may be to make firms seem more prestigious, not only to customers and investors, but also to employees. Employees are generally wary of submitting to the dominance of bosses, as such submission violates an ancient forager norm. But as admiring and following prestigious people is okay, prestigious bosses can induce more cooperative employees.

Taken together, these cases suggest that increasing wage inequality may be caused in part by an increased demand for associating with prestigious service faces. As we get rich, we become willing to spend a larger fraction of our income on showing off via medicine and schooling, and we put higher priority on connecting to more prestigious doctors, teachers, lawyers, managers, etc. This increasing demand is what pushes their wages high.

This demand for more prestigious service faces seems to not be driven by a higher productivity that more prestigious workers may be able to provide. Customers seem to pay far less attention to productivity than to prestige; they don’t ask for track records, and they seem to tolerate a great deal of inefficiency. This all suggests that it is prestige more directly that customers seek.

Note that my story is somewhat in conflict with the usual “skill-biased technical change” story, which says that tech changed to make higher-skilled workers more productive relative to lower-skilled workers.

Added 10June: Note that the so-called Baumol “cost disease”, wherein doing some tasks just takes a certain number of hours unaided by tech gains, can only explain spending increases proportional to overall wage increases, and that only if demand is very inelastic. It can’t explain how some wages rise faster than the average, nor big increases in quantity demanded even as prices increase.
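A tiny numeric sketch of that point (the wage factor and elasticities are made up):

```python
def relative_spending(wage_factor, demand_elasticity):
    """Baumol sketch: a fixed-hours service's price scales with economy-wide
    wages, and quantity demanded responds as price ** (-elasticity)."""
    price = wage_factor                     # wages double -> price doubles
    quantity = price ** (-demand_elasticity)
    return price * quantity                 # spending relative to baseline

print(relative_spending(2.0, 0.2))  # inelastic demand: spending up ~74%
print(relative_spending(2.0, 1.0))  # unit-elastic demand: spending flat
# In neither case do the service's wages outgrow average wages,
# and quantity demanded never rises as the price rises.
```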

Added 12Jun: This post was inspired by reading & discussing Why Are the Prices So Damn High?


Can We Trust Deliberation Priests?

In Science, academic “deliberation” experts offer a fix for our political ills:

Citizens to express their views … overabundance [of] … has been accompanied by marked decline in civility and argumentative complexity. Uncivil behavior by elites and pathological mass communication reinforce each other. How do we break this vicious cycle? …

All survey research … obtains evidence only about the capacity of the individual in isolation to reason about politics. … [But] even if people are bad solitary reasoners, they can be good group problem-solvers … Deliberative experimentation has generated empirical research that refutes many of the more pessimistic claims about the citizenry’s ability to make sound judgments.

Great huh? But there’s a catch:

Especially when deliberative processes are well-arranged: when they include the provision of balanced information, expert testimony, and oversight by a facilitator … These effects are not necessarily easy to achieve; good deliberation takes time and effort. Many positive effects are demonstrated most easily in face-to-face assemblies and gatherings, which can be expensive and logistically challenging at scale. Careful institutional design involv[es] participant diversity, facilitation, and civility norms …

A major improvement … might involve a randomly selected citizens’ panel deliberating a referendum question and then publicizing its assessments for and against a measure … problem is not social media per se but how it is implemented and organized. Algorithms for ranking sources that recognize that social media is a political sphere and not merely a social one could help. …

It is important to remain vigilant against incentives for governments to use them as symbolic cover for business as usual, or for well-financed lobby groups to subvert their operation and sideline their recommendations. These problems are recognized and in many cases overcome by deliberative practitioners and practice. … The prospects for benign deployment are good to the degree that deliberative scholars and practitioners have established relationships with political leaders and publics—as opposed to being turned to in desperation in a crisis.

So ordinary people are capable of fair and thoughtful deliberation, but only via expensive processes carefully managed in detail by, and designed well in advance by, proper deliberation experts with “established relationships with political leaders and publics.” That is, these experts must be free to pick the “balance” of info, experts, and participants included, and even who speaks when and how, and these experts must be treated with proper respect and deference by the public and by political authorities.

No, they aren’t offering a simple well-tested mechanism (e.g., an auction) that we can apply elsewhere with great confidence that the deployed mechanism is the same as the one that they tested. Because what they tested instead was a mechanism with a lot of “knobs” that need context-specific turning; they tested the result of having particular experts use a lot of discretion to make particular political and info choices in particular contexts. They say that went well, and their academic peer reviewers (mostly the same people) agreed. So we shouldn’t worry that such experts would become corrupted if we gave them a lot more power.

This sure sounds like a priesthood to me. If we greatly empower and trust a deliberation priesthood, presumably overseen by these 20 high priest authors and their associates, they promise to create events wherein ordinary people talk much more reasonably, outputting policy recommendations that we could then all defer to with more confidence. At least if we trust them.

In contrast, I’ve been suggesting that we empower and trust prediction markets on key policy outcomes. We’ve tested such mechanisms a lot, including in contexts with strong incentives to corrupt them, and these mechanisms have far fewer knobs that must be set by experts with discretion. Which seems more trustworthy to me.


Replication Markets Team Seeks Journal Partners for Replication Trial

An open letter, from myself and a few colleagues:

Recent attempts to systematically replicate samples of published experiments in the social and behavioral sciences have revealed disappointingly low rates of replication. Many parties are discussing a wide range of options to address this problem.

Surveys and prediction markets have been shown to predict, at rates substantially better than random, which experiments will replicate. This suggests a simple strategy by which academic journals could increase the rate at which their published articles replicate. For each relevant submitted article, create a prediction market estimating its chance of replication, and use that estimate as one factor in deciding whether to publish that article.

We the Replication Markets Team seek academic journals to join us in a test of this strategy. We have been selected for an upcoming DARPA program to create prediction markets for several thousand scientific replication experiments, many of which could be based on articles submitted to your journal. Each market would predict the chance of an experiment replicating. Of the already-published experiments in the pool, approximately one in ten will be sampled randomly for replication. (Whether submitted papers could be included in the replication pool depends on other teams in the program.) Our past markets have averaged 70% accuracy; the work is listed at the Science Prediction Market Project page, and has been published in Science, PNAS, and Royal Society Open Science.

While details are open to negotiation, our initial concept is that your journal would tell potential authors that you are favorably inclined toward experiment article submissions that are posted at our public archive of submitted articles. By posting their article, authors declare that they have submitted their article to some participating journal, though they need not say which one. You tell us when you get a qualifying submission, we quickly tell you the estimated chance of replication, and later you tell us of your final publication decision.

At this point in time we seek only an expression of substantial interest that we can take to DARPA and other teams. Details that may later be negotiated include what exactly counts as a replication, whether archived papers reveal author names, how fast we respond with our replication estimates, what fraction of your articles we actually attempt to replicate, and whether you privately give us any other quality indicators obtained in your reviews to assist in our statistical analysis.

Please RSVP to: Angela Cochran, PM acochran@replicationmarkets.com 571 225 1450

Sincerely, the Replication Markets Team

Thomas Pfeiffer (Massey University)
Yiling Chen, Yang Liu, and Haifeng Xu (Harvard University)
Anna Dreber Almenberg & Magnus Johannesson (Stockholm School of Economics)
Robin Hanson & Kathryn Laskey (George Mason University)

Added 2p: We plan to forecast ~8,000 replications over 3 years, ~2,000 within the first 15 months.  Of these, ~5-10% will be selected for an actual replication attempt.
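For illustration only, here is one toy way a journal might fold such a market estimate into its decision, as the letter’s “one factor in deciding whether to publish”; the blending rule, weight, and threshold are my own made-up choices, not part of the proposal:

```python
def publish(review_score, p_replicate, weight=0.5, threshold=0.6):
    """Toy editorial rule: blend a reviewer quality score (0-1) with a
    market-estimated replication probability; all numbers hypothetical."""
    return (1 - weight) * review_score + weight * p_replicate >= threshold

print(publish(0.8, 0.35))  # strong reviews, weak replication odds -> False
print(publish(0.8, 0.55))  # strong reviews, decent replication odds -> True
```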


It’s All Data

Bayesian decision theory is often a useful approximation as a theory of decisions, evidence, and learning. And according to it, everything you experience or see or get as an input can be used as data. Some of it may be more informative or useful, but it’s all data; just update via Bayes rule and off you go.
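As a toy illustration (the function and numbers are mine, purely for illustration), weak or noisy data produces the same kind of Bayesian update as strong data, just a smaller one:

```python
def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Posterior P(H | data) via Bayes rule; any observation counts as data."""
    joint_h = prior * p_data_given_h
    joint_not_h = (1 - prior) * p_data_given_not_h
    return joint_h / (joint_h + joint_not_h)

print(bayes_update(0.5, 0.6, 0.4))    # informative data: 0.50 -> 0.60
print(bayes_update(0.5, 0.51, 0.49))  # weak, noisy data: 0.50 -> 0.51
```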

So what then is “scientific” data? Well “science” treated as a social phenomenon is broken into many different disciplines and sub-fields, and each field tends to have its own standards for what kinds of data they will publish. These standards vary across fields, and have varied across time, and I can think of no universals that apply to all fields at all times.

For example, at some times in some fields one might be allowed to report on the content of one’s dreams. In other fields that isn’t okay, but it is okay to give statistics summarizing the contents of all the dreams of some set of patients at a hospital. And in other fields at other times they just don’t want to hear anything subjective about dreams.

Most fields’ restrictions probably make a fair bit of sense for them. Journal space is limited, so even if all data can tell you something, they may judge that certain kinds of data rarely say enough, compared to other available kinds. Which is fine. But the not-published kinds of data are not “unscientific”, though they may temporarily be “un-X” for field X. And you should remember that as most academic fields put a higher priority on being impressive than on being informative, they may thus neglect unimpressive data sources.

For example, chemists may insist that chemistry experiments specify exactly which chemicals are being tested. But geology papers can give data on tests made on samples obtained from particular locations, without knowing the exact chemical composition of those samples. And they don’t need these samples to be uniformly sampled from the volume of the Earth or the universe; it is often enough to specify where samples came from.

Consider agricultural science field experiments, where they grow different types of crops in different kinds of soil and climate. They usually don’t insist on knowing the exact chemical composition of the soil, or the exact DNA of the crops. But they can at least tell you where they got the crops, where exactly is the test field, how they were watered, weeded, and fertilized, and some simple stats on the soils. It would be silly to insist that such experiments use a “representative” sample of crops, fields, or growing conditions. Should it be uniformly sampled from actual farming conditions used today, from all possible land on Earth’s surface, or from random mass or volume in the universe across its history?

Lab experiments in the human and social sciences today typically use convenience samples of subjects. They post invitations to their local school or community and then accept most everyone who signs up or shows up. They collect a few stats on subjects, but do not even attempt to create “representative” samples of subjects. Nationally, globally-now, or over-all-history representative samples of lab subjects would just be vastly more expensive. Medical experiments are done similarly. They may shoot for balance along a few particular measured dimensions, but on other parameters they take whoever they can get.

I mention all this because over the last few months I’ve had some fun doing Twitter polls. And I’ve consistently had critics tell me I shouldn’t do this, because Twitter polls are “meaningless” or “worthless” or “unscientific”. They tell me I should only collect the sort of data I could publish in a social science journal today, and if I show people any other kind of data I’m an intellectual fraud. As if some kinds of data were “unscientific”.

Today I have ~24,700 followers, and I can typically get roughly a thousand people to answer each poll question. And as my book Elephant in the Brain suggests, I have many basic questions about human behavior that aren’t very specific to particular groups of people; we have many things to learn that apply to most people everywhere at all times. Whenever a question occurs to me, I can take a minute to post it, and within a few hours get some thought-provoking answers.

Yes, the subset of my Twitter followers who actually respond to my polls are not a representative sample of my nation, world, profession, university, or even of Twitter users. But why exactly is it so important to have a representative sample from such a group?

Well there is a big advantage to having many polls drawn from the same group, no matter what that group is. Then when comparing such polls you have to wonder less whether sample differences are driving results. And the more questions I ask of my Twitter followers, the more I can usefully compare those different polls. For example, if I ask them at different times, I can see how their attitudes change over time. Or if I make slight changes in wording, I can see what difference wording changes make.
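For example, to check whether a shift between two runs of the same poll exceeds sampling noise, a standard two-proportion test suffices (a sketch, with made-up counts):

```python
from math import sqrt

def two_poll_z(yes1, n1, yes2, n2):
    """Two-proportion z statistic: is the shift between two polls of the
    same follower pool bigger than sampling noise?"""
    p1, p2 = yes1 / n1, yes2 / n2
    p = (yes1 + yes2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))    # pooled standard error
    return (p1 - p2) / se

print(two_poll_z(430, 1000, 380, 1000))  # ~2.3; |z| > 2 suggests a real shift
```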

Of course if I were collecting data to help a political candidate, I’d want data representative of potential voters in that candidate’s district. But if I’m just trying to understand the basics of human behavior, it’s not clear why I need any particular distribution over people polled. Yes, answers to each thing I ask might vary greatly over people, and my sample might have few of the groups who act the most differently. But this can happen for any distribution over the people sampled.

Even though the people who do lab experiments on humans usually use convenience samples that are not representative of a larger world, what they do is still science. We just have to keep in mind that differing results might be explained by different sources of subjects. Similarly, the data I get from my Twitter polls can still be useful to a careful intellectual, even if it isn’t representative of some larger world.

If one suspects that some specific Twitter poll results of mine differ from other results due to my differing sample, or due to my differing wordings, the obvious checks are to ask the same questions of different samples, or with different wordings. Such as having other people on Twitter post a similar poll to their different pool of followers. Alas, people seem to be willing to spend lots of time complaining about my polls, but are almost never willing to take a few seconds to help check on them in this way.


Response To Hossenfelder

In my last post I said:

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. … To fix these problems, Hossenfelder proposes that theoretical physicists learn about and prevent biases, promote criticism, have clearer rules, prefer longer job tenures, allow more specialization and changes of fields, and pay peer reviewers. Alas, as noted in a Science review, Hossenfelder’s proposed solutions, even if good ideas, don’t seem remotely up to the task of fixing the problems she identifies.

In the comments she took issue:

I am quite disappointed that you, too, repeat the clearly false assertion that I don’t have solutions to offer. … I originally meant to write a book about what’s going wrong with academia in general, but both my agent and my editor strongly advised me to stick with physics and avoid the sociology. That’s why I kept my elaborations about academia to an absolute minimum. You are right in complaining that it’s sketchy, but that was as much as I could reasonably fit in.

But I have on my blog discussed what I think should be done, eg here. Which is a project I have partly realized, see here. And in case that isn’t enough, I have a 15 page proposal here. On the proposal I should add that, due to space limitations, it does not contain an explanation for why I think that’s the right thing to do. But I guess you’ll figure it out yourself, as we spoke about the “prestige optimization” last week.

I admitted my error:

I hadn’t seen any of those 3 links, and your book did list some concrete proposals, so I incorrectly assumed that if you had more proposals then you’d mention them in your book. I’m happy to support your proposed research project. … I don’t see our two proposals as competing, since both could be adopted.

She agreed:

I don’t see them as competing either. Indeed, I think they fit well.

Then she wrote a whole blog post elaborating!


Can Foundational Physics Be Saved?

Thirty-four years ago I left physics with a Master’s degree, to start a nine year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics. I loved physics theory, and given how far physics had advanced over the previous two 34-year periods, I expected to be giving up many chances for glory. But though I didn’t entirely leave (I’ve since published two physics journal articles), I’ve felt like I dodged a bullet overall; physics theory has progressed far less in the last 34 years, mainly because data dried up:

One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.

Yes, when data is truly scarce, theory must suggest where to look, and so we must choose somehow among as-yet-untested theories. The worry is that we may be choosing badly:

During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.

One bad sign is that physicists have consistently, confidently, and falsely told each other and the public that big basic progress was coming soon.


How To Fund Prestige Science

How can we best promote scientific research? (I’ll use “science” broadly in this post.) In the usual formulation of the problem, we have money and status that we could distribute, and they have time and ability that they might apply. They know more than we do, but we aren’t sure who is how good, and they may care more about money and status than about achieving useful research. So we can’t just give things to anyone who claims they would use it to do useful science. What can we do? We actually have many options.


Intellectual Status Isn’t That Different

In our world, we use many standard markers of status. These include personal connections with high status people and institutions, power, wealth, popularity, charisma, intelligence, eloquence, courage, athleticism, beauty, distinctive memorable personal styles, and participation in difficult achievements. We also use these same status markers for intellectuals, though specific fields favor specific variations. For example, in economics we favor complex game theory proofs and statistical analyses of expensive data as types of difficult achievements.

When the respected intellectuals for topic X tell the intellectual history of topic X, they usually talk about a sequence over time of positions, arguments, and insights. Particular people took positions and offered arguments (including about evidence), which taken together often resulted in insight that moved a field forward. Even if such histories do not say so directly, they give the strong impression that the people, positions, and arguments mentioned were selected for inclusion in the story because they were central to causing the field to move forward with insight. And since these mentioned people are usually the high status people in these fields, this gives the impression that the main way to gain status in these fields is to offer insight that produces progress; the implication is that correlations with other status markers are mainly due to other markers indicating who has an inclination and ability to create insight.

Long ago when I studied the history of science, I learned that these standard histories given by insiders are typically quite misleading. When historians carefully study the history of a topic area, and try to explain how opinions changed over time, they tend to credit different people, positions, and arguments. While standard histories tend to correctly describe the long term changes in overall positions, and the insights which contributed to those changes, they are more often wrong about which people and arguments caused such changes. Such histories tend to be especially wrong when they claim that a prominent figure was the first to take a position or make an argument. One can usually find lower status people who said basically the same things before. And high status accomplishments tend to be given more credit than they deserve in causing opinion change.

The obvious explanation for these errors is that we are hypocritical about what counts for status among intellectuals. We pretend that the point of intellectual fields is to produce intellectual progress, and to retain past progress in people who understand it. And as a result, we pretend that we assign status mainly based on such contributions. But in fact we mostly evaluate the status of intellectuals in the same way we evaluate most everyone, not changing our markers nearly as much as we pretend in each intellectual context. And since most of the things that contribute to status don’t strongly influence who actually offers positions and arguments that result in intellectual insight and progress, we can’t reasonably expect the people we tend to pick as high status to typically have been very central to such processes. But there’s enough complexity and ambiguity in intellectual histories to allow us to pretend that these people were very central.

What if we could make the real intellectual histories more visible, so that it became clearer who caused what changes via their positions, arguments, and insight? Well then fields would have the two usual choices for how to respond to hypocrisy exposed: raise their behaviors to meet their ideals, or lower their ideals to meet their behaviors. In the first case, the desire for status would drive much stronger efforts to actually produce insights that drive progress, making much faster rates of progress plausible. In this case it could well be worth spending half of all research budgets on historians to carefully track who contributed how much. The factor of two lost in all that spending on historians might be more than compensated for by intellectuals focused much more strongly on producing real insight, instead of on the usual high-status-giving imitations.

Alas I don’t expect many actual funders of intellectual activity today to be tempted by this alternative, as they also care much more about achieving status, via affiliation with high status intellectuals, than they do about producing intellectual insight and progress.
