Tag Archives: Academia

Fundamentalists Are Not Traditionalists

In my last two years of college I rebelled against the system. I stopped doing homework and instead studied physics by playing with equations (and acing exams). In this I was a “school fundamentalist.” I wanted to cut out what I saw as irrelevant and insincere ritual, so that school could better serve what I saw as its fundamental purpose, which was to help curious people learn. I contrasted myself with “traditionalists” who just unthinkingly continued with previous habits and customs.

One of the big social trends over the last few centuries has been a move toward reforming previous rituals and institutions to become more “sincere,” i.e., to more closely align with stated purposes, especially purposes related to internal feelings. For example, the Protestant revolution tried to reform religious rituals and institutions toward a stated purpose of improving personal relations with God. (Christian and Islamic “fundamentalists” continue in this vein today.) The romantic revolution in marriage was to move marriage toward a stated purpose of promoting loving romantic relations. And various revolutions in government have been justified as moving government toward stated purposes of legitimacy, representation, and accountability.

In all of these cases advocates for reform have complained about insincerity and hypocrisy in prior practices and institutions. Similar sincerity concerns can be raised about birthday presents, or dinner table manners. Kids sometimes ask why, if gifts are to show feelings, people shouldn’t wait to give gifts until they most feel the mood. Or wait for when the receiver would most like the gift. Kids also sometimes ask why they must lie and say “thank you” when that is not how they feel. Here kids are being fundamentalists, while parents are traditionalists who mostly just want the kids to do the usual thing, without too much reflection on exactly why.

We economists are deep into this sincerity trend, in that we often analyze institutions according to stated purposes, and propose institutional reforms that seem to better achieve stated purposes. For example, in law & economics, the class I’m teaching this semester, we analyze which legal rules best achieve the stated purpose of creating incentives to increase economic welfare.

I’ve been made aware of this basic sincerity vs. tradition conflict by the sociology book Ritual and Its Consequences: An Essay on the Limits of Sincerity. While its sociology theory can make for hard reading at times, I was persuaded by its basic claim that modern intellectuals are too quick to favor the sincerity side of this conflict. For example, even if dinner manners and birthday present rituals don’t most directly express the sincerest feelings of those involved, they can create an “as if” appearance of good feelings, and this appearance can make people nicer and feel better about each other. We’d get a lot fewer presents if people only gave them when in the mood.

Similarly, while for some kids it seems enough to just support their curiosity, most kids are probably better off in a school system that forces them to act as if they are curious, even when they are not. Also, my wife, who works in hospice, tells me that people today often reject traditional bereavement rituals which don’t seem to reflect their momentary sincere feelings. But such people often then feel adrift, not knowing what to do, and their bereavement process goes worse.

Of course I’m not saying we should always unthinkingly follow tradition. But I do think our efforts to reform often go badly because we focus on the most noble and flattering functions and situations, and neglect many other important ones.

From Ritual and Its Consequences I also got some useful distinctions. In addition to sincerity vs. tradition, there is also play vs. ritual. This is the distinction among less-practical “as-if” behaviors between those (play) that spin out into higher variance and those (ritual) that spin in to high predictability. Ritual in this sense can help one to feel safe when threatened, while play can bring joy when one doesn’t feel threatened. One can also distinguish between kinds of play and ritual where people’s usual roles are preserved vs. reversed, and distinguish between kinds where people are in control vs. out of control of events.


Conflicting Abstractions

My last post seems an example of an interesting general situation: when abstractions from different fields conflict on certain topics. In the case of my last post, the topic was the relative growth rate feasible for a small project hoping to create superintelligence, and the abstractions that seem to conflict are the ones I use, mostly from economics, and abstractions drawn from computer practice and elsewhere used by Bostrom, Yudkowsky, and many other futurists.

What typically happens when it seems that abstractions from field A suggest X, while abstractions from field B suggest not X? Well first, since X and not X can’t both be true, each field would likely see this as a threat to its good reputation. If forced to accept that the conflict exists, each would likely try to denigrate the other field. If one field is higher status, the other field would expect to lose a reputation fight, and so would be especially eager to reject the claim that a conflict exists.

And in fact, it should usually be possible to reject a claim that a conflict exists. The judgement that a conflict exists would come from specific individuals studying the questions of whether A suggests X and whether B suggests not X. One could just suggest that some of those people were incompetent at analyzing the implications of the abstractions of particular fields. Or that they were talking past each other, misunderstanding what X and not X mean to the other side. So one would need especially impeccable credentials to publicly make these claims and make them stick.

The ideal package of expertise for investigating such an issue would be expertise in both fields A and B. This would position one well to notice that a conflict exists, and to minimize the chance of problems arising from misunderstandings on what X means. Unfortunately, our institutions for crediting expertise don’t do well at encouraging combined expertise. For example, often patrons are interested in the intersection between fields A and B, and sponsor conferences, journal issues, etc. on this intersection. However, seeking maximal prestige, they usually prefer people with the most prestige in each field over people who actually know both fields simultaneously. Anticipating this, people usually choose to stay within each field.

Anticipating this whole scenario, people will usually avoid seeking out or calling attention to such conflicts. To seek out or pursue a conflict, you’d have to be especially confident that your field would back you up in a fight, because your credentials are impeccable and the field thinks it could win a status conflict with the other field. And even then you’d have to waste some time studying a field that your field doesn’t respect. Even if you win the fight you might lose prestige in your field.

This is unfortunate, because such conflicts seem especially useful clues to help us refine our important abstractions. By definition, abstractions draw inferences from reduced descriptions, descriptions which ignore relevant details. Usually that is useful, but sometimes that leads to errors when the dropped details are especially relevant. Intellectual progress would probably be promoted if we could somehow induce more people to pursue apparent conflicts between the abstractions from different fields.


Robot Econ in AER

In the May 2014 American Economic Review, Fernald & Jones mention that having computers and robots replace human labor can dramatically increase growth rates:

Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods. Brynjolfsson and McAfee (2012) discuss this possibility. In standard growth models, it is quite easy to show that this can lead to a rising capital share—which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman 2013)—and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.

For example, drawing on Zeira (1998), assume the production function is
Y = A · X1^α1 · X2^α2 · … · Xn^αn,  where α1 + … + αn = 1 and each task Xi can be produced by either labor or capital.
Suppose that over time, it becomes possible to replace more and more of the labor tasks with capital. In this case, the capital share will rise, and since the growth rate of income per person is 1/(1 − capital share) × growth rate of A, the long-run growth rate will rise as well.


Of course the idea isn’t new; but apparently it is now more respectable.
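The quoted formula is easy to illustrate numerically. Here is a minimal sketch (the function name and example numbers are mine, not from the paper), showing how long-run growth rises, and eventually explodes, as the capital share approaches one:

```python
def income_growth(g_A, capital_share):
    """Long-run growth of income per person: g_A / (1 - capital share)."""
    if capital_share >= 1:
        raise ValueError("growth is unbounded as the capital share reaches 1")
    return g_A / (1 - capital_share)

# With 2% productivity growth, raising the capital share from a
# historical ~1/3 toward 1 multiplies the long-run growth rate:
for share in (1/3, 0.5, 0.9, 0.99):
    print(round(share, 2), income_growth(0.02, share))
```

At a capital share of 1/3 this gives 3% growth; at 0.9 it gives 20%; and the denominator vanishing as the share reaches 1 is the “incomes becoming infinite in finite time” limit in the quote.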


Fixing Academia Via Prediction Markets

When I first got into prediction markets twenty five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners paid, and a new market set up for a new test of the same question. Somewhere along the line private hedge funds would also pay for academic work in order to learn where they should bet.

That was the summary; here are some critiques. First, people willing to bet on theories are not a good source of revenue to pay for research. There aren’t many of them, and they should in general be subsidized, not taxed. You’d have to legally prohibit other markets to bet on these without the tax, and even then you’d get few takers.

Second, Alexander says to subsidize markets the same way they’d be taxed, by adding money to the betting pot. But while this can work fine to cancel the penalty imposed by a tax, it does not offer an additional incentive to learn about the question. Any net subsidy could be taken by anyone who put money in the pot, regardless of their info efforts. As I’ve discussed often before, the right way to subsidize info efforts for a speculative market is to subsidize a market maker to have a low bid-ask spread.
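For concreteness, here is a minimal sketch of one standard subsidized market maker, a logarithmic market scoring rule (a specific mechanism I’m choosing as an illustration; the variable names are mine). The liquidity parameter b caps the patron’s worst-case subsidy at b × log(number of outcomes), and only traders who move prices toward the truth collect that subsidy:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q, b, i):
    """Current price of outcome i, which doubles as a probability estimate."""
    z = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / z

def buy_cost(q, b, i, shares):
    """What a trader pays the market maker for `shares` of outcome i."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# A two-outcome market opens at even odds; buying "yes" raises its price.
q, b = [0.0, 0.0], 100.0
cost = buy_cost(q, b, 0, 50.0)  # price paid rises smoothly from 0.5
```

A larger b makes the market deeper (a lower effective spread), at the price of a larger worst-case subsidy, so a patron can tune the subsidy question by question.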

Third, Alexander’s plan to have bettors vote to agree on a question tester seems quite unworkable to me. It would be expensive, rarely satisfy both sides, and seems easy to game by buying up bets just before the vote. More important, most interesting theories just don’t have very direct ways to test them, and most tests are of whole bundles of theories, not just one theory. Fourth, for most claim tests there is no obvious definition of “everyone in the field,” nor is it obvious that everyone should have an opinion on those tests. Forcing a large group to all express a public opinion seems a huge cost with unclear benefits.

OK, now let me review my proposal, the result of twenty five years of thinking about this. The market maker subsidy is a very general and robust mechanism by which research patrons can pay for accurate info on specified questions, at least when answers to those questions will eventually be known. It allows patrons to vary subsidies by questions, answers, time, and conditions.

Of course this approach does require that such markets be legal, and it doesn’t do well at the main academic function of credentialing some folks as having the impressive academic-style mental features with which others like to associate. So only the customers of academia who mainly want accurate info would want to pay for this. And alas such customers seem rare today.

For research patrons using this market-maker subsidy mechanism, their main issues are about which questions to subsidize how much when. One issue is topic. For example, how much does particle physics matter relative to anthropology? This mostly seems to be a matter of patron taste, though if the issue were what topics should be researched to best promote economic growth, decision markets might be used to set priorities.

The biggest issue, I think, is abstraction vs. concreteness. At one extreme one can ask very specific questions like what will be the result of this very specific experiment or future empirical measurement. At the other extreme, one can ask very abstract questions like “do grapes cure cancer” or “is the universe infinite”.

Very specific questions offer bettors the most protection against corruption in the judging process. Bettors need worry less about how a very specific question will be interpreted. However, subsidies of specific questions also target specific researchers pretty directly for funding. For example, subsidizing bets on the results of a very specific experiment mainly subsidizes the people doing that experiment. Also, since the interest of research patrons in very specific questions mainly results from their interest in more general questions, patrons should prefer to directly target the more general questions of interest to them.

Fortunately, compared to other areas where one might apply prediction markets, academia offers especially high hopes for using abstract questions. This is because academia tends to house society’s most abstract conversations. That is, academia specializes in talking about abstract topics in ways that let answers be consistent and comparable across wide scopes of time, space, and discipline. This offers hope that one could often simply bet on the long term academic consensus on a question.

That is, one can plausibly just directly express a claim in direct and clear abstract language, and then bet on what the consensus will be on that claim in a century or two, if in fact there is any strong consensus on that claim then. Today we have a strong academic consensus on many claims that were hotly debated centuries ago. And we have good reasons to believe that this process of intellectual progress will continue long into the future.

Of course future consensus is hardly guaranteed. There are many past debates that we’d still find too hard to judge today. But for research patrons interested in creating accurate info, the lack of a future consensus would usually be a good sign that info efforts in that area were less valuable than in other areas. So by subsidizing markets that bet on future consensus conditional on such a consensus existing, patrons could more directly target their funding at topics where info will actually be found.
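One simple way to implement such conditional bets is as called-off bets: if no strong consensus forms by the deadline, stakes are simply refunded. A minimal sketch of the payout rule (the interface here is just my illustration, not a standard one):

```python
def called_off_payout(stake, price, claim_true, consensus_formed):
    """Payout of a bet on a claim at `price` per share, where each share
    pays 1 if the eventual consensus says the claim is true. The bet is
    called off, with a full refund, if no strong consensus forms."""
    if not consensus_formed:
        return stake  # bet called off: money back, no info gained or lost
    shares = stake / price
    return shares if claim_true else 0.0

# Betting $10 at a price of 0.25 yields $40 if consensus agrees,
# $0 if it disagrees, and the $10 back if no consensus ever forms.
```

Since the refund branch costs bettors nothing, prices in such markets reflect beliefs about the claim given that a consensus forms, which is exactly the case patrons care about funding.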

Large subsidies for market-makers on abstract questions would indirectly result in large subsidies on related specific questions. This is because some bettors would specialize in maintaining coherence relationships between the prices on abstract and specific questions. And this would create incentives for many specific efforts to collect info relevant to answering the many specific questions related to the fewer big abstract questions.

Yes, we’d probably end up with some politics and corruption on who qualifies to judge later consensus on any given question – good judges should know the field of the question as well as a bit of history to help them understand what the question meant when it was created. But there’d probably be less politics and lobbying than if research patrons chose very specific questions to subsidize. And that would still probably be less politics than with today’s grant-based research funding.

Of course the real problem, the harder problem, is how to add mechanisms like this to academia in order to please the customers who want accuracy, while not detracting from or interfering too much with the other mechanisms that give the other customers of academia what they want. For example, should we subsidize participants with high relevant prestige in the prediction markets, or tax those with low prestige?



When Will Schools Space, Interleave, and Vary Practice?

If school’s purpose were to develop skills, we’d teach differently:

Almost everywhere you look, you find examples of massed practice: colleges that offer concentration in a single subject with the promise of fast learning, continuing education seminars for professionals where training is condensed into a single weekend. Cramming for exams is a form of massed practice. It feels like a productive strategy, and it may get you through the next day’s midterm, but most of the material will be long forgotten by the time you sit down for the final. Spacing out your practice feels less productive for the very reason that some forgetting has set in and you’ve got to work harder to recall the concepts. … [But] the benefits of spacing out practice sessions are long established. …

The learning from interleaved practice feels slower than learning from massed practice. Teachers and students sense the difference. They can see that their grasp of each element is coming more slowly, and the compensating long-term advantage is not apparent to them. As a result, interleaving is unpopular and seldom used. Teachers dislike it because it feels sluggish. Students find it confusing: they’re just starting to get a handle on new material and don’t feel on top of it yet when they are forced to switch. But the research shows unequivocally that mastery and long-term retention are much better if you interleave practice than if you mass it. …

The basic idea is that varied practice—like tossing your beanbags into baskets at mixed distances—improves your ability to transfer learning from one situation and apply it successfully to another. (more)

So, a good test of a theory of school is: how long do you predict it will take teachers to learn this lesson? The article above talks about how many coaches have learned this lesson, plausibly because they really do want to win games, and face strong competitive pressures.

If you think the main function of schools is something other than learning, you might think it could take a very long time before schools adopt these practices. If you think the main function of schools is learning, but that public schools face much weaker pressures to be efficient than private schools, you might predict that private schools will adopt this much faster. If you think public schools are effective at adopting better approaches, you might predict that they adopt these quickly. So, what do you predict?


The Future Of Intellectuals

Back in 1991, … [a reporter] described Andrew Ross, a doyen of American studies, strolling through the Modern Language Association conference … as admiring graduate students gawked and murmured, “That’s him!” That was academic stardom then. Today, we are more likely to bestow the aura and perks of stardom on speakers at “ideas” conferences like TED. …

Plenty of observers have argued that some of the new channels for distributing information simplify and flatten the world of ideas, that they valorize in particular a quick-hit, name-branded, business-friendly kind of self-helpish insight—or they force truly important ideas into that kind of template. (more)

Across time and space, societies have differed greatly in what they celebrated their intellectuals for. Five variations stand out:

  • Influence – They compete to privately teach and advise the most influential folks in society. The ones who teach or advise kings, CEOs, etc. are the best. In many nations today, the top intellectuals do little else but teach the next generation of elites.
  • Attention – They compete to make op-eds, books, talks, etc. that get attention from the intellectual-leaning public. The ones most discussed by the snooty public are the best. Think TED stars today, or French public intellectuals of a generation ago.
  • Scholarship – They compete to master stable classics in great detail. When disputes arise on those classics, the ones who other scholars say win those disputes are the best. Think scholars who oversaw the ancient Chinese civil service exams.
  • Fashion – They compete to be first to be visibly associated with new intellectual fads, and to avoid association with out-of-fashion topics, methods, and conclusions. The ones who fashionable people say have the best fashion sense are the best. Think architecture and design today.
  • Innovation – They compete to add new results, methods, and conclusions to an accumulation of such things that lasts and is stable over the long run. Think hard sciences and engineering today.

Over the last half century, in the most prestigious fields and in the world’s dominant nations, intellectuals have been celebrated most for their innovation. But other standards have applied through most of history, in most fields in most nations today, and in many fields today in our dominant nations. Thus innovation standards are hardly inevitable, and may not last into the indefinite future. Instead, the world may change to celebrating the other four features more.

A thousand years ago society changed very slowly, and there was little innovation to celebrate. So intellectuals were naturally celebrated for other things that they had in greater quantities. The celebration of innovation got a big push from World War II, as innovations from intellectuals were seen as crucial to winning that war. Funding went way up for innovation-oriented intellectuals. Today, however, tech and business startups, and innovative big firms like Apple, have grabbed a lot of innovation prestige from academics. Many parts of academia may plausibly respond to this by celebrating other things besides innovation where those competitors aren’t as good.

Thus the standards of intellectuals may change in the future if academics are seen as less responsible for important innovation, or if there is much less total innovation within the career of each intellectual. Or maybe if intellectuals who are better at things other than innovation win their political battles within intellectual or wider circles.

If intellectuals were the main source of innovation in society, such a change would be very bad news for economic and social growth. But in fact, intellectuals only contribute a small fraction of innovation, so growth could continue on nearly as fast, even if intellectuals care less about innovation.

(Based on today’s lunch with Tyler Cowen & John Nye.)


Academic Stats Prediction Markets

In a column, Andrew Gelman and Eric Loken note that academia has a problem:

Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.

They consider prediction markets as a solution, but largely reject them for reasons both bad and not so bad. I’ll respond here to their article in unusual detail. First the bad:

Would prediction markets (or something like them) help? It’s hard to imagine them working out in practice. Indeed, the housing crisis was magnified by rampant speculation in derivatives that led to a multiplier effect.

Yes, speculative market estimates were mistaken there, as were most other sources, and mistaken estimates caused bad decisions. But speculative markets were the first credible source to correct the mistake, and no other stable source had consistently more accurate estimates. Why should the most accurate source be blamed for mistakes made by all sources?

Allowing people to bet on the failure of other people’s experiments just invites corruption, and the last thing social psychologists want to worry about is a point-shaving scandal.

What about letting researchers who compete for grants, jobs, and publications write critical referee reports and publish criticism – doesn’t that invite corruption too? If you are going to forbid all conflicts of interest because they invite corruption, you won’t have much left that you will allow. Surely you need to argue that bet incentives are more corrupting than other incentives.


`Best’ Is About `Us’

Why don’t we express and follow clear principles on what sort of inequality is how bad? Last week I suggested that we want the flexibility to use inequality as an excuse to grab resources when grabbing is easy, but don’t want to obligate ourselves to grab when grabbing is hard.

It seems we prefer similar flexibility on who are the “best” students to admit to elite colleges. Not only do inside views of the admission process seem to show careful efforts to avoid clarity on criteria, but ordinary people also seem to support such flexibility:

Half [of whites surveyed] were simply asked to assign the importance they thought various criteria should have in the admissions system of the University of California. The other half received a different prompt, one that noted that Asian Americans make up more than twice as many undergraduates proportionally in the UC system as they do in the population of the state. When informed of that fact, the white adults favor a reduced role for grade and test scores in admissions—apparently based on high achievement levels by Asian-American applicants. (more)

Matt Yglesias agrees:

This is further evidence that there’s no stable underlying concept of “meritocracy” undergirding the system. But rather than dedicating the most resources to the “best” students and then fighting over who’s the best, we should be allocating resources to the people who are most likely to benefit from additional instructional resources.

But this seems an unlikely strategy for an elite coalition to use to entrench itself. If we were willing to admit the students who would benefit most by objective criteria like income or career success, we could use prediction markets. The complete lack of interest in this suggests that isn’t really the agenda.

Much of law is like this, complex and ambiguous enough to let judges usually draw their desired conclusions. People often say the law needs this flexibility to adapt to complex local conditions. I’m skeptical.


Beware Star Academia

I recently saw the show Old Jews Telling Jokes, and was reminded of a big change in humor over the last century. The show was full of old-style jokes, i.e., jokes designed to be funny given only a moderate level of showmanship. Once upon a time the jokes we heard were mostly jokes that got passed around because lots of pretty ordinary folks could tell them and get laughs. Today, instead, most jokes we hear are told by professional comics, who mostly tell their own unique jokes integrated with their life story and personality. Few others, even professional comics, can get such laughs from these jokes.

A similar change happened in music. Once upon a time the songs we heard were mostly songs that got passed around because many relatively ordinary folks could sing them and sound good. Today instead we mostly hear songs designed to show off the particular abilities of particular musicians. We are less tempted to sing these songs to our friends, or even to ourselves. Further in the past, a similar change happened with stories. Once, the stories we heard were passed around because many story tellers could enthrall listeners with them, even with many details changed. Then after the invention of writing we have preferred to pass around the exact words of particular story-tellers.

These changes seem driven by the ability to pass around more exactly the particular performances of particular artists. When we have that option, we take it eagerly. We might think we mainly like the jokes, songs, and stories, and that artists are just vehicles for getting to those. But it seems instead that we care more about admiring the abilities of particular artists, and that jokes, songs, and stories are mostly vehicles to showcase artists.

If, as I have suggested, academia mainly functions to let us affiliate with impressive intellectuals, then academia should be at risk of suffering the same trend. That is, once upon a time we passed around the intellectual arguments and claims that a wide range of speakers could use in many contexts to persuade many listeners. But as we have gained better abilities to pass around the particular ways that particular speakers argue for claims, the above trend in jokes, songs, and stories suggests that we did or will switch to focus more on the particular ways that particular intellectuals express and elaborate claims and arguments, and less on the claims and arguments themselves.

This is a problem because we have stronger reasons to expect that the arguments and claims that many people can use in many contexts to persuade varied listeners are more likely to be true, relative to those designed more to be parts of overall impressive displays by particular persons in particular contexts. If listeners actually care less if claims are true than if claimers are impressive, we should expect that when the audience for intellectuals can get better access to a rich personal display of attempted persuasion, they will lose much of their derived interest in the truth of claims. After all, maybe the audience never really cared that much if the claims were true – they mainly cared about claim truth as evidence of claimer impressiveness.

I’ve actually seen a lot that looks like this in my intellectual travels over the years. For example, many famous classic texts, especially in philosophy, are said to be popular because they can’t be effectively summarized or rephrased for a modern audience; to assimilate their insights, one must read the original authors in the original voices, even if their issues and styles are strange to us. We should suspect that folks read these classics less for insights and more for admiring and affiliating with impressive minds.

Also, I have seen people take arguments that others have made and express them with a bit more elegance and status, perhaps using more difficult methods, and get famous for originating such arguments, even when they mostly repeated what others said. It seems that people pretend that they celebrate these folks for originating certain arguments, but really want to admire and affiliate with their impressiveness.

Where could you go if you wanted to get the robust arguments, instead of affiliating with impressive intellectuals? First, read textbooks. I heartily recommend textbooks in most any subject. In fact, it is hard to do better than just sitting in a university bookstore and reading all the intro texts they have. Long ago I spent many days in the Stanford bookstore doing just that. Once you are done with textbooks, review articles are the next most robust option. And beware when interest in a topic seems to focus mainly on a particular author, and doesn’t transfer much to others who write on that same topic.


Impressive Power

Monday I attended a conference session on the metrics academics use to rate and rank people, journals, departments, etc.:

Eugene Garfield developed the journal impact factor a half-century ago based on a two-year window of citations. And more recently, Jorge Hirsch invented the h-index to quantify an individual’s productivity based on the distribution of citations over one’s publications. There are also several competing “world university ranking” systems in wide circulation. Most traditional bibliometrics seek to build upon the citation structure of scholarship in the same manner that PageRank uses the link structure of the web as a signal of importance, but new approaches are now seeking to harness usage patterns and social media to assess impact. (agenda; video)
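The two metrics named above are simple enough to state precisely. A minimal sketch (assuming only a list of per-paper citation counts, and aggregate counts for the impact factor's two-year window) looks like this; the function names are mine, not from any standard library:

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has
    h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break
    return h

def impact_factor(cites_this_year, items_prior_two_years):
    """Garfield's journal impact factor: citations received this year
    to items published in the prior two years, divided by the number
    of citable items published in those two years."""
    return cites_this_year / items_prior_two_years
```

For example, an author whose papers have citation counts [10, 8, 5, 4, 3] has an h-index of 4: four papers each have at least 4 citations, but not five papers with at least 5.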

Session speakers discussed such metrics in an engineering mode, listing good features metrics should have, and searching for metrics with many good features. But it occurred to me that we can also discuss metrics in social science mode, i.e., as data to help us distinguish social theories. You see, many different conflicting theories have been offered about the main functions of academia, and about the preferences of academics and their customers, such as students, readers, and funders. And the metrics that various people prefer might help us to distinguish between such theories.

For example, one class of theories posits that academia mainly functions to increase innovation and intellectual progress valued by the larger world, and that academics are well organized and incentivized to serve this function. (Yes such theories may also predict individuals favoring metrics that rate themselves highly, but such effects should wash out as we average widely.) This theory predicts that academics and their customers prefer metrics that are good proxies for this ultimate outcome.

So instead of just measuring the influence of academic work on future academic publications, academics and customers should strongly prefer metrics that also measure wider influence on the media, blogs, business practices, ways of thinking, etc. Relative to other kinds of impact, such metrics should focus especially on relevant innovation and intellectual progress. This theory also predicts that, instead of just crediting the abstract thinkers and writers in an academic project, there are strong preferences for also crediting supporting folks who write computer programs, build required tools, do tedious data collection, give administrative support, manage funding programs, etc.

My preferred theory, in contrast, is that academia mainly functions to let outsiders affiliate with credentialed impressive power. Individual academics show exceptional impressive abstract mental abilities via their academic work, and academic institutions credential individual people and works as impressive in this way, by awarding them prestigious positions and publications. Outsiders gain social status in the wider world via their association with such credentialed-as-impressive folks.

Note that I said “impressive power,” not just impressiveness. This is the new twist that I’m introducing in this post. People clearly want academics to show not just impressive raw abilities, but also to show that they’ve translated such abilities into power over others, especially over other credentialed-as-impressive folks. I think we also see similar preferences regarding music, novels, sports, etc. We want people who make such things to show not only that they have impressive abilities in music, writing, athletics, etc., but also that they have translated such abilities into substantial power to influence competitors, listeners, readers, spectators, etc.

My favored theory predicts that academics will be uninterested in and even hostile to metrics that credit the people who contributed to academic projects without thereby demonstrating exceptional abstract mental abilities. This theory also predicts that while there will be some interest in measuring the impact of academic work outside academia, this interest will be mild relative to measuring impact on other academics, and will focus mostly on influence on other credentialed-as-impressives, such as pundits, musicians, politicians, etc. This theory also predicts little extra interest in measuring impact on innovation and intellectual progress, relative to just measuring a raw ability to change thoughts and behaviors. This is a theory of power, not progress.

Under my preferred theory of academia, innovation and intellectual progress are mainly side-effects, not main functions. They may sometimes be welcome side effects, but they mostly aren’t what the institutions are designed to achieve. Thus proposals that would tend to increase progress, like promoting more inter-disciplinary work, are rejected if they make it substantially harder to credential people as mentally impressive.

You might wonder: why would humans tend to seek signals of the combination of impressive abilities and power over others? Why not signal these things separately? I think this is yet another sign of homo hypocritus. For foragers, directly showing off one’s power is quite illicit, and so foragers had to show power indirectly, with strong plausible deniability. We humans evolved to lust after power and those who wield power, but to pretend our pursuit of power is accidental; we mainly just care about beauty, stories, exciting contests, and intellectual progress. Or so we say.

So does anyone else have different theories of academia, with different predictions about which metrics academics and their customers will prefer? I look forward to the collection of data on who prefers which metrics, to give us sharper tests of these alternative theories of the nature and function of academia. And theories of music, stories, sport, etc.
