Elite Evaluator Rents

The elite evaluator story discussed in my last post is this: evaluators vary in the perceived average quality of the applicants they endorse. So applicants seek the highest ranked evaluator willing to endorse them. To keep their reputation, evaluators can’t consistently lie about the quality of those they evaluate. But evaluators can charge a price for their evaluations, and higher ranked evaluators can charge more. So evaluators who, for whatever reason, end up with a better pool of applicants can sustain that advantage and extract continued rents from it.

This is a concrete plausible story to explain the continued advantage of top schools, journals, and venture capitalists. On reflection, it is also a nice concrete story to help explain who resists prediction markets and why.

For example, within each organization, some “elites” are more respected and sought after as endorsers of organization projects. The better projects look first to get the endorsement of these elites, allowing those elites to sustain a consistently higher quality of the projects they endorse, and to extract higher rents from those who apply to them. If such an organization were instead to use prediction markets to rate projects, elite evaluators would lose such rents. So such elites naturally oppose prediction markets.

For a more concrete example, consider that in 2010 the movie industry successfully lobbied the US congress to outlaw the Hollywood Stock Exchange, a real money market just then approved by the CFTC for predicting movie success, and about to go live. Hollywood is dominated by a few big studios. People with movie ideas go to these studios first with proposals, to gain a big studio endorsement, to be seen as higher quality. So top studios can skim the best ideas, and leave the rest to marginal studios. If people were instead to look to prediction markets to estimate movie quality, the value of a big studio endorsement would fall, as would the rents that big studios can extract for their endorsements. So studios have a reason to oppose prediction markets.

While I find this story as stated pretty persuasive, most economists won’t take it seriously until there is a precise formal model to illustrate it. So without further ado, let me present such a model. Math follows.

Let a unit quantity of applicants have quality x uniformly distributed over the range [0,1]. An evaluator i claims that its endorsed applicants have a quality of at least x_i, and later suffers prohibitive penalties if such claims are ever found to be wrong. Thus an evaluator who chooses limit x_i can actually only endorse applicants for whom x ≥ x_i. There are N evaluators, indexed i in [1,N], who are endowed with different prior reputations that restrict their choice of limit x_i. Evaluator i must choose x_i in [0, 2^(i-N)), because observers just won’t believe that it could attract applicants of quality x ≥ 2^(i-N).

An evaluator who charges price p to accurately endorse the set of applicants in the range [a,b] gains profit p*(b-a); evaluators have no other costs or revenues. Applicants who pay price p to be endorsed as having quality x ≥ a gain net value V = a – p because of how they are treated by later observers. This value is not larger, due to adverse selection in the later observation process.

The order of play is as follows. First, evaluators choose sequentially in order of increasing index i; each i chooses both price p_i and quality limit x_i simultaneously. After evaluators have chosen, applicants, knowing all the p_i and x_i and their own quality x, simultaneously each choose an evaluator. Finally evaluators choose whether or not to endorse each of their applicants. (We get the same results if applicants don’t know their x, and can repeatedly apply to evaluators until one endorses them.) Let i=0 correspond to paying nothing and getting no endorsement, with x_0 = p_0 = 0.

A simple (and maybe unique) equilibrium of this game is: each evaluator i chooses p_i = x_i = 2^(i-N-1), each applicant applies to the highest i such that their x ≥ x_i, and then all applicants are accepted. (Applicants with x < x_1 “apply” for no endorsement and get it.) All applicants get exactly zero net value, and evaluator i endorses a quantity 2^(i-N-1) of applicants, gaining profit 2^(2(i-N-1)).
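For concreteness, here is a minimal Python sketch (my own check, assuming N = 3) that tabulates this equilibrium and verifies the endorsement quantities, the profits, and the zero applicant surplus:

```python
# Equilibrium of the evaluator game: p_i = x_i = 2^(i-N-1).
N = 3
x = {i: 2 ** (i - N - 1) for i in range(1, N + 1)}  # quality cutoffs
p = dict(x)                                          # prices equal cutoffs
x[N + 1] = 1.0                                       # top of the quality range

for i in range(1, N + 1):
    quantity = x[i + 1] - x[i]      # measure of applicants endorsed by i
    profit = p[i] * quantity
    assert quantity == 2 ** (i - N - 1)       # higher rank endorses more
    assert profit == 2 ** (2 * (i - N - 1))   # and earns more
    assert x[i] - p[i] == 0                   # applicants net exactly zero
```

All quantities are exact powers of two, so the equality checks hold without floating-point tolerance.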

Note that higher ranked evaluators endorse more applicants, and gain more profits. “Big” goes with “high.” And evaluators take all the gains in this world; applicants get nothing.

Proof: For offer x_i, p_i to beat offer x_{i-1}, p_{i-1}, the max p_i given x_i must satisfy x_i – p_i ≥ x_{i-1} – p_{i-1}, which gives x_i – p_i = x_{i-1} – p_{i-1} and p_i = p_{i-1} + x_i – x_{i-1}. Assume x_i = c_i + (1 – c_i)(x_{i-1} – p_{i-1}). This gives the correct x_{N+1} = 1 with c_{N+1} = 1, and substituting these into profit π_i = p_i(x_{i+1} – x_i) gives π_i = (x_i + p_{i-1} – x_{i-1})(c_{i+1} + (1 – c_{i+1})(x_{i-1} – p_{i-1}) – x_i). Maximizing π_i with respect to x_i gives first order condition x_i = c_{i+1}/2 + (1 – c_{i+1}/2)(x_{i-1} – p_{i-1}), which confirms the assumption with c_i = c_{i+1}/2. Combined with c_{N+1} = 1 and x_0 – p_0 = 0, this gives x_i = p_i = c_i = 2^(i-N-1).
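The backward recursion in the proof can be checked numerically. Here is a small Python sketch (my own verification, brute-force grid search): with c_(N+1) = 1 and zero outside surplus for applicants, each evaluator's profit x_i(c_(i+1) – x_i) peaks at x_i = c_(i+1)/2, which halves the cutoff at each step and reproduces the closed form c_i = 2^(i-N-1):

```python
# Verify the recursion c_i = c_{i+1}/2 by grid search, for N = 4.
N = 4
c = {N + 1: 1.0}
for i in range(N, 0, -1):
    # With x_{i-1} - p_{i-1} = 0, evaluator i's profit is x_i * (c_{i+1} - x_i).
    grid = [k / 100000 for k in range(100001)]
    best_x = max(grid, key=lambda xi: xi * (c[i + 1] - xi))
    assert abs(best_x - c[i + 1] / 2) < 1e-4   # first order condition
    c[i] = c[i + 1] / 2

for i in range(1, N + 1):
    assert c[i] == 2 ** (i - N - 1)            # matches the closed form
```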

  • Why wouldn’t the elite prediction markets obtain rents? [Perhaps that explains why some folks favor them.]

    • IMASBA

      What incentive is there for good predictors to pay a premium to place bets in markets that tend to be more accurate? The return on a correct bet in such a market would actually be lower than in a market with lots of bad predictors. Prediction markets have no self-fulfilling prophecy (“positive feedback loop”) powers and I doubt people will ever derive significant status increases from bragging that they placed a bet in a particular market.

      • The rents would come not from the bettors but from those who need predictions. They would be drawn from the subsidies that prediction markets would have to offer bettors interested in investment rather than excitement.

      • IMASBA

        The clients who need predictions might be inclined to pay a premium for the predictions of more accurate markets, markets that consistently hold better predictors as per Robin’s universities and VCs examples. However, unlike with the universities and VCs there is no incentive for the good predictors to stick with those more accurate markets, in fact they have a disincentive (a good predictor makes more money in an inaccurate market), so the advantage would be short lived.

      • there is no incentive for the good predictors to stick with those more accurate markets, in fact they have a disincentive (a good predictor makes more money in an inaccurate market), so the advantage would be short lived.

        I’m not sure I follow your use of “predictor.” We’re talking about the clients who need predictions, right? They have incentives to stick with prediction-market firms who are more accurate because what they’re in the market for is accurate prediction.

      • IMASBA

        No, “predictors” are the people making the bets.

      • Prediction firms will get bettors if they offer adequate subsidies. (Otherwise prediction markets are worse than a zero-sum game; for entertainment, not investment).

        Obtaining clients (subsidizers) is what lets them get people to bet.

        [I haven’t seen RH discuss clients and subsidies–except to admit they will sometimes be necessary. More than sometimes necessary, they will usually be essential.]

      • IMASBA

If subsidies become significant to accurate prediction, and if only a few markets can acquire enough subsidies to get accurate predictions, then that will lead to an oligopoly in the market for public prediction markets. Still, it might very well be that rents stay low because the higher subsidies mostly go towards market participants (predictors) instead of the markets’ managers.

      • [Can we call the participants “bettors”?]

The same indeterminacy would have beset any prediction that the top universities and VC firms would become as dominant as they are. In other words, it isn’t at all obvious that prediction markets are the solution to high evaluator rents.

To get beyond the indeterminacy in the prediction, we need to explain why evaluation is a field of endeavor that is prone to prestige effects. The reason isn’t hard to fathom: predicting important things is high status, whether done by universities or prediction firms.

      • IMASBA

        Well, in the case of VC it’s not really expensive to maintain a better clientele. For universities there are expenses (they have to be somewhat demonstratively better, offer better facilities, etc…) but the (potential) clientele has long accepted payment of enormous fees for those marginally better results, plus merely being exclusive enhances value as well. Both present prime rent extraction opportunities.

        Public prediction markets would really have to spend a lot more to get significantly more accurate predictions. They can then surely charge higher fees but it remains to be seen whether they can charge such high fees that they can subsidize the bettors AND extract a hefty amount of rent. This might be possible if they go the way of consultancy (being used to provide status to already decided policy choices).

      • it remains to be seen whether they can charge such high fees that they can subsidize the bettors AND extract a hefty amount of rent.

        If they have sufficient market power (which RH has shown includes prestige), they will do what monopolies do: restrict “production,” meaning only taking the most elite bettors who are willing to pay high fees.

        Again, think Harvard. There will also be prediction markets like podunk community colleges.

        Orgs that hire these prediction markets will be looking for prestigious decisions, to quell rival factions (as RH has much discussed and you mention).

        [I don’t see that the expensiveness of the service is a factor in the determination of the size of the rents obtained.]

    • Zvi Mowshowitz

      If they are sufficiently large they do obtain rents by charging fees, but Intrade could not make enough that way to even keep the lights on. That is a small percentage of the value of being seen as high quality. By contrast, colleges/VCs/other elite evaluators can capture most or even all of the value of being seen as high quality.

      • If prediction markets are to be widely applicable, they’ll have to be subsidized. The big rents will come from the subsidies. Unsubsidized markets like Intrade are poor models for widely applied prediction markets, being a form of betting rather than investing.

      • I don’t think you understand what “rent” means.

      • I don’t think you understand what “rent” means.

        That would be an interesting surprise, but it seems more likely that you don’t understand my argument. Let me elaborate it.

        The future elite prediction markets obtain some big clients (by which I mean persons or orgs who need predictions). These prediction firms gain a reputation for giving the best predictions because they can obtain the biggest subsidies and thereby attract the most bettors. Future clients look to them as giving the best predictions. This leads to future bettors selecting them because of bigger subsidies.

        The elite prediction companies obtain huge bet subsidies from which they extract “unearned” profits (rents) because of the feedback loop running from getting big clients before their competitors.

        Rents are basically the result of monopoly, although monopoly takes diverse forms. What you’ve argued is that monopoly power can arise because of prestige when the right feedback loops are in place. Just like elite universities gain prestige by attracting the best students and professors and thereby the monopoly power to extract rents, there will be elite prediction markets that attract the best (richest) clients simply because they already have the best.

        [Subsidies (hence “clients”) seem to me to be of the essence. Where you’ve mentioned them, it’s more as an afterthought.]

      • Silent Cal

        Isn’t the prediction quality on a given question a fairly transparent function of the subsidy level of that question (at least in a thick market)?

I agree that transparency would reduce the “surplus prestige” of the company, but I doubt that subsidy would be the sole factor. I’d think the prediction companies would offer clients considerable assistance in formulating their betting propositions.

  • Douglas Knight

    But successful colleges and VCs do not appear to charge more than unsuccessful ones.

  • LemmusLemmus

    Shouldn’t that be a paper?

  • kevinsdick

    I see a couple of potential issues with this model, at least as it relates to VCs.

First, it assumes that the startup executives know their own quality. Given the general work on overconfidence, this seems unlikely. Also, my firm has invested in 280 startups over the last 3 years, interviewed probably 1000, and all the founders seem to genuinely think there is little doubt they will succeed. They’ve all placed large financial bets on succeeding, so it’s hard to tell whether any of them have a stronger private belief in their success. Now, we typically invest 2-3 rounds before VCs like a16z, so perhaps it’s different, but we do observe a fair number of our companies that subsequently receive such investments.

    Also, your model assumes a decent level of fidelity in VCs ability to judge quality. The best evidence I’ve seen is that past performance explains perhaps a quarter of future performance:


    Moreover, it’s reasonably well known in the industry that _new_ fund managers tend to outperform existing ones:


    I’d like to see what happens to your model when applicants have poor knowledge of their own quality, evaluators’ quality decays over time, and new evaluators with good quality constantly enter the game.

    • IMASBA

      “I see a couple of potential issues with this model, at least as it relates to VCs.

      First, it assumes that the startup executives know their own quality.”

      Does it really assume that? I think the model merely says startup executives tend to go to well reputed/known VCs first, those VCs then get first pickings.

      “Also, your model assumes a decent level of fidelity in VCs ability to judge quality.”

      Yes, and that could be a weakness of the model when it comes to VCs, then again there might be enough low-hanging fruit to make a difference.

      • kevinsdick

        Robin’s model explicitly states: “After evaluators have chosen, then applicants, knowing all the pi and xi and their own quality…”

So yes, it does assume that. He may be able to construct his proof under weaker assumptions, of course.

        My guess is that introducing stochastic elements for applicants knowing their own quality, evaluators being able to judge true quality, and random entry by evaluators with unknown but potentially superior judgement will make the equilibrium much less clear.

      • But I said you get the same results if they don’t know but search for the best evaluator to approve them.

      • kevinsdick

        Sorry. I missed that on my small screen. Though now I’m curious about search costs from the candidate’s perspective.

        How do things change if the evaluators aren’t very skilled?

      • As long as they aren’t competing with anyone who is more skilled, the model works fine. Replace x with E[x].

      • kevinsdick

        Yes, but to my other point, there is a constant stream of new entrants. We have some evidence that some of them may be more skilled.

If you add in search costs for candidates, you have a more complicated situation. They have to decide when to stop. They can’t necessarily afford to get evaluations from every evaluator.

        Conversely, an evaluator therefore can’t assume they’ll see every candidate. Evaluators also have a limited budget. Moreover, they know there may be new evaluators with more skill than they. Now they have to calculate when to stop.

Given that evaluator and candidate also negotiate over price (valuation), surely there is a complex bargaining calculation here, with lower-reputation (but potentially higher true skill) evaluators offering higher prices to high-quality candidates, and high-reputation evaluators holding out for lower prices at the risk of getting lower-quality candidates.

      • If you are trying to convince me that the world is more complex than my simple model, why bother? All economists know the world is more complex than all of our models.

      • kevinsdick

        In my experience, if you are trying to figure out how gains are allocated among two parties, ignoring the price of their exchange is too simple.

      • arch1

        If this means that your model implicitly assumes that evaluators are equally skilled, shouldn’t you make that explicit?

  • Wei Dai

    If I understand correctly, in this model you’re assuming from the outset that the top half of all applicants can’t be attracted to any evaluator except one (evaluator i=N) or at least that all observers believe this to be the case. This seems to be assuming the very thing that you’re trying to explain, i.e., why do a few elite evaluators attract all the top applicants. Would it be possible to remove this assumption from the model, and show how such an outcome/belief could arise endogenously in equilibrium?

    • I think I could remove that assumption and still get the same answer. But I’m not sure.

      • Wei Dai

Hmm, it does seem to work. Suppose there are N=2 evaluators who choose pi,xi sequentially without restrictions on pi,xi. Applicants always choose an evaluator willing to endorse them with the largest xi-pi, and in case of ties, choose the evaluator that moves last (intuitively, an evaluator that moves later can always undercut the previous ones slightly). Knowing that, evaluator 2 should maximize profits while keeping x2-p2 = x1-p1. Knowing that, evaluator 1 should always choose x1=p1. When x1=p1, evaluator 2 maximizes profits at x2=p2=1/2 regardless of what x1 actually is. Knowing that, evaluator 1 maximizes profits by choosing x1=p1=1/4. This clearly generalizes to N>2.

        Anything wrong with this reasoning?
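Wei Dai's N=2 backward induction is easy to check by brute force. A quick Python sketch (grid search, my own construction; it assumes ties go to the later mover, so p_i = x_i throughout):

```python
# Wei Dai's unrestricted N=2 case: later movers can price-match, so in
# equilibrium p_i = x_i and each evaluator maximizes x_i * (measure endorsed).
grid = [k / 10000 for k in range(10001)]

# Evaluator 2 endorses [x2, 1]; given x1 = p1, its profit is x2 * (1 - x2).
x2 = max(grid, key=lambda x: x * (1 - x))
assert abs(x2 - 0.5) < 1e-3

# Evaluator 1 endorses [x1, x2); anticipating x2 = 1/2, profit is x1 * (x2 - x1).
x1 = max(grid, key=lambda x: x * (x2 - x))
assert abs(x1 - 0.25) < 1e-3
```

The grid search recovers exactly the values claimed: x2 = p2 = 1/2 and x1 = p1 = 1/4.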

      • Yes N=2 is ok. But for N>2 I assumed each i takes into account the reaction of j>i to their choice of x_i p_i, but that they take as given the x_j p_j choices for j<i.

      • Wei Dai

(Guess you meant for “j&lt;i” and “j&gt;i” to be swapped?) In any case, my reasoning seems to generalize to N>2. Take N=3 for example. Again, no evaluator can do better by offering pi < xi, since a later evaluator can always “price match” and we’re considering an equilibrium in which applicants always choose the last evaluator in case of ties. And again, as long as p1=x1 and p2=x2, evaluator 3 always maximizes profit at p3=x3=1/2. Knowing what evaluator 3 will do, evaluator 2 does best by choosing p2=x2=1/4 no matter what p1,x1 is, as long as p1=x1. And knowing what evaluators 2 and 3 will do, evaluator 1 does best by choosing p1=x1=1/8.

      • Silent Cal

        This doesn’t seem like the right dynamic; movement order is doing all the work. The answer to “What Does Harvard Do Right?” probably isn’t ‘move last’.

        In particular, I think the model should depend on state–it shouldn’t leap into the expected pattern in the first round.

        I tried to capture this by building a model where applicants’ rewards come from agents trying to guess their value based on their evaluators and results of previous rounds. It got complicated, but it looked like evaluators would have incentive to increase their standards and/or cut their price, incurring a one-term loss (until observers noticed their increased quality) in exchange for a recurring gain.

      • Wei Dai

        I wanted to make sure I understood Robin’s model before evaluating it, but I agree that if movement order is doing all the work, that does not seem like a good explanation of real-world “elite evaluator rents”. Would be interested to see your model when you finish it.

      • The point of the model isn’t crystal clear to me. [‘Economists love formal models’ isn’t elucidating.] But I gather that the part “economists” would question in the absence of a formal model is “evaluators who, for whatever reason, end up with a better pool of applicants can sustain that advantage and extract continued rents.” (Emphasis added.)

        Does it take much to show that it’s possible for prestige to be self-sustaining? That seems all he’s aiming for. You seem to expect a model that helps establish the argument by yielding illuminating (unexpected) results.

      • Silent Cal

        I posted it as a top-level comment. I’m not an economist, and it could probably be cleaner, but it does reproduce the following facts:
        -If evaluators do not change their quality, applicants and observers have the equilibrium we’re postulating.
        -Evaluators who lower their quality will make money in the short term but pay for it later.
        -Evaluators can only increase their quality by taking losses in the short term–either lowering their prices below short-term profit-maximizing level, or raising their quality without any immediate price raise.

        I thought the second point was a bug, but it might actually be a feature–I suspect that a university in real life actually could eventually raise its prestige given enough money and patience. So the conditions for self-sustaining prestige might actually have to do with time horizon/capital constraints.

    • This seems to be assuming the very thing that you’re trying to explain, i.e., why do a few elite evaluators attract all the top applicants.

What he’s trying to explain is why elite evaluators can attract all the top applicants despite being no better at evaluation. Why the evaluators we look to are few in number is a different question (and probably a reasonable assumption).

  • So without further ado, let me present such a model.

My guess is that most readers would have been grateful for more ado. Between an intuitive description of the general problem and a formal example, it would be most helpful to have an intuitive description of the strategy to be used in constructing the formal example. [Philosophers are good at doing this; mathematicians and economists are not.] (See “Overzealous concision: Density” — http://disputedissues.blogspot.com/2009/08/overzealous-concision-density.html )

  • Silent Cal

    Here’s my attempt at an alternate model:

    In addition to applicants and evaluators, introduce a third agent, the observer (corresponding to the ‘later observers’ in OP who determine the applicants’ reward; this is a single agent for simplicity). The observer plays after the endorsement phase, and tries to accurately assess the quality of each applicant. I don’t think the exact incentive structure will matter too much, so just say the observer pays a cost equal to the squared difference between their assessment and the actual quality. The observer’s only information about this round’s applicants is what evaluator endorsed them. They also have accurate historical info on the quality applicants endorsed by each evaluator in prior rounds. They do not know what quality the evaluator is claiming to enforce this round.

Applicants’ reward is now the difference between the observer’s assessment of them and the price of their evaluator, and they know the assessments of applicants from prior rounds. Evaluators must determine their price and quality simultaneously with one another, and they have no exogenous constraints on what quality they will choose. Applicants will choose randomly if their expected reward is tied. Otherwise the game is as above.

    Now, as an ansatz in our search for equilibrium, suppose that observers assess each candidate as having the average quality of last round’s applicants from the same evaluator, each evaluator has chosen quality xi=2^(i-N-1) and charges a price pi=xi, and each applicant with quality x chooses the evaluator with the greatest xi s.t. xi < x.

This implies that the candidates endorsed by evaluator i will have qualities uniformly distributed in the range (xi, x(i+1)), so the observer will have no incentive to spontaneously deviate. It also implies that the assessment for applicants endorsed by evaluator i will be (xi + x(i+1))/2, so applicants have no incentive to spontaneously deviate.

    That leaves the evaluators, which is the hard part. I'm out of time right now, but some general thoughts.
    -It seems like a combination of price cutting and quality raising could lift an evaluator's long-term earnings, possibly at short-term cost–so this is not a stable equilibrium.
    -We might correct this by changing the incentives of the evaluators; perhaps future rounds' earnings are time discounted.
    -But this could lead to the counterintuitive result that high-ranking evaluators will slash their quality for a one-round profit at the expense of the future–maybe we want some kind of loss aversion relative to current income?
    -We could also try giving the observer a noisy history to give them an incentive to look at the average of many prior rounds, making rank climbing slower.

    • Wei Dai

      I like the direction you’re taking here, making it a repeated game, but it seems unnatural to assume that applicants know xi, but the observer doesn’t. And that assumption seems to be doing a lot of work here, because without it, an evaluator would be much more tempted to compete with higher ranking evaluators by raising its xi, since the observer would update on that and raise its estimates of that evaluator’s applicants for the current round.

      (Another problem: why doesn’t evaluator N raise its price from 1/2 to 5/8? At pN=1/2, its applicants are going to get a profit of 1/4. With pN=5/8, that profit drops to 1/8, which is still no worse than the next best choice of going to evaluator N-1.)
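The arithmetic in the parenthetical checks out, under Silent Cal's assumption that observers assess each applicant at the average quality of the evaluator's endorsed range. A quick Python sketch (my own check; the helper `net` is hypothetical):

```python
# Applicant's net value = observer's assessment minus evaluator's price.
def net(lo, hi, price):
    assessment = (lo + hi) / 2   # average quality of endorsed range (lo, hi)
    return assessment - price

top_now    = net(0.5, 1.0, 0.5)    # evaluator N at p_N = 1/2
top_raised = net(0.5, 1.0, 0.625)  # evaluator N deviates to p_N = 5/8
runner_up  = net(0.25, 0.5, 0.25)  # next best: evaluator N-1

assert top_now == 0.25             # applicants currently net 1/4
assert top_raised == 0.125         # after the raise they net 1/8 ...
assert top_raised == runner_up     # ... exactly matching the next best option
```

So at p_N = 5/8 the top applicants are exactly indifferent between evaluators N and N-1, making the price raise weakly profitable for evaluator N.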

      What if we change your setup so that applicants don’t know xi either, and have to infer based on past data? Intuitively, no evaluator would want to reduce xi, since it can’t attract any additional applicants that way. Raising xi is costly in the current round, so it won’t be done as long as discount rate is high enough… But no, that depends too much on each applicant being able to apply to only one evaluator, which is not realistic…

  • stevesailer

What does your model predict for Marc Andreessen’s six-year-old venture capital start-up?

Well, the spirit of the model says it is surprising that a new VC firm could be ranked so highly. He needed to bring in some strong status markers to make that work.

      • stevesailer

        Thanks. Andreessen and Horowitz went to Mike Ovitz for advice, and they conceive of their VC firm as resembling a Hollywood talent agency. Ovitz and four other agents left the William Morris Agency in 1975 and started CAA, so that’s a famous example of a successful entry into an Elite Evaluator business.

        My vague impression is that movie talent agencies tend to have a lot of nominal stability in terms of the William Morris Agency and CAA being a big deal decade after decade, but there is also much tumult behind the scenes at agencies with coups and desertions and the like. The movie trade papers follow the ups and downs within agencies closely, but I don’t follow them.

  • stevesailer

    The existing Hollywood Stock Exchange, which plays without real money, is useful because it encourages insider trading.
