College Admission Markets

This article by Ron Unz is long and rambles a bit, but deserves its provocative reputation. It offers data suggesting that over the last few decades the most elite US colleges have had systematically biased admissions, against Asians and for Jews, when measured against other standards, like tests and top math/sci competitions. Given the strong academic rhetoric against racial discrimination, you might expect this to cause a furor, and to result in big changes soon. But I don’t expect much soon – most academics are from those schools and benefited from those biases, loud complaining isn’t the Asian style, and the larger society doesn’t much care because this discrimination is mostly limited to these schools.

The problem comes mainly from granting discretion to admissions personnel to make subjective judgements. One solution is to just use objective features like test scores. But Unz worries about ambitious kids wasting their youth in mostly useless test prep. Also, application packets may contain other useful but harder to read clues about promising students. Unz instead prefers to admit “qualified” students at random, at least for most of the slots. But once everyone knew for sure that the elite schools didn’t actually have much better students, it isn’t clear why they would remain the elite schools.

As usual, my solution involves prediction markets. As I posted here five years ago, we could hide clearly identifying info about students, post their application packets to the web for all to see, and let anyone bet on the consequences of each student going to each school. Students might care about their chance of graduating, their income later, and some measure of satisfaction. Elite schools might care more about the chances of students being “successful” someday. Different schools might use different measures of success, such as with different weights for achievement in sports, politics, business, arts, etc. Schools could admit the students with the best chance to succeed by their measure, and students could apply to and then go to the school giving the best chance of achieving their goals. Or students could not go to school at all, if that was estimated to be best.
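To make the mechanism concrete, here is a minimal Python sketch of how such market estimates might be used on both sides of the match. The contract structure, outcome labels, weights, and numbers are all hypothetical illustrations, not a worked-out market design.

```python
# Minimal sketch of the admission-by-market idea described above.
# All names, outcome labels, weights, and probabilities are made up
# for illustration; the real contract design is left open in the post.

# Market-implied probability estimates for each (student, school) pair,
# conditional on the student attending that school.
market_estimates = {
    ("student_017", "School A"): {"graduates": 0.91, "notable_career": 0.12},
    ("student_017", "School B"): {"graduates": 0.95, "notable_career": 0.07},
    ("student_042", "School A"): {"graduates": 0.88, "notable_career": 0.21},
    ("student_042", "School B"): {"graduates": 0.97, "notable_career": 0.05},
}

# Each school declares its own measure of "success" as weights over outcomes.
school_weights = {
    "School A": {"graduates": 0.3, "notable_career": 0.7},
    "School B": {"graduates": 0.8, "notable_career": 0.2},
}

def school_score(student, school):
    """Expected success of a student at a school, by that school's own measure."""
    probs = market_estimates[(student, school)]
    weights = school_weights[school]
    return sum(w * probs[outcome] for outcome, w in weights.items())

def admit(school, applicants, slots):
    """A school admits the applicants with the best market-estimated chance
    of succeeding by its declared measure."""
    ranked = sorted(applicants, key=lambda s: school_score(s, school), reverse=True)
    return ranked[:slots]

def choose(student, offers):
    """A student picks the admitting school that best serves her own goal
    (here, simply the chance of graduating)."""
    return max(offers, key=lambda sc: market_estimates[(student, sc)]["graduates"])

print(admit("School A", ["student_017", "student_042"], slots=1))  # ['student_042']
print(choose("student_042", ["School A", "School B"]))             # School B
```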

Of course speculators will favor students showing concrete signs of future success, and so ambitious students would spend their youth trying to achieve such signs. But instead of locking in particular limited metrics like standard test scores, where prep efforts are mostly wasted, this process would create an open competition to find signs of future success where efforts to gain them are more useful. After all, your chance of success later should be higher the more the signs you pursue push you to gain useful skills and habits in the process.

Yes it would be hard to get people to accept that such markets are accurate and hard-to-manipulate enough for this purpose. But equally hard, I expect, would be getting elite schools to say explicitly what sort of success they most want from students. They probably pretend to care more about admirable success, like being a famous writer, than they actually do.

Added 8p: Regarding anonymity, an obvious solution is for the official application to be completely public. Usually only a small fraction of the relevant application info will be things that are better kept private. Regarding that info, the applicant can just reveal that extra private info to a few trusted folks who are willing to trade in these markets. Markets do not need all traders to know all relevant info to work well.

Added 30Oct2013: It appears that Unz’s data was faulty.

  • gwern0

    > we could hide clearly identifying info about students

    Impossible. You can’t do this without eliminating essentially any and all information that investors could use. The history of anonymizing data is a long and dismal one with exactly one conclusion: anonymizing large datasets doesn’t work. (I collected a few of the more interesting results in http://www.gwern.net/Death%20Note%20Anonymity#de-anonymization but that’s just the tip of the iceberg.) You can’t even publish the school or class rank without destroying pseudonymity, much less key info like standardized tests/extracurriculars/awards/race/family-income etc. I won’t comment on the rest of it but the claim that you can evaluate the students anonymously (via admission markets or otherwise) has got to go and the bullet bitten.

    • Robin Hanson

      And yet people use such data sets all the time, and people’s identity is rarely revealed as a result. There is a big difference between an in-principle lack of a guarantee that someone might combine clues to figure out who you are, and the in-practice rate at which the people one wouldn’t want to find out actually do find out.

      • Rasputin

        Wouldn’t people care quite a bit about identity in these markets though?

        For instance, if I’m going to bet on the prospects of a 16 year old kid I’d be really interested in knowing if he came from a politically connected family or not.

      • Carl Shulman

        When there are strong incentives to determine identities they do often get determined, e.g. by online marketers. Insurance companies are quite good at finding proxies for health in order to recruit the most profitable customers.

        However, they try to keep this knowledge from being too obtrusive or annoying for the customers. So I would guess that if there were large markets of this type, large enough for hedge funds and such to get involved, the investors would frequently identify the students. However, they wouldn’t be eager to publish this information for the rest of the world to know.

      • gwern0

         If the strategy is “depend on security by obscurity”, that’s something very different from “let’s anonymize their data” and one shouldn’t lie to readers in saying the latter if one actually means the former…

      • Stephen Diamond

        Robin wrote, “hide clearly identifying info about students.” [emphasis added.] That does not imply “anonymize,” since that interpretation would render superfluous the word “clearly.”

        And who is the “one” lying (no less) to readers?

      • gwern0

         Stephen, I think it clearly does imply anonymizing strategies if you apply the principle of charity and don’t assume that Robin is proposing safeguards even more futile and useless than the regular failed data anonymizing techniques.

        And switching from a harm-elimination implication – nobody’s privacy will be violated because we hide identifying information – to a security by obscurity justification, as Robin did, is a clear shift. If Robin thinks the collateral damage will be sufficiently small, he should say so, as he apparently has. If he thinks the damage won’t exist because removing ‘clearly identifying’ information is sufficient, well, hopefully he knows better now.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        proposing safeguards even more futile and useless than the regular failed data anonymizing techniques.

        I think you must be confused about the purpose of removing clearly identifying data. The accuracy of the predictions doesn’t depend on it; actually, prediction would be better with all the data. It’s just a concession to privacy interests. The information provided might be limited to facts that aren’t vitally private, at some cost to prediction, but the only point is to limit the impact on privacy through obscurity.

        Robin has made no contentions compelling him to defend the claim that privacy would be completely safeguarded: the point is tangential. There are degrees of privacy, corresponding to the kind of information involved. Do you think who participated in what extracurricular activities, got what grades and test scores, etc. deserve the highest level of privacy protection (or if not, what?)

      • gwern0

         > Robin has made no contentions compelling him to defend the claim that privacy would be completely safeguarded: the point is tangential. There are degrees of privacy, corresponding to the kind of information involved. Do you think who participated in what extracurricular activities, got what grades and test scores, etc. deserve the highest level of privacy protection (or if not, what?)

        To the contrary, pointing out the futility of such privacy safeguards shifts the argument Robin must make: as stated, there are no downsides and so the question is merely ‘would it be an improvement?’ Now that we know there will be downsides and harms to participants, the question becomes, ‘would it be an improvement *and* would this improvement outweigh the additional harms? Will security-by-obscurity reduce the harm enough? Or can we count on people being able to weigh the risks appropriately?’ etc.

        A good discussion cannot happen if we are not even starting with the right questions.

      • Srdiamond

        To the contrary, pointing out the futility of such privacy safeguards shifts the argument Robin must make: as stated, there are no downsides and so the question is merely ‘would it be an improvement? … A good discussion cannot happen if we are not even starting with the right questions.

         
        In typical LW style, you narcissistically presume that your question–the one about which you have info with signaling value–is the “right” question. (That’s why discussions on LW always disintegrate into trivia.)

        Robin’s case is for eliminating racism–at least that’s what it’s addressed to. The burden would be on you to show there’s a countervailing privacy risk. Showing that anonymity isn’t possible doesn’t establish a prima facie case that there will be substantial harms. You must first show that the invasions of privacy have substantial personal import. Yet you declined to answer my question as to the value you accord to privacy in seemingly minor matters. As Robin said, directly on point, people access this kind of information every day without harm being noted.

        Perhaps there’s a case to be made that such publicity would result in substantially more cases of  identity theft. That’s about the only realistic consequence I can think of, but you haven’t made the case or even mentioned the possibility.

      • gwern0

         A mechanism cannot do anything, much less eliminate racism, if it is *too dangerous to be used*.

        And I narcissistically presume because I’m a member of some group? Yeah, I think we’re done here.

      • Srdiamond

        And I narcissistically presume because I’m a member of some group?

        No, because (among other things mentioned) you egoistically distort arguments when you can’t answer them–like this point: which was merely that your behavior is typical of LW.

        You simply haven’t shown prima facie that the method is “too dangerous to be used.” No doubt, you’ve shown it in your own grandiose imagination. 

      • http://twitter.com/peteyMIT Chris Peterson

        e: nvm

  • spindritf

    Concrete “signs of future success” are usually also clearly identifying information.

    • http://twitter.com/peteyMIT Chris Peterson

      Correct. And ironically, though they may appear to be identifiers in a vacuum, they may not always actually be signs of future success in actual life. Math prodigies can end up as Harvard profs or homeless depending on a wide variety of interpersonal and psychological characteristics which would not necessarily be articulated in an application, especially an application with a global readership. 

  • Stephen Diamond

    The problem comes mainly from granting discretion to admissions personnel to make subjective judgements. One solution is to just use objective features like test scores. But Unz worries about ambitious kids wasting their youth in mostly useless test prep. Also, application packets may contain other useful but harder to read clues about promising students. Unz instead prefers to admit “qualified” students at random, at least for most of the slots.

    It isn’t better to use a pure lottery among the qualified than to simply ignore some harder to read clues (unless ignoring these clues biases the results rather than merely increasing random error.) Let’s face it: there are more admissions officers who are Jews than Asians — “tribalism” and all.

    Tests, grades, and undergraduate school are pretty much the only bases for admission to law school, yet you only see a moderate amount of worthless test prep. Surely bright students are capable of learning that something is worthless–unless it’s not worthless for doing well on the test itself, in which case you need a better test.

    Break the power of the admissions office. It’s a vehicle for cronyism.

    • http://twitter.com/peteyMIT Chris Peterson

      “It isn’t better to use a pure lottery among the qualified than to simply ignore some harder to read clues (unless ignoring these clues biases the results rather than merely increasing random error.) ” 

      But what goes into “qualified”? This is a real question. Is it SAT scores and GPA? Do you include, say, interpersonal skills from an interview and support from teachers? If so, how do you measure and quantify such things in a meaningful way, as opposed to just sticking numbers on things to call them objective? How do you account “objectively” for, say, the broad differences not only on the SAT and ACT based on parental income, but more mysteriously the broad differences on SAT and ACT by geographic area once controlling for income? 

      If you could somehow gather these things into a number and put everyone above a certain baseline into a pool from which you drew numbers randomly then I would be OK with it. But that is a really hard – some would say impossible – problem to solve. 

      None of this is a defense of the odious admissions practices of many schools. There are serious, systemic problems with college admissions. But I don’t see how the lottery solution solves them. If anything, it makes them worse, but just delegates the blame to the teflon carapace of a nonhuman randomization algorithm. 

      “Tests, grades, and undergraduate school are pretty much the only bases for admission to law school, yet you only see a moderate amount of worthless test prep.” 

      As someone who applied and was admitted (though thankfully never attended) to law school, I would disagree; there is just as much worthless test prep for law schools, perhaps more to the extent that everyone knows law school admission depends almost entirely on the “objective” (scare quotes used advisedly) characteristics of GPA and LSAT (for the purposes of rising up the USN&WR rankings) and almost nothing else. 

  • Stephen Diamond

    Students might care about their chance of graduating, their income later, and some measure of satisfaction. Elite schools might care more about the chances of students being “successful” someday.

    Aren’t these predictions rather long-term by prediction-market standards?

    But instead of locking in particular limited metrics like standard test scores, where prep efforts are mostly wasted, this process would create an open competition to find signs of future success where efforts to gain them are more useful.

    But probably only to the extent that the tests aren’t good. When the measures of success are good enough, combining them with other information dilutes them; at least that’s the lesson I carry away from the competition between the pollsters and prediction markets.

    Of course, relying on tests carries a great stigma today (thanks to the early alliance between psychometricians and racists). But do prediction markets carry less or more? (I’m not sure.)

    I can’t resist the analogy between perfecting the polls and perfecting the tests when comparing the relative benefits of one or the other approach.

  • Robin Hanson

    I just added to the post.

  • Newerspeak

    > most academics are from these schools

    Citation Needed.

  • Ely Spears

    > “Different schools might use different measures of success, such as with different weights for achievement in sports, politics, business, arts, etc. Schools could admit the students with the best chance to succeed by their measure…”

    The result would be approximately the same distribution of students attending these universities as is seen now. These institutions are already more or less doing this. They make a random draw from a pool of applicants that satisfies their criteria for future promise, in terms of e.g. future political influence. The distribution over this pool of applicants is cap-weighted, so that students whose families have more wealth (in the forms that the school desires) effectively have more tickets in the raffle. At the graduate level, if a “regular person” can bring his or her own funding, then they can attend, giving just enough diamond-in-the-rough appeal to keep the places popular with the lower classes.

    Since these are private institutions, they prefer to control the private prediction market going on behind the scenes. They may lose some info for this, but then can be less criticized, can maintain an air of mystery and prestige that a more transparent process may threaten, and keep plausible deniability about trends.

    • Ely Spears

      I also suspect that, just as in most of the financial market, market participants won’t do a very good job. The effective R^2 achievable on predicting someone’s future success when they are 16, 17 (aside from large first order effects like political connections) is just too low.

      One place where this already happens is in the market for college football talent. Coaches routinely scout children as young as middle school for indicators of future athletic talent. Yes, the age at which players sign with a college has been pushed back a little (usually junior season), but there have been numerous articles written about the woefully bad correlation between ESPN and suchlike ratings of high school athletes and their materialized football success.

      • Ely Spears

        I suppose the more concise point is this: if you imagine a college graduate who fulfills all the success properties that Elite School X wants, and then you try to invert the process of going through college as applied to that hypothetical graduate, I think you wind up with a vast space of incoming students who would do just fine. That is, it is an ill posed inverse problem. However well markets can aggregate info, they can’t help you much with sensitive inverse problems.

        Further, if we suppose that nonlinearities and noise dominate, much like in financial markets where predictors do a poor job on average, then sticking to a simple linear model with large, bulky, obvious predictors should probably get you 99.999999% as far as a full blown prediction market would.

      • Robin Hanson

        Yes, best estimates may be noisy. I fail to see why this implies that markets can’t produce the best available estimates.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Markets in general can’t produce the best estimates (whether they are the best available estimates is highly contingent). One reason they can’t, it seems to me, is the nature of the biases that afflict markets. Take for example the bias in favor of the underdog (one reason the markets slightly overpredicted for Romney). This bias manifests by affecting the subjective value of an outcome rather than by affecting the perception of the evidence for the outcome. I don’t want to rest too much on this example, which I haven’t really explored, but it should be taken as a demonstration of how certain biases are likely to affect prediction markets. Anything that affects the perceived value of outcomes distinct from the monetary value will necessarily bias that market–but will not necessarily bias statisticians. This is because investment is motivated by subjective value rather than money, while the markets’ accuracy is predicated on the two being the same.

      • Ely Spears

         The extra info gained by having markets do it might be too little to be worth giving up things like plausible deniability. I guess the burden would be on everyone else (non elite school officials) to argue that it’s in the public interest to have markets do it.

        But essentially if you’re asking for a market to optimally allocate students to colleges according to the colleges’ different “success” criteria, and solving the problem of predicting that success criteria is very hard, then markets won’t do it much better than simplistic models, most participants will reach the low bar required to do just as well as peers in the market, and universities won’t see the point in basically just letting others choose more or less the same allocation they would have privately chosen.

        The one thing that it might do, though, is to force schools to announce their success criteria ahead of time. Then, if the school “success” function is something like, “Has good test scores, and is very politically connected to X, Y and Z” then everyone can see what it takes to be admitted. Because of tribal cheering sort of tendencies, most average people would feel this is an outrage (but my son is just as smart, why does it matter that he doesn’t know X, Y, and Z!?) and to preserve some sort of passable political correctness, the school might have to amend its stated admission criteria to look more like “morally acceptable” criteria.

        But I suspect then they’ll just push it back a layer and find a way to bribe market officials, bribe government officials who regulate the market, etc., to achieve the same goal. They want to have their cake and eat it too: admit based on specific criteria that the public would think is morally questionable, but then not ever have to admit that they are doing so.

      • Robin Hanson

        Yes, the whole issue is that many fear that colleges are actually using criteria, e.g., racist leanings, different from what they profess. The whole question is how to enforce some declared preferences, and allow case specific judgement using subtle clues, without letting admissions committees substitute some other criteria. If you want the current racist criteria to continue, you of course don’t need this solution.

      • Srdiamond

        Ely Spears,

        But essentially if you’re asking for a market to optimally allocate students to colleges according to the colleges’ different “success” criteria, and solving the problem of predicting that success criteria is very hard, then markets won’t do it much better than simplistic models, most participants will reach the low bar required to do just as well as peers in the market [emphasis added]

        Isn’t that exactly what markets are supposed to be good for: accurate prediction drives out inaccurate, even if initially only a few are doing it right? You don’t need markets to predict what’s easy.

         
         

  • http://twitter.com/peteyMIT Chris Peterson

    Hi Robin – 

    I was linked your article by a friend. College admissions is hard. There are a lot of things wrong with it. But I *strongly* disagree with you in a number of respects. 

    You cannot “hide identifying information” from a student’s admissions packet, at least not in anything that we would recognize as an admissions package today. The things that are the most compelling aspects of a student’s case are precisely those things which would allow them to be identified. 

    How would you anonymize high-caliber distinctive achievements (like, say, membership on the IMO Team, which has 4 members)? How would you anonymize the anecdotes which teachers would tell? When I worked as an admissions officer at MIT, I read an application from a girl who was a homeless undocumented immigrant. Do you think she, or her teachers, would have told that part of her story, and risk her being re-identified and deported? Or would she have stayed silent and hid critical parts of her application to protect herself? 

    Respectfully, arguing for this sort of radical transparency – for that is what this is – further privileges the privileged: those who have either uncontroversial achievements and experiences (which there would be no problem with revealing in the most public of ways), or those who, even if they have banal activities, at least do not have sensitive / subaltern ones. 

    You can say that “only a small portion of the official app” would be public. But which parts would that be? What would the effects of that selection be? 

    This is even before we consider the implications of wealthy students gaming prediction markets to make it appear as if the wisdom of the crowds were selecting them. Such distortions would almost inevitably exist: witness the broad differences between InTrade and BetFair this past election season on the Presidential Election, potentially the most visible and highly engaged prediction market there is. 

    For that matter, why do we think prediction markets are the right option here? What makes us think that “the crowds” are well-equipped to make these sorts of predictions? I’d have to see you unpack that a lot more before I even bought that the mechanics could be correct. 

    Again, there are *a lot* of problems with college admissions. No one is more cognizant of that than I. But respectfully, this seems like a solution imposing itself on a problem which it is ill-equipped to solve.

    • http://www.mccaughan.org.uk/g/ gjm

      Pedantic note: IMO teams have 6 members, not 4. (Strictly: up to 6. Most teams have 6 members.)

      • http://twitter.com/peteyMIT Chris Peterson

        Fair point. 4 active members, 2 alternates. 

      • http://www.facebook.com/profile.php?id=723726480 Christopher Chang

        No, 6 active members; alternate(s) are on top of that.  Unz also gave a (different) wrong number in his article, for some reason; not sure why there’s so much confusion.

    • Robin Hanson

      Your comment is dated after I put the addendum on my post, but your comment doesn’t seem to reflect it. Why would IMO membership need to be kept secret? But even if it did, what is wrong with my proposal to tell this to select trusted traders? Surely your homeless illegal immigrant case is a rare one. I’ve published a lot on manipulation in prediction markets; theory and data suggest that unless we put up artificial trading barriers (such as exist for Intrade), wealthy folks could not “game” to their advantage. And this is not at all about crowds vs. experts; it is the experts who would dominate the prediction market prices.

      • VV

         Maybe in a highly liquid market without significant trading barriers, such as a presidential election prediction market done right, manipulation would not be an issue, but how much liquidity would there be in a prediction market for a single student?

      • Robin Hanson

        Our theories about manipulation have been tested in small lab markets.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Our theories about manipulation have been tested in small lab markets.

        If you get enough entrants, which may be a fairly small number, the market will probably not be manipulable. That much should be conceded. But you have little reason to be smug about getting entrants. If your economic model is predicated on rational financial actors, after all, nobody will be predicted to enter. Gambling, after all, is financially irrational.

      • VV

         Robin Hanson:
        With hundreds of thousands of events to predict?

        We are talking about markets in the form:

        "Student number X will score >= Y0, >= Y1, < Y2"

        "Student number X+1 will score < Y0"

        As dmytryl and Chris Peterson pointed out, there are just too many options even without manipulation. If you take manipulation into account, it would be quite certain that pretty much the only one betting on student number X is student number X.

      • Robin Hanson

        We have these things called computers that let individuals each do millions of things daily. 

      • VV

         Robin Hanson: So it was not tested.

      • http://twitter.com/peteyMIT Chris Peterson

        “Why would IMO membership need to be kept secret?” 

        Because it’s a highly identifying characteristic, and you said you wanted applications anonymized. 

        If you mean “no one needs to be ashamed of IMO membership, so it’s ok to be identified”, then you are changing the terms of your own argument. 

      • http://juridicalcoherence.blogspot.com/ srdiamond

        you said you wanted applications anonymized.

        Quote? Actually, he suggested removing all clearly identifying information, not all identifying information. He never advocated “anonymization.” Your word, not his.

      • http://twitter.com/peteyMIT Chris Peterson

        srdiamond –

        Distinction without a difference. Else what’s the point? 

      • http://twitter.com/peteyMIT Chris Peterson

        srdiamond –

        That is a bad analogy and doesn’t make any sense. 

      • http://twitter.com/peteyMIT Chris Peterson

        I read your update, it just doesn’t make any sense. 

  • dmytryl

    There are a lot of students a year, meaning too little effort would go into trading on each; if there’s any trading at all, it’ll be done by a couple of automated tools that will be ‘racist’ in various bizarre and subtle ways. On second thought, I do not expect there would be enough participants doing enough trading at all. It’s one thing when it’s Obama vs Romney, it’s another thing when it’s student 14953, student 14954, and tens of thousands of others. Take N as the number of students and M as the number of trading decisions, and look at what trading on a typical student would look like: barely any trades at all, primarily statistical noise.
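To put rough numbers on this scale worry, here is a back-of-the-envelope sketch in Python; every figure below is a made-up assumption, chosen only to illustrate how thin per-student trading could get.

```python
# Back-of-the-envelope sketch of the thin-market worry raised above.
# All numbers are hypothetical assumptions for illustration only.

applicants_per_year = 2_000_000   # assumed size of the applicant pool
schools_per_applicant = 10        # assumed schools each applicant is matched against
outcomes_per_pair = 3             # e.g. graduation / income threshold / "success"

contracts = applicants_per_year * schools_per_applicant * outcomes_per_pair
total_trades = 5_000_000          # assumed total trades across the whole market

print(contracts)                  # 60,000,000 distinct contracts
print(total_trades / contracts)   # ~0.08 trades per contract, i.e. barely any
```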

    • http://juridicalcoherence.blogspot.com/ srdiamond

      It’s one thing when it’s Obama vs Romney,

      One problem is that what makes information socially valuable isn’t the same thing as what makes it interesting to bet (…I mean trade) on. Did society really need more information about the probability that Obama would win? (Even if the prediction markets had succeeded in providing more.)

      What makes something interesting to bet on? Sports are exciting, even though the information provided when sporting events are predicted is socially trivial or less. Seems to me, without considering it too much, people like to bet on two things mainly: chance events (roulette, slot machines) and contests. One could easily concoct a Hansonian evol. psych. explanation of why we enjoy bets on contests: it taps our drive to size up conflicting sides.

      The public good provided by prediction markets has to do with the social importance of the information, but the markets themselves satisfy the need for exciting gambling on contests, whose information is rarely socially valuable.

      • dmytryl

        Yea, well, basically, Romney and Obama are, to put it mildly, well known. Student 14953 is not well known. And even if you got the whole student prediction market to be as popular as Romney vs Obama trading (which ain’t going to happen), the number of transactions is still divided by tens of thousands.

        I recall there was a marathon contest on TopCoder for automatically predicting a user’s score based on profile data, inclusive of race, photo, and such. This naturally leads to very racist solutions. I guess their sponsors wanted automated racism in an opaque package. Even if you hide the race data and such, if you have a piece of writing by the student, you could end up having bizarrely “racist” solutions as well, based on the frequency of different words.

        Ultimately, what we need is not so much ‘accuracy’ as fairness. Make a good entrance exam and be done with this bullshit.

      • http://twitter.com/peteyMIT Chris Peterson

        “Ultimately, what we need is not so much ‘accuracy’ as fairness. Make a good entrance exam, ignore everything else, and be done with this bullshit. That’s how it is done elsewhere in the world.” 

      I would note that parts of “elsewhere in the world” are moving away from this model. Oxbridge and Korea are both transitioning towards including “subjective” characteristics for a variety of reasons, including that the set of prepared students is far larger than the subset you might like to constitute a given community (whether or not they are smart but jerks, whether or not they provide some perspective which might be valuable to your community, etc.).

      • dmytryl

        Chris Peterson:

        I don’t like that trend. It leads to racism, or filtering by political views, or other such issues. It is not demonstrably more predictive, yet it is demonstrably less fair. Subjective expert judgement is notoriously unreliable.

        If you have too many people that pass the preparedness exam, you can e.g. throw in 5 really difficult problems.

        That being said, private higher education is shifting from education to trading in yet another kind of valuable paper, monetizing the existing reputation and relying more on marketing to create/maintain reputation. This means that they do not want any solid barrier, such as imposed by difficult problems.

      • Robin Hanson

        As I mentioned elsewhere, students could be bundled into larger packages to trade. If even after that markets were thin, then student advocates would be tempted to manipulate their favored students, and then others would trade to profit from the manipulators. In general, influential markets cannot be thin.

      • dmytryl

        Like, 2 packages, so that it works just like Obama vs Romney did (not) work as expected?

        I think what you need to convince people of is that your ideas are not reliant on magical thinking. There will only be others trading to profit from the manipulators if the required analysis is cost effective. You seem to neglect that nobody will put more effort into earning a dollar here than it takes to earn a dollar elsewhere.

      • dmytryl

         Robin Hanson:

        > “bundled into larger packages to trade”

        They are, there’s a package called ‘asians’ and another called ‘whites’.

        > “If even after that markets were thin, then student advocates would be tempted to manipulate their favored students, and then others would trade to profit from the manipulators. In general, influential markets cannot be thin.”

        It’s unclear whether that would ever fully remove the initial bias and noise even when there’s way more money lying on the table.

        What are the objections to just testing the students? That it would waste their time practicing useless stuff just for the exam? Then change the exam.

        There’s no way exams and practice for them can be more wasteful than a market of any accuracy.

      • VV

         dmytryl:

        > “That being said, private higher education is shifting from education to trading in yet another kind of valuable paper, monetizing the existing reputation and relying more on marketing to create/maintain reputation. This means that they do not want any solid barrier, such as imposed by difficult problems.”

        I’m not American and I don’t live in the US, but it seems to me that this trend of theirs is not recent: How in the hell could G. W. Bush get a degree from Yale if he had to pass an actually hard exam?

      • VV

         Chris Peterson:
        These are the effects of political correctness gone mad; in 20–50 years we will all be owned by the Chinese…

      • http://juridicalcoherence.blogspot.com/ srdiamond

        What are the objections to just testing the students? That it would waste their time practicing useless stuff just for the exam? Then change the exam.

        Without defending it, I think the most logically cogent argument against relying only on tests is that some student characteristics create a public good by their effect on the community (or even on society) without contributing to their individual achievement. Probably “diversity” is the most popular candidate for a characteristic contributing to the public good, not just stereotype diversity but unusual life experience in general. But since the problem is with defining the criterion, it’s hard to see how prediction markets could measure these externalities that the admissions policies are supposed to reflect.

      • dmytryl

        srdiamond:

        Well, that would still be unfair in some way, likely with a negative effect, later down the road, even on those who are passed thanks to pro-diversity measures.

        I have the impression that the prestige of a university primarily matters in the less technical fields where there are not many criteria. If I am hiring for a technical job, I don’t care if the PhD is from MIT or from some backwater place in Bulgaria, for example; I’ll just read the PhD thesis itself. But if I am hiring for some sort of economic job, like in investment or whatever, then I don’t know what constitutes a good student; I have to rely on various really low-grade evidence such as the school they went to. That makes only a quite small difference in expected performance, but it’s the only distinction I can draw.

  • Furiouslysleepingidea

    What makes us think that the prediction markets wouldn’t be at least as racist or sexist as the individuals? Markets are completely value neutral. This means that if men have the best chance of being CEOs of big companies, the market will value men more highly. This is true even if the reasons for different success are biases among other decision makers. The markets will predict these biases, and could reinforce them. It may be a Keynesian Beauty Contest problem.

    • Mordatar

       Exactly. To see this, we can imagine someone setting up a high school for promising students and using this market idea to invest in kids who had the best chance of going to Ivy League universities. Then the market would be biased against Asians.

      • Robin Hanson

        If the ivies are biased against asians, the markets would predict that bias. That is not at all the same as being biased against them.

      • VV

         Markets could easily perpetuate and even reinforce these biases.

      • Mordatar

         Sure, markets themselves are value neutral. My point was that if we want Ivies to correct for a possible bias, then simply allowing elite schools to “use different measures of success, such as with different weights for achievement in sports, politics, business, arts, etc. [and] admit the students with the best chance to succeed by their measure” would not correct this bias. I would hope that elite universities would filter out bias and equalize opportunities based on talent and tenacity. Your proposed solution, even if it is very efficient from a university-as-corporation viewpoint, would not do that. Do you agree?

    • http://juridicalcoherence.blogspot.com/ srdiamond

      But all this shows is that prediction markets could easily be racist; not that they must be. It’s all in the choice of criterion.

    • rrb

      Yes. The markets are estimating chance of success, which means being from historically disadvantaged groups will hurt.

      But universities consider more than just chance of success when they accept students. They could keep on accepting disadvantaged students with lower success probabilities even when those success probabilities come from a market rather than their own judgment.

  • http://entitledtoanopinion.wordpress.com TGGP

    I agree with dmytryl that there will be too many students for traders to bet on them individually. If they bet on factors or combinations of factors instead, that might serve the purpose.

    Chris Peterson, signalling theory is replete with signals that have less than 100% predictive power. They don’t need to be perfectly predictive in order to be better than nothing. As it happens though, unstructured interviews have not been found to have any significant predictive power. Kahneman & Tversky, as well as Robyn Dawes, wrote about this decades ago. On a side note, I think you are the first person to use the word “subaltern” here. There are a couple different interpretations possible for that signal.

    • http://twitter.com/peteyMIT Chris Peterson

      Hi TGGP – 

      Yes, I’ve read K&T on this (though not Dawes), and I’m skeptical of interviews for that reason. However, in an admissions context, we don’t simply import interviews as data or as predictions per se. Instead, we look to see whether interviews confirm or oppose any trends which may seem to be emerging from other parts of the application. 

      It’s a tricky problem! How do you account for interpersonal skills / emotional intelligence / your chosen buzzword in a process like this? 

      I go back to the following graf by Kahneman: 


      “True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse’s mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals’ experience and on the quality and speed with which they discover their mistakes… Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously.”

      As I wrote to the friend yesterday who had shared this piece with me, one thing I think would benefit admissions officers is to be as involved in the undergraduate community as they can to get good feedback, not only in exceptional cases (Rhodes scholars vs dropouts), but on general trends too. At MIT I became a freshman academic advisor, something which helped me understand the people behind the cases and their experiences once we got here. 

      Now obviously these sorts of approaches are full of problems too! How do you contend with availability bias in these kinds of situations, for example? 

      But I think feedback from decisions helps in the way Kahneman describes. Not naively in just reproducing interviews, but complexly as one input among many (as you say for signals).

      • http://entitledtoanopinion.wordpress.com TGGP

        Dawes is the author of “Rational Choice in an Uncertain World”, which is similar to K&T in documenting how people fail to reason according to the precepts of Bayesian probability theory. Another book by him is “House of Cards”, which deals with his colleagues in clinical psychology. Many of the examples of irrationality from the former book actually come from there. Clinical psychologists & psychotherapists believe that their experience grants them insights into patients, but as Dawes documents virtually all research (done by academic psychologists who persist in explaining away these inconvenient results) says otherwise. My recollection of “Judgement Under Uncertainty” is that weathermen were examples of experts who make pretty good & well-calibrated predictions.

        Below you mention the use of “subjective” evidence to filter for traits like being a jerk. Unz’s evidence should cause us to question whether that is what admissions offices are examining evidence for. Instead they look like they are filtering out Asians and certain kinds of whites (the more rural, Christian & conservative variety). I’ve never heard it argued that Koreans or members of 4-H tend to be bigger jerks, though I suppose anything is possible. It’s well documented that colleges started de-emphasizing standardized tests and emphasizing more “holistic” criteria in order to reduce the number of Jews. It’s not a stretch to imagine that the public rhetoric about admissions masks something similar nowadays.

      • http://twitter.com/peteyMIT Chris Peterson

        That’s a very fair point, but I don’t think anyone has ever attributed the relative filtering out of the demographics you describe to their being jerks; rather, it is attributed to the relatively straightforward “critical mass” argument for diversity advanced in affirmative action cases in the Supreme Court.

        Selective admissions is a zero sum game. Every student is an individual, but some are more archetypical than others. I myself was an archetypical college applicant: a white, straight, male, with moderate leadership and good grades/scores from a solid suburban public school. I didn’t get into a lot of highly selective schools. Having sat on the other side of the desk, I now know why: because I was merely a good, not a great, applicant. It wasn’t because I was white / upper-middle class / etc. It was because I was, well, just like a lot of other good but unexceptional people. I realize that the cry of experts and insiders is always “ah, but you don’t understand!” And I understand that it is unconvincing. I also would say that several years of working in admissions at one of the most selective schools in the country completely changed my understanding of admissions by redefining my rubric of what exceptional meant. When you see college applicants as a vast pool, and not just the bright / fun kid down the street who you’ve known since she was a kid, it changes your entire viewpoint dramatically. 

    • http://twitter.com/peteyMIT Chris Peterson

      What I would also ask Robin is this:

      1) Why would we expect “crowds” to be good at these sorts of decisions? It is not a priori obvious to me why this is a problem crowds would be good at rather than bad at.

      2) To answer that question we need to more specifically describe what they would be provided with. “Some application data” is too general. Is it SAT scores? Is it GPA? Is it essays? Is it teacher recommendations? Is it high caliber achievements? Some mix of all of the above? The composition of what you pick won’t just affect whether crowds are good at predicting it, but will affect which students come out of the process.

      Until these two questions are directly answered, I don’t see why crowds would be a value-add. That’s why I wrote in another comment that it is a solution imposing itself on a problem: because, at least in this post, I don’t see arguments for 1) and 2), but more of a faith-based approach that prediction markets will work somehow because they do. 

      • Robin Hanson

        Again, prediction markets are just not about “crowds”. Consistently successful traders are experts. While each college could offer a standard form, the students could provide any other info they want, either in a public application, or privately to selected traders. 

      • dEMOCRATIC_cENTRALIST

        True to his own theories, Robin is more concerned with signaling by responding to high-status commenters instead of trying to advance the discussion intellectually.

    • Robin Hanson

      You’d offer all the individuals to trade on, but also offer lots of bundles of students to trade on, and ways to make new kinds of bundles.
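A minimal sketch of one way such bundles could work, assuming (purely for illustration) bundle contracts that pay out the average of their constituents’ payoffs; the student contracts, prices, and tolerance below are hypothetical, not part of the proposal.

```python
# Hedged sketch of the bundle idea: a bundle contract that pays the average
# of its constituents' payoffs should, absent arbitrage, trade near the
# average of the constituent prices. All contracts and prices here are
# hypothetical.

individual_prices = {   # price of a "student i succeeds" contract, in [0, 1]
    "student_001": 0.62,
    "student_002": 0.74,
    "student_003": 0.55,
    "student_004": 0.81,
}

def bundle_price(members):
    """Implied price of a bundle paying the average of its members' payoffs."""
    return sum(individual_prices[m] for m in members) / len(members)

def arbitrage_signal(members, quoted_bundle_price, tolerance=0.01):
    """If the quoted bundle price drifts from the implied one, the gap is a
    trading (and price-correcting) opportunity for informed traders."""
    gap = quoted_bundle_price - bundle_price(members)
    if gap > tolerance:
        return "sell bundle, buy constituents"
    if gap < -tolerance:
        return "buy bundle, sell constituents"
    return "no trade"

members = ["student_001", "student_002", "student_003", "student_004"]
print(bundle_price(members))            # 0.68
print(arbitrage_signal(members, 0.75))  # sell bundle, buy constituents
```

The intended effect, on this assumed design, is that traders with only aggregate information can trade the bundle, while any gap between bundle and constituent prices invites arbitrage that feeds information back into the individual student prices.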

  • dEMOCRATIC_cENTRALIST

    Robin would have greater success if he pretended to some degree of academic detachment toward prediction markets—perhaps even recognized one or two legitimate arguments against them—rather than looking like a frank partisan.

  • Srdiamond

    Added 8p: Regarding anonymity, an obvious solution is for the official application to be completely public. Usually only a small fraction of the relevant application info will be things that are better kept private. Regarding that info, the applicant can just reveal that extra private info to a few trusted folks who are willing to trade in these markets. Markets do not need all traders to know all relevant info to work well.

    As subsequent discussion has proven, you haven’t correctly understood the original objection concerning anonymity, which was that limiting disclosures won’t keep dedicated seekers from piercing the veil, while providing incentives to do so.

    • Srdiamond

      This being bad from a privacy perspective rather than from one concerning market accuracy.

    • Robin Hanson

      Most relevant info doesn’t have much of a need to be kept private. Having some people pierce the veil doesn’t mean they will then reveal that info to the other people one doesn’t want to find out. I might not want my cute neighbor to know I’m in the chess club, but I probably don’t care if some bank knows.

  • Map2223

    OCB: “against Asians and for Jews”? How about against non-Jewish white Americans?

    “As a consequence, Asians appear under-represented relative to Jews by a factor of seven, while non-Jewish whites are by far the most under-represented group of all, despite any benefits they might receive from athletic, legacy, or geographical distribution factors”

  • http://juridicalcoherence.blogspot.com/ srdiamond

    One obfuscation is the tacit idea that people “invest” in the prediction markets because they’re good at predicting and want to make money. This can’t be the reason under reasonable assumptions because if half of the traders gain, then half must lose (more or less). The expected value for the average trader is negative.

    The main incentive is the same as in any form of gambling: excitement from risk. Hence, people will bet on certain attractive (high-status?) questions, which correspond poorly to public significance; the success of students is a dreadfully boring bet. It’s specious to rely on the possibility of gain from expertise to drive the market when it’s a zero-sum game.

  • stevesailer

    Prestigious colleges appear to be doing an excellent job at staying prestigious. Why would they want to change?

    • http://juridicalcoherence.blogspot.com/ srdiamond

      Who ever said they should want to? Robin is saying that they should be made to: by the demands of supposedly antiracist academics.

      And asking why they don’t want to is revealing: they are often part of a cohesive advantaged group.

      But your point that the ultimate reason is contributions, and that Jews are generous that way, is probably the crux of the matter, which everybody including me missed. (I didn’t read the full article, however.)

  • stevesailer

    Harvard’s endowment is $31 billion. Caltech, which is the good guy in the article, has an endowment of something like $1.8 billion. Maybe Harvard knows what it is doing, but is just a little reticent about telling us what exactly it is doing. 

    For example, maybe Jews donate more money on average than Chinese, and that’s why Harvard discriminates in favor of Jews? 

    I’ve been told that elite colleges have studied the donation question very closely, but the results appear to have been kept secret. Perhaps an economist could get some colleges to release the results of their modeling of donation proclivities. 

  • stevesailer

    What exactly are prediction markets supposed to predict about college applicants? If the criterion is donations to the alma mater, maybe elite colleges are already doing an excellent job. 

  • dEMOCRATIC_cENTRALIST

    If Robin had the courage of his convictions he would write a book espousing prediction markets instead of one on EMs. Robin thinks prediction markets are a panacea for existing problems, but nobody cares whether there’s an EM future a century hence. If Robin could prove the conclusion rather than merely establish its plausibility, maybe he’d inspire a few thousand despairing suicides among idealists. But who cares about there being a 25% chance that EMs will take over the world in a century. Only a few nerds. The topic isn’t even high status!


  • Drewfus

    Schools that cannot choose their students have to compete on quality instead of status.

    Lessons on School Choice from Sweden
