Bad College Quality Incentives

A week ago I puzzled:

If a measure of medical quality does not perfectly correlate with quality, that seems to many a sufficient reason to prevent people from seeing or acting on the measure.  … We prevent hospitals from publishing mortality statistics, because such stats may sometimes be "misinterpreted." … "As corporations and other organizations mine electronic data to draw conclusions about them … doctors could begin to ‘cherry pick’ healthier patients."

Many commenters defended such fears.  Toby Ord:

A systematically biased estimate of quality … is feared to create damaging incentives in the medical profession (cherry picking patients, not doing work on the unmeasured aspects etc). … doing more harm than good … Restricting the data to the government or supervisory bodies that understand its weaknesses may be the best solution.

Yet every industry with imperfect quality measures suffers similarly.  For example, consider the bad incentives from these imperfect college quality measures:

  • Student SAT scores:  Prefer to admit students with high scores, rather than the students who would benefit most from your school.
  • Student GRE Scores:  Teach to the GRE test, neglecting other topics.
  • Graduation Rates:  Fail too few students, and give too many A grades.
  • Campus visits:  Invest too much in pretty grounds, and in visible events while students visit.
  • Research prestige:  Invest too much in prestigious professors who neglect teaching for research.
  • Sports success: Invest too much in winning teams that gain attention. 

To avoid these problems should we have the government assign students to colleges, or should we prevent schools from having researchers or sports teams, allowing campus visits, or publicizing test scores, graduation rates, or research success?  If not, what makes medicine so different? 

  • Stuart Armstrong

    To avoid these problems should we have the government assign students to colleges, or should we prevent schools from having researchers or sports teams, allowing campus visits, or publicizing test scores, graduation rates, or research success? If not, what makes medicine so different?

    I’d personally be warm to the (impossible) idea of making research results hard for undergraduate students to get hold of. Research success seems to my mind a reliable indicator that the university is under-funding undergraduate teaching (though most people interpret it the other way).

    But the whole issue is a practical one. Biased, noisy measures create perverse incentives. They also provide benefits. And forbidding biased measures imposes costs. It’s just a question of figuring out what the net result is.

    As for why people are more willing to accept this type of argument in medicine than in university, there may be a variety of reasons. The obvious one is the status quo bias. The relationship between doctor and patient is often far more personal than between student and university – so interfering with one via noisy measures may be perceived as much more of an invasion than with the other. Doctors are trusted more than teachers. Many doctors have autonomy that most teachers lack, so will resent interference.

    I’ll stop there, because these reasons are mostly just vague rationalisations (more politely, “research suggestions”). (This was brought home to me when I was about to write something about doctors versus teachers, realised I’d inverted the characteristics of the two, and found the argument still sounded plausible.)

  • Robin, I first want to make clear that I am merely unsure about whether ratings for doctors and hospitals should be made public. I lean to the skeptical end of the unsure spectrum, but am quite amenable to evidence. I have a similar position regarding some of the above measurements. The sports success is just silly, and thankfully nowhere near as present in Australia and the UK (and perhaps anywhere outside the US). If that could be factored out, it would be great. I am no fan of the GRE system (particularly the general test), and think it is better if graduation rates are not made public. For the other factors, I think their benefits probably outweigh their costs. I take these things on a case by case basis and I see the case for these, but not for medical results (or not yet…).

    In looking for any more general differences, I noticed a large disanalogy between students selecting schools and patients selecting doctors (or hospitals, practices etc), which is that there is a major advantage to certain students pairing with certain schools. There are two aspects to this. Firstly, there are advantages of better students going to better schools. This both narrows the distribution of ability within each school, making it easier to teach to the students’ level, and makes the most of the intellectual resources of the next generation. Secondly, there is the matter of the students wanting a school which they (personally) prefer. For example, a school with a good computer science department, or a computer science department with a more theoretical approach, or a supportive environment, or a geeky environment or whatever. The students want the school that is best for them and the schools want the top students. This tends to lead to a stratification which makes the most of our intellectual capital and moves students into the schools that suit them personally. These are worthy goals and the publication of (most of) the above data directly helps.

    In contrast, for medicine everyone wants a good doctor, and the ideal matching between patients and doctors/hospitals is much more complicated. Perhaps the best thing would be to have the best doctors treat the most ill patients and the worst doctors the least ill, but perhaps this would mean they all fail to cure their patients and we should do it the other way around, or in some more complex pattern. In the absence of a coherent strategy for optimizing patients’ places in the system (and incentives to make the doctors/hospitals want the right patients), the analogy fails. There is a (rough) ideal distribution of students into universities and a way to (roughly) achieve it, but there is no such thing for patients/doctors/hospitals. Sure, there is still an incentive to make doctors/hospitals better (just as there is for university enrollments), but the direct public benefit is missing.

  • Toby, the fact that you are uncertain about how to best match patients and doctors does not seem a good reason to prevent them from trying. It seems to me you are too quick to assume better students should go to better schools, but that richer or sicker patients should not go to better doctors.

    In any case, there is a good reason to let people evaluate quality even if there are no sorting benefits from matching who goes with who: better quality evaluation creates better incentives to produce quality.

  • Being a PhD student who will probably never leave the university, I’m always interested in how to assess program quality and make comparisons between programs. Thus far, for me, the most illuminating indication of undergraduate-education quality has been the reports of graduate students at my school who did their undergraduate degrees elsewhere. For example, I got an undergraduate degree in computer science, and when I run into other people with undergraduate computer science degrees from different institutions, I find that I often learned a lot more math than they did, while they learned more about software engineering, reflecting a difference in program goals. Similar stories, I expect, appear across most programs. The trouble is that it’s hard to get this kind of information before you enter college, because universities in general focus more on advertising clear indications of status than on describing the objectives of degree programs and the social setting in each one.

    As I always say to my students, algebra is the same whether you’re learning it in Cambridge, MA or right here. What matters is your desire to learn.