Experts Agree

Mental health diagnoses are evaluated in part by the consistency with which professionals assign them. Turns out, there is often little agreement between the diagnoses different folks assign to the same patient:

The DSM-5 revision has been intensely controversial, with critics … charging that poorly drafted changes would lead to millions more people being given unnecessary and risky drugs. The field trials used a statistic called kappa. This measures the consensus between different doctors assessing the same patient, with 1 corresponding to perfect diagnostic agreement, and 0 meaning concordance could just be due to chance. In January, leaders of the DSM-5 revision announced that kappas as low as 0.2 should be considered “acceptable”.

“Most researchers agree that 0.2 to 0.4 is really not in the acceptable range,” says Dayle Jones of the University of Central Florida in Orlando, who is tracking DSM-5 for the American Counseling Association.

One proposed diagnosis failed to reach even this standard. Some patients turning up in doctors’ offices are both depressed and anxious, so mixed anxiety/depression was tested as a new category: the kappa for adults was less than 0.01.

Attenuated psychosis syndrome, meanwhile, was intended to catch young people in the early stages of schizophrenia and other psychotic disorders. While field trials gave a kappa of 0.46, the variability was so large that Darrel Regier, APA’s head of research, told the meeting that the result was “uninterpretable”. Both disorders are now headed for DSM-5’s appendix …

The low kappas recorded for major depressive disorder and generalised anxiety disorder – 0.32 and 0.2 respectively in the adult trials – raise serious questions. (more)
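
To make the kappa statistic concrete, here is a minimal Python sketch of Cohen's kappa for two raters. The diagnosis labels and patients below are hypothetical, and the actual DSM-5 field trials used a more elaborate test–retest design; this just shows how chance-corrected agreement is computed.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected if
    each rater labeled independently at their own base rates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of cases given the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: sum over labels of the product of each
    # rater's marginal frequency for that label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(freq_a) | set(freq_b))

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two psychiatrists diagnose six patients.
a = ["MDD", "GAD", "MDD", "none", "GAD", "MDD"]
b = ["MDD", "MDD", "MDD", "none", "GAD", "none"]
print(cohens_kappa(a, b))  # ~0.48: well above chance, far from perfect
```

On this scale, 0 means the raters agree no more often than two raters guessing at their own base rates would, and 1 means they always agree, which is why a kappa of 0.2 is such a weak standard.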

Similarly low levels of agreement are found in academic peer review – referees judging papers submitted to journals, for example, rarely agree on whether a paper should be accepted. Yet not only are academics and mental health professionals still considered experts; expert agreement remains one of the main ways the public judges who is an expert.

In the public eye, experts on X are people who tend to agree when outsiders ask them questions about X, such as the meaning of special words or phrases about X, or who is an expert on X. After all, this is pretty much the only concrete data outsiders have to go on. It helps if these experts also do some things that outsiders see as impressive, but this usually isn’t necessary to be considered an expert.

I have two observations:

  1. On the one hand, this is a depressingly low standard. For example, even if priests can agree on which statements count as heresy, we wouldn’t necessarily want to empower them to torture such heretics. So the fact that psychiatrists can agree on how to diagnose certain types of mental illness doesn’t by itself mean we should empower them to detain such patients against their will. Yet in practice mere agreement among experts is the main criterion the public uses to decide which experts to empower.
  2. On the other hand, given how important expert agreement is to expert reputation, it might seem surprising that experts don’t try harder to find simple ways to agree with each other. For example, mental health experts could coordinate on hair color, weight, or vocabulary as simple ways to make sure they assign the same labels to the same patients. Yes, they’d have to do this on the sly, and overtly pretend to be using other criteria. But how hard could that be for homo hypocritus to do? Apparently, the fact that they agree enough on who is an expert gives them some slack to disagree about some other things. Their pride and beliefs about the basis of their expertise prevent them from coordinating too consciously on simple ways to agree, such as diagnosing mental illness based on hair color.
  • Phil

    Does this hold true for economists? My perception is that the public often refuses to accept expert economic consensus (for instance, in favoring rent control laws).

    Maybe another condition is that people will only recognize experts in fields where they don’t have a strong viewpoint themselves?

  • JoachimSchipper

    I’m not sure that humans can be *that* hypocritical.

    Note that experts do agree, and recognize experts, and to some extent signal status, by having a common vocabulary. Some jargon is undeniably useful, but it’s hard to argue that many fields don’t use more than strictly necessary.

  • Douglas Knight

    Diagnostic consistency is simply not noticed by laymen. Certainly this test won’t be noticed. If common diagnoses like depression aren’t consistent, laymen might notice and might care, but I don’t think this is a good test of the practical consequences of DSM-5. For rarer conditions, the new examiner will usually know the old diagnosis and will certainly know if the patient wants a new diagnosis.

    Yes, one reason the DSM exists is to establish agreement on what to say when asked by laymen, but that doesn’t mean it has an impact on diagnosis. Probably part of the problem with the recent test is that the psychiatrists aren’t really aware of how far their practice is from DSM-IV, so they don’t agree on how much a change means they should shift.

    The Last Psychiatrist has a great discussion of what borderline often means. So, yes, they do manage to coordinate the way Robin suggests. But I’m not sure that they’re doing it in order to look like experts.

    Borderline: … 1. Very attractive female, who comes for problems the psychiatrist considers ordinary: men, work/school, problems with parents, etc. It is diagnosed here most often by female psychiatrists, and carries the connotation: “Grow up.” …

  • Sam Dangremond

    “expert agreement remains one of the main ways the public uses to judge who is an expert.”
    Yes, but the opposite of this is also true: we use experts’ reputation to give legitimacy to decisions that are actually just subject to high randomness. 

  • Johnicholas

    Is there a knob that one could turn to adjust the difficulty of coordinating? 

    For example, suppose that there is a prisoner’s-dilemma-style reward structure for pointing out that a group of experts is coordinating on hair color or whatever. Then you could tune the knob until the experts are barely coordinated, and use their borderline, residual success at overcoming the difficulty of coordinating as evidence of some underlying reality to their claims.

  • daedalus2u

    Real experts do agree. Posers and fake experts may agree among themselves, but they do not agree with the true experts. For many of them, the kappa is less than zero: their correlation with actual experts and actual reality is worse than random, i.e., there is anti-correlation.

    Real experts know the limits of their expertise and very rarely make the kinds of extreme mistakes that fake experts and posers make. Real experts can explain the basis for their decisions. Real expertise does not rely on gut feelings.

    A good example of fake experts who are (unfortunately) in charge of things is the “economists” who are telling politicians that tax cuts for the wealthy and austerity for the poor are exactly what is needed to stimulate the economy, and that cutting the deficit is more important than taking advantage of historically low interest rates to repair infrastructure that needs repair and will never be cheaper to repair than right now, while there is plenty of spare capacity to do it.

    The fake experts who are AGW deniers are another example, as are the fake experts who are evolution deniers, the fake experts who are HIV deniers, and all the CAM practitioners who claim to heal people with homeopathy, reiki, purging of toxins, and chelation.

  • Michael Wengler

    The fact that psychological experts do disagree openly seems to me to be a feature, not a bug. Questioning this agreement as something homo hypocritus could cheat around feels like a non-productive way to go here. There are plenty of cognitive biases that pshrinks actually do have; why flog the one they don’t in a short post about them?

  • cas

    psychiatry sucks but it’s hella betta than having folks fire speedballs
