Rah Second Opinions

What many people like about being religious is being part of a community built on the idea of being and doing good. They can meet and discuss how to be and do good, share practical tips and sometimes just do good together. That sure can feel great.

What many people dislike about other people being religious is their habit of presuming that if you aren’t religious in their way, you aren’t being or doing good; you are bad. Religious people often prefer similarly religious people to be their teachers, grocers, leaders, etc., because they can’t trust bad people in such roles and shouldn’t support bad people even if they can.

Many non- or otherly-religious folks say they have nothing against doing good, but say it is laughable to presume that people who are religious in your way are actually much better than others. Most religions do little to actually sort people by how much good they are or do; they mostly sort by loyalty, conformity, impressiveness, and local social status. Religions could sort people better if they spent lots of time together doing things most everyone agrees are clearly good, like healing the sick, but that is pretty rare.

My ex-co-blogger Eliezer Yudkowsky left this blog in 2009 to start the Less Wrong (LW) blog, which helped seed a growing community that self-consciously sees itself as “rationalists”. They meet online and in person and often discuss how to be more rational. Which is a fine goal. I’ve supported it by listing recent LW posts on the sidebar of this blog, and I’ve attended many LW-based social events. Some high status members of that community now offer (not-free) workshops where they teach you how to be more rational.

As with religion, the main problem comes when a self-described rationalist community starts to believe that they are in fact much more rational than outsiders, and thus should greatly prefer the beliefs of insiders. This happens today with academia, which generally refuses to consider non-academic beliefs as evidence of anything, and with political ideologies that consider themselves more “reality-based.”

Similarly, I’ve noticed a substantial tendency of folks in this rationalist community to prefer beliefs by insiders, even when those claims are quite contrarian to most outsiders. Some say that since most outsiders are quite irrational, one should mostly ignore their beliefs. They also sometimes refer to the fact that high status insiders tend to have high IQ and math skills. Now I happen to share some of their contrarian beliefs, but disagree with many others, so overall I think they are too willing to believe their insiders, at least for the goal of belief accuracy. For the more common goal of acceptance within a community, their beliefs can be more reasonable.

Some high status members of this rationalist community (Peter Thiel, Jaan Tallinn, Zvi Mowshowitz, Michael Vassar) have a new medical startup, MetaMed, endorsed by other high status members (Eliezer Yudkowsky, Michael Anissimov). (See also this coverage.) You tell MetaMed your troubles, give them your data, and pay them $5000 or $200/hour for their time (I can’t find any prices at the MetaMed site, but those are numbers mentioned in other coverage). MetaMed will then do “personalized research,” summarize the literature, and give you “actionable options.” Presumably they somehow try to stop just short of the line of recommending treatments, as only doctors are legally allowed to do that. But I’d guess you’ll be able to read between the lines.

Of course that is usually what you pay doctors to do – study your charts and recommend treatment. And if you didn’t trust your main doctor, you could always get a second or third opinion. So why use MetaMed instead? The main evidence offered at the MetaMed site is data on high rates of misdiagnosis and mistreatment in medicine. Which of course means there is room for improvement via second and third opinions. But it doesn’t tell you that MetaMed is a relatively cost effective source of such opinions.

I wrote this post because I know several of the folks involved, and they asked me to write a post endorsing MetaMed. And I can certainly endorse the general idea of second opinions; the high rate and cost of errors justifies a lot more checking and caution. But on what basis could I recommend MetaMed in particular? Many in the rationalist community think you should trust MetaMed more because they are inside the community, and therefore should be presumed to be more rational.

But any effect of this sort is likely to be pretty weak, I think. Whatever are the social pressures that tend to corrupt the usual medical authorities, I expect them to eventually corrupt successful new medical firms as well. I can’t see that being self-avowed rationalists offers much protection there. Even so, I would very much like to see a much stronger habit of getting second opinions, and a much larger industry to support that habit. I thus hope that MetaMed succeeds.

Added 8:45p 23Mar: Sarah Constantin, MetaMed VP of research, replies to this post at Marginal Revolution (!):

Investigating your condition in depth, in the context of your entire medical history, genetic data, and personal priorities, may well turn up opportunities to do better than the standardized medical guidelines which at best maximize average health outcomes. That’s basically MetaMed’s raison d’etre. … Fundamentally the thing we claim to be able to do is give you finer-grained information than your doctor will. …

Robin Hanson seems to be implying that MetaMed is claiming to be useful only because we’re members of the “rationalist community.” This isn’t true. We think we’re useful because we give our clients personalized attention, because we’re more statistically literate than most doctors, because we don’t have some of the misaligned incentives that the medical profession does (e.g. we don’t have an incentive to talk up the benefits of procedures/drugs that are reimbursable by insurance), because we have a variety of experts and specialists on our team, etc. (more)

I was asking why pick MetaMed over ordinary medical specialists. I expect most doctors will disagree strongly with the claims that they don’t give patients personalized attention, only improve average health outcomes, and don’t offer the finest-grain advice available. But they could be wrong, and it would be great if MetaMed could show that somehow. On misaligned incentives, a reason to ask a different ordinary doctor for a second opinion is exactly that they can know they won’t get paid for any treatments they recommend.

  • http://www.facebook.com/marc.geddes.108 Marc Geddes

Let’s get some more second opinions on the overall belief-set of this so-called ‘rationalist’ community (‘Less Wrong’)! I will list 6 positions (*) that together seem to form the unique world-view of ‘Less Wrong’ insiders, giving my view on each.

    *Many Worlds Interpretation of QM

    Less Wrong says: TRUE
    I say: TRUE

Summary: The arguments for MWI are strong. If MWI were not true and we adhere to a realist picture, this would violate fundamental principles of physics such as locality. The balance of evidence indicates that LW got this one right.

    *Cryonics a good bet

    Less Wrong says: TRUE
    I say: TRUE

Summary: Whilst there is no known technology capable of reviving cryo patients as of yet, and freezing does severe damage at the cellular level, technologies consistent with the laws of physics that allow cellular repair (e.g. nano-tech) are known to be possible. Further, it would only take a small probability of success (greater than 1%) to make cryo a good bet: if you value revival at much more than 100 times the price of preservation, even a 1% chance of success gives positive expected value.

    *Artificial super-intelligence possible

    Less Wrong says: TRUE
    I say: TRUE

    Summary: All the evidence indicates that mind is computational and amenable to the scientific method. Further, there is no evidence to suggest that humans represent an intelligence ceiling.

*Significant chance of hard take-off Singularity by a Singleton

    Less Wrong says: TRUE
    I say: TRUE

Summary: This one’s highly debatable, as shown by the Hanson-Yudkowsky FOOM debate. But on balance, I find the hard take-off side more convincing. Anything a society of minds can do, a single powerful agent should be able to do. Further, feedback loops could make self-improvement very fast on a human time-scale.

    *Bayes as the fundamental model of rationality

    Less Wrong says: TRUE
    I say: FALSE

    Summary: Whilst Bayesian methods appear to be a powerful statistical technique, as of yet, they have failed to produce general intelligence equal to that of a 2-year old. Further, endless strings of statistical correlations alone don’t appear to lead to any great insights… that needs *concepts* or *categorization* to interpret the *meaning* of the correlations, and categorization appears to be prior to induction. There is no clear evidence that Bayes can fully handle categorization yet, thus, there is no clear reason for regarding Bayesian methods as the ultimate ‘theory of rationality’.

*Intelligence and morality orthogonal (no universal terminal values)

    Less Wrong says: TRUE
    I say: FALSE

    Summary: Highly debatable. The question is about properties of minds in general, but the only case study we have of general intelligence so far is a data-set of one (humans). There is no clear empirical basis for a strong position one way or the other yet. The idea that UAI is a big threat is based on supposition only.

Of the 6 starred (*) positions that make LW unique, how many would be likely to receive majority support from other respected scientists in the relevant fields? What are the views of other readers of ‘Overcoming Bias’ on each of these 6 positions? Personally, I only believe 4 of the 6.

    • Fadeway

      Let’s do this (I consider myself more attached to LW than to OB).

      *Many Worlds Interpretation of QM

      Less Wrong says: TRUE
      I say: –

      Summary: I don’t have enough information and refuse to make a judgment either way.

      *Cryonics a good bet

      Less Wrong says: TRUE

      (Do they really? Here are some stats from the 2012 survey that show a heavy split, though still more optimistic than the rest of the world:
      Have you signed up for cryonics?
      No, don’t want to: 275, 23.2%
      No, still thinking: 472, 39.9%
      No, procrastinating: 178, 15%
      No, unavailable: 120, 10.1%
      Yes, signed up: 44, 3.7%
      Never thought about it: 46, 3.9%
      No answer: 48, 4.1%)

      I say: TRUE

      *Artificial super-intelligence possible

      Less Wrong says: TRUE
      I say: TRUE

*Significant chance of hard take-off Singularity by a Singleton

      Less Wrong says: TRUE
      I say: TRUE

      *Bayes as the fundamental model of rationality

      Less Wrong says: TRUE
I say: –

      Summary: Again, I haven’t worked on creating decision theories and I have only been familiar with Bayes for about a year, so I don’t consider myself informed enough to have an opinion.

*Intelligence and morality orthogonal (no universal terminal values)

      Less Wrong says: TRUE
      I say: TRUE

      (I wrote summaries but then deleted them. I want this to be more a statistical sample than anything.)

      • Jake Taubner

        Why would you consider yourself “attached” to either community?

      • Fadeway

Since the claim was that LWers hold certain beliefs, I presented myself as a random LWer and stated my positions on those topics. With my “Let’s do this” remark I was hoping that people would state whether they are a member of LW or just a passing-by rationalist/Hanson reader and post their beliefs (to get some anecdotal comparison).

    • IRRATIONAL HULK

      *Many Worlds Interpretation of QM

      Less Wrong says: TRUE
      Hulk say: FALSE

      Summary: When Hulk say “HULK SMASH!!” he just mean smash, not “both smash and not smash”.

      *Cryonics a good bet

      Less Wrong says: TRUE
      Hulk say: FALSE

      Summary: If Hulk dipped in liquid nitrogen, Hulk might one day fall over and SMASH.

      *Artificial super-intelligence possible

      Less Wrong says: TRUE
      Hulk say: TRUE

      Summary: Hulk proof that artificial super-strength possible, so why not artificial super-intelligence?

*Significant chance of hard take-off Singularity by a Singleton

      Less Wrong says: TRUE
      Hulk say: HUH?

      Summary: Hulk not know what these long words mean. Hulk not 100% convinced they have a meaning.

      *Bayes as the fundamental model of rationality

      Less Wrong says: TRUE
      Hulk say: FALSE

      Summary: Hulk believe that Bayes not fundamental. Bayes just application of deeper principles.

      Intelligence and morality orthogonal (no universal terminal values)

      Less Wrong says: TRUE
      Hulk say: FALSE

      Summary: Everybody loves when Hulk say “HULK SMASH!!”, therefore Hulk deduce there is at least one universal terminal value.

    • Tom Breton

      *Many Worlds Interpretation of QM

      Less Wrong says: TRUE

      I say: Not a true/false question, but coercing it to a question about preferred interpretation: TRUE

      Summary: Other interpretations are not Occam-satisfactory. They include needless parts such as wavefunction collapse, apparently only in order to avoid human mental or psychological discomfort.

      *Cryonics a good bet

      Less Wrong says: TRUE

      I say: TRUE

      Summary: My yes/no is pretty conventional. I’d add (summarizing a blog post):
      a) Save more “perspectives”, even if you can’t envision how future restorers will use them. DNA, EEG under known conditions, other brain tracing as it becomes available.
      b) Right now, let’s debug the save-restore-emulate process. Suggestion: use plastinated fruit flies.

      *Artificial super-intelligence possible

      Less Wrong says: TRUE

      I say: TRUE

      Summary: Quoting Minsky, “We’re machines and we think”

*Significant chance of hard take-off Singularity by a Singleton

      Less Wrong says: TRUE

      I say: FALSE as I understand “hard take-off”.

      Summary: The intelligence/time curve will still be held back by physical constraints such as the time it takes to build a new fab plant, well past the point in time when machines are doing the intellectual heavy lifting of machine design.

      *Bayes as the fundamental model of rationality

      Less Wrong says: TRUE

      I say: FALSE

      Summary: Just one tool in the rationality toolbox.

      Intelligence and morality orthogonal (no universal terminal values)

      Less Wrong says: TRUE

      I say: FALSE

Summary: Again summarizing a blog post too briefly, what I call the obligational stance pre-requires the intentional stance (a la Dennett), in the same way that the intentional stance pre-requires the design stance.

    • Scott Messick

      Here’s mine.

      *Many Worlds Interpretation of QM

      Less Wrong says: TRUE
      I say: DON’T KNOW

      *Cryonics a good bet

      Less Wrong says: TRUE
      I say: TRUE

      *Artificial super-intelligence possible

      Less Wrong says: TRUE
      I say: TRUE

*Significant chance of hard take-off Singularity by a Singleton

      Less Wrong says: TRUE
      I say: FALSE

      Actually, I do think there is a chance of this, but very small by LW standards. I’d still call it “significant” because of the danger level if true.

      *Bayes as the fundamental model of rationality

      Less Wrong says: TRUE
      I say: FALSE

      I’m not really that sure what this was supposed to mean.

      *Intelligence and morality orthogonal (no universal terminal values)

      Less Wrong says: TRUE
      I say: TRUE

      I can easily imagine beings who are very intelligent but lack both selfless values and the ability to make credible precommitments.

      I guess I got 3.5/6.

    • VV

According to the surveys, LW opinions on these topics might be much more diverse than you seem to imply.
Mind the false-consensus effect, which might be particularly relevant for LW due to the social dynamics and forum rules that incentivize groupthink.

  • Tim Tyler

    > Many in the rationalist community think you should trust MetaMed more because they are inside the community, and therefore should be presumed to be more rational.

    Here, the supplied reference doesn’t really support the claim. Instead they are claiming that how MetaMed does could be used to measure the scale (and sign) of any effect due to this association. This seems reasonable, though it would only be one data point – and most start-ups fail.

    • http://www.facebook.com/stephen.r.diamond Stephen R. Diamond

      But the giveaway is how they dwell on physicians’ alleged Bayesian incompetence–where, of course, “rationalists” know different.
      (It isn’t so clear that the now famous reasoning errors of physicians in experimental contexts affect the way physicians reason with actual patients, and there’s a simple corrective: think in terms of frequencies.)

      • VV

        Physicians are not calibrated to perform maximum a posteriori probability estimation. And with good reason, since the cost of false negatives and false positives is generally different.
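A toy illustration of that point, with hypothetical numbers (nothing here is from MetaMed or the post): when missing a disease is far costlier than over-treating, the expected-cost decision can contradict the most probable diagnosis.

```python
# Toy sketch, hypothetical numbers: cost-sensitive decision vs. MAP.
p_disease = 0.10           # posterior probability the patient is ill
cost_false_negative = 50   # cost of leaving a real case untreated
cost_false_positive = 1    # cost of treating a healthy patient

# MAP rule: act on the most probable hypothesis ("no disease" here).
map_says_treat = p_disease > 0.5                       # False

# Expected-cost rule: treat when waiting is costlier in expectation.
cost_if_treat = (1 - p_disease) * cost_false_positive  # 0.9
cost_if_wait = p_disease * cost_false_negative         # 5.0
cost_says_treat = cost_if_wait > cost_if_treat         # True

print(map_says_treat, cost_says_treat)
```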


  • IMASBA

    “This happens today with academia, which generally refuses to consider non-academic beliefs as evidence of anything, and with political ideologies that consider themselves more “reality-based.” ”

Don’t you have to do that when you have multiple conflicting opinions? Only one (or none) can be right and it doesn’t make sense to pursue them all (too costly or impossible because of conflicts), so at some point you are going to have to attach more value to one of the opinions. It’s unlikely all politicians are equally wrong on all subjects (because their beliefs are so different); there will always be one who is the least worst.

“I wrote this post because I know several of the folks involved, and they asked me to write a post endorsing MetaMed. And I can certainly endorse the general idea of second opinions; the high rate and cost of errors justifies a lot more checking and caution. But on what basis could I recommend MetaMed in particular?”

Good question; them asking you to endorse them based on nothing does indeed signal a tribal mentality. Then again, it’s possible doctors are guilty of this behavior too, only recommending people they know who use similar methods (though two wrongs don’t make a right: MetaMed claims to be more rational, so they should back that up).

    • dmytryl

      > Don’t you have to do that when you have multiple conflicting opinions?

Yeah. Keep in mind that the mainstream opinion is a result of compositing the opinions already. Adding a “second opinion” doesn’t improve it at all; it just lets the result be influenced by the choice of the second opinion. The 2nd through 10,000th opinions might, if the opinions were chosen in a very unbiased manner, or might not, because the maximum extremeness of an opinion grows faster than any computable function of its length, a very nasty distribution which doesn’t play well with any sampling error.

  • dmytryl

“As with religion, the main problem comes when a self-described rationalist community starts to believe that they are in fact much more rational than outsiders, and thus should greatly prefer the beliefs of insiders.”

Rather than somehow gradually sliding in such a direction, from the very beginning the sole purpose of it all had been to get people to give their money to him (Yudkowsky), through a “charity” which he founded. Something as ridiculous as giving money to those kids (given their backgrounds) for saving the world from technological dangers requires a whole host of other crazy beliefs; a lot of work has been put into the identification and promotion of such beliefs.

  • Cambias

    They’re almost certainly right that their advice may be actionable. They just need to look up what that means.


  • Arthur

I think the service is different from medical services in general. It’s not only a second opinion.

They will review the literature on your condition/symptoms. One of the problems with medicine is that doctors don’t know/understand the literature, and won’t read it just because you are one of their patients.

I’m not sure how much the doctors in this price range know/understand/search the literature relevant to their patients. But if they don’t, the service is likely useful. Also it seems to me that they give you more information than doctors usually do. That’s probably good also.

It seems to me that the way the thing is set up makes the incentives really different from doctors’.

I think it can fall into some vices, but it seems a really better setup than medical services in general.

Maybe you should try to look at it in a different frame.

  • JeffLonsdale

    The alternative isn’t just seeing other doctors – there are also online communities of people with similar diseases who discuss what they’ve tried. Many of these people are motivated to do the work of looking through published literature and they share those results for free. I know one of my friends found a diet that got his Crohn’s disease under control from such a group.

  • http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

Most doctors spend very little time per patient. They are not in the business of spending hours researching every single one of their cases.

A person whose job it is to spend 40 hours per week reading and summarizing the academic literature is bound to give different recommendations than the average doctor.

    Plurality of approaches is good.

    MetaMed will then do “personalized research,” summarize the literature, and give you “actionable options.” Presumably they somehow try to stop just short of the line of recommending treatments, as only doctors are legally allowed to do that.

I would hope that they have at least one doctor on their staff who can sign the reports in a way where they can actually include medical advice.

    • IMASBA

      “Most doctors spend very little time per patient. They are not in the business of spending hours to research every single of their cases.”

But they went through many years of medical training where they did exactly that. Also, I can tell you the answer to 1+1 right here on the spot, a team of analysts spending hours on the same problem won’t give you a better answer than I would right here, right now. Finally there’s the huge cost of the whole MetaMed procedure: you could have spent that money on a nice vacation or early retirement instead of having people hunt for that 1/1000 chance they spot something important that your doctor and the second opinion didn’t. If I could choose to retire two years early or live one year longer (no doubt in pain and bad health because a medical procedure is necessary to give me that extra year) I’d choose retiring early.

      All in all MetaMed needs to back their claims up with something more concrete than “we think we are the best”, especially for what they’re charging.

      • John

        “Also, I can tell you the answer to 1+1 right here on the spot, a team of analysts spending hours on the same problem won’t give you a better answer than I would right here, right now.”

        If you were suffering from sleep deprivation because you’d spent eighty-plus hours in the past week (and the week before that, almost every week for the past ten years) doing similar math problems, and my life depended on this one being answered correctly but your career didn’t, I might just want to get that team of analysts involved anyway.

      • http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

        I can tell you the answer to 1+1 right here on the spot, a team of analysts spending hours on the same problem won’t give you a better answer than I would right here, right now.

If medical problems were comparable to solving 1+1, then you would be right. In the real world problems are harder. Doctors often don’t make optimal treatment decisions, as is described in the LessWrong post about MetaMed.

That training isn’t up to date; it is likely more than a decade old. In the meantime there’s new knowledge.

        if I could choose to retire two years early or live one year longer (no doubt in pain and bad health because a medical procedure is necessary to give me that extra year) I’d choose retiring early.

If you can retire two years earlier for $5000, then you aren’t in the target audience for MetaMed.

        If you however have a yearly income of $100k things are different.

  • http://www.facebook.com/stephen.r.diamond Stephen R. Diamond

    Is “Rah second opinions”–even third opinions–compatible with halving national medical expenditures?
    On another strand, Robin seems disappointed that the “rationalists” turned out to be an ordinary bunch of money grubbers. Robin is awfully idealistic for a cynic.

    • Kyle

      A Cynic is an Idealist who learns from his mistakes

    • http://twitter.com/srdiamond srdiamond

Robin is being exceedingly generous (and probably inconsistent, too) in his backhanded endorsement. Medical diagnosis is (ideally) the domain of expert diagnosticians. If you have $5000 to spend and have a serious illness, you should spend it on expert diagnosticians, not on literature reviewers.

      The probable inconsistency is that Robin has slammed the quality of medical research; now, he wishes well an enterprise based on research and nothing but research.

      This seems like a way of milking the desperately ill, looking for a miracle. Does milking the gullible sound familiar?

      • dmytryl

Well, basically the enterprise is to sell an equivalent of what any one of us could have done – googling the illness while not having any domain-specific expertise, reading papers – for $200/hour.

The idea is ostensibly that ‘rationality’ renders one able to see some unpicked fruit. But that’s not really the idea – if they believed this they could try to work on a better MRI or a vaccine for something or the like – focus effort on some one thing and produce extremely impressive returns, first.
These folks know to stay clear of the areas where the fruit can be put to the test. Which makes it so much more destructive than honest overconfidence.

  • Robert Koslover

I checked their “About” and the “Team” page. Maybe I’m just getting old, but several of them simply look too young to hold the professional titles that they bear, whether it’s VP, chief of this or that, medical adviser, etc. If I really need someone wiser than my friendly middle-aged MD to help rescue me from dying of a terrible illness, should I expect to find that critical knowledge inside the heads of people who are in their late 20s and early 30s? I suppose it’s possible. But it sure doesn’t work that way in most (yes, I accept not all) other scientific fields.

    • http://www.facebook.com/peterdjones63 Peter David Jones

20-something VPs and presidents are common in Silicon Valley. But that’s a young person’s game. One favourite LessWrong theme is that anything can be run as a start-up. Another is that raw IQ is all you need.

      • MFawful

        “another is that raw IQ is all you need.”

        Hardly anyone on Less Wrong would endorse this claim. The entire point of pursuing rationality is that it is teachable, not fixed like IQ.

      • VV

        So you can reformulate that as raw “rationality” is all you need, and you can buy it from CFAR for a price.

      • http://www.facebook.com/peterdjones63 Peter David Jones

        OK. Then EY is unqualified for everything.

    • Matthew Graves

“should I expect to find that critical knowledge inside the heads of people who are in their late 20s and early 30s?”

      This depends. If what will help you is recent, then yes, the younger the person the better, because of superior flexibility. If what will help you is old, then the older the person the better, because of superior experience.

  • Robert Koslover

    Here’s another thought: Maybe these folks should give IBM a call about a possible collaboration? See http://www.research.ibm.com/articles/watson_medical_school.shtml

  • John Maxwell IV

“Whatever are the social pressures that tend to corrupt the usual medical authorities, I expect them to eventually corrupt successful new medical firms as well.”

Potentially relevant: Alyssa Vance, MetaMed’s president, has deleted several comments I’ve written on her blog that disagreed with her posts. As far as I can tell, the comments were polite, intelligent, and relevant. Despite having my email address, Alyssa didn’t contact me about them; she just deleted them silently.

    Not suppressing dissent on your blog really doesn’t seem like a very high bar for someone who claims to have a commitment to truth and rationality.

  • Nancy Lebovitz

    ” Which of course means there is room for improvement via second and third opinions. But it doesn’t tell you that MetaMed is a relatively cost effective source of such opinions.”

    I don’t think the cost effectiveness of MetaMed can be determined until people have been using it for a while. It’s at least plausible that better research will turn up some good stuff.

    I’m concerned that their business plan seems to depend on, not just that there’s low hanging fruit in medical research (which I find plausible), but that the low hanging fruit is so evenly distributed that there will be worthwhile answers to a high proportion of the questions which come in.

    • Robin Hanson

      It is plausible that their research will turn up good stuff. I very much hope they do.

      • dmytryl

Well, the prior for exceptional performance is low to begin with (by definition), and then things like the lack of moderately exceptional performance marks such as track record, degrees, etc. bring it even lower. If there is low hanging fruit everywhere for rationalists, then why can’t they pick some testable/verifiable fruit? The answer is, of course, that there isn’t much such fruit that they can actually pick, but it’s fall, and there are a lot of colourful leaves on the ground that you can pass off as fruit as long as no one’s looking too closely.

        There’s fruit for the picking that’s unethical to pick, though – there’s a lot of people who are at 1 in 1000 by gullibility, leaving a lot of room for further specialization (e.g. 1 in 1000 by gullibility plus specific beliefs).

    • http://profiles.google.com/externalmonologue Matthew Fuller

A large proportion of podiatrists’ customers have plantar fasciitis. The best treatment method is in fact doctor-free: plantar fascia stretches, hard shoe inserts to support the fascia, new shoes that offer lots of support like running shoes, and night splints to prevent the foot from being in a relaxed position at night.

This treatment modality worked for me, and many doctors have told me that patients don’t want to hear the news that they can’t have injections of steroids to ease the pain or expensive surgery. Doctors are here to serve patients’ needs, not give accurate info.

This service is badly needed, but not at 5k. Maybe 100 dollars.

  • Russ Andersson

I’m just curious as to why, if the LW community is so rational, and rationality is so core to “winning”, they haven’t inherited the earth yet, or “won” as they say … I guess they are so focused on saving humanity from the looming singularity (the self-described largest single contribution to mankind) that they don’t have much time for basic things, like developing simple social skills and getting along with normal folks.

I have learned a lot from LW, there are several great posts there, but the general dialog there seems to be long on intellectual masturbation, and short on practical application … more bun than hamburger. It’s changing a little bit now for the better … but that brand of rationality is not something I find particularly effective in real world scenarios … I find OB to be better at providing practical lessons that a dumb-s&^t like me can understand and use.

    • Stephen R. Diamond

If “winning” is the criterion, I don’t know that they’re doing so badly: Thiel made his billion, Yudkowsky has the cushiest job ever landed by a high-school dropout. Need I go on?

      • Russ Andersson

Thanks for bringing that to my attention Stephen. Of course Peter Thiel and Yudkowsky are representative of the average member of the LW site … Thiel posts there often and is a central member of that community … he attends meetups regularly … not. Thiel’s recent presentation at SXSW was filled with LW teachings … not …
I personally wouldn’t confuse LW with Singularity University … they are related but not centrally so. SU is highly credible, Less Wrong less so, but who cares anyway. The point of this post was that MetaMed attempting to rely on its LW credentials may be misguided … maybe they should rather have focused on the fact that they have a big market opportunity, a great idea, and a platinum backer … I wish them well, I wish LW well too, but the general perception of your average person is that they are crackpots.

      • dmytryl

        > but the general perception of your average person is that they are crackpots.

That’s because they are crackpots. They tick every checkbox: techno woo, better-than-science, none of the physicists have seen what we’ve seen, PhDs are counter-useful, alternative medicine, paid self-improvement courses, paranoia such as “other AI researchers are going to kill us all, so let’s think how we might slow down Moore’s law”, bizarre things such as the basilisk… and they don’t have the usual positive stuff that even crackpots tend to have, such as technical stuff that definitely does work, or anything that definitely requires competence in a technical field (in fact, failed attempts – before becoming a full-blown crackpot, Yudkowsky tried to make some kind of trading software, a programming language, and an actual AI that does something). Typical case of a zillion weak reasons why they might be super duper awesome and not a single strong reason why they’re even particularly good.

      • VV

Well, Thiel made his bucks well before LW existed, and according to the financial data that was released, Yudkowsky’s income seems to be about average or even slightly below average for his reference class (ideologues/cult leaders).

    • wumpa

      LW definitely attracts a lot of people who are into mental masturbation… and I suspect those are largely the same people who post (or at least comment) the most often. Basically, if one spends all their time posting on LW, it tends to mean they’re not spending that time on “winning”. Unless, of course, their idea of winning involves posting on LW all day.

OTOH, it also has readers who are too busy “winning” to bother posting — people who are actively applying the more useful LW ideas instead of just talking about them.

      Personally, I enjoy reading the site (and thinking about how to use the ideas) when I have time, but I’m usually too busy enjoying my dream job and my love life, and actively taking steps to improve my life and the lives of people around me. I’m not a goddess and I’m not saving the world, but I think I’m doing pretty well for a mere mortal.

      Though I’m not very active on the site, I’ve at least attended a couple rationality meet-ups. I found the meetings to be interesting, entertaining, thought-provoking, and completely irrelevant to anything in my life. A fun and different way to spend an evening once in a while.

    • http://www.facebook.com/marc.geddes.108 Marc Geddes

At the end of the day, the only thing that counts is real-world results. We must ask… are they #winning? Certainly, the sort of people posting to LW tend to think they’re hot-shots, and seem to want to spend lots of time and energy on internet forums trying to convince us of this.

The question: is there clear evidence of #winning in this community? Remember: Objective results are the only valid criterion.

      Hackers Maxim #11
      ‘The one true alpha move is #winning’

      • http://twitter.com/srdiamond srdiamond

        At the end of the day, the only thing that counts is real-world results.

        This is the common premise held by LW and most of its critics, but here I must plead Contrarianism. If you thought the Sequences were really brilliant, would you really say they don’t count because your opinion lacks objectivity?

        If I believed Yudkowsky posted that volume of pure brilliance, I wouldn’t withhold my admiration merely because others didn’t recognize him–if that’s what’s meant by “objective results.” I think Yudkowsky is entitled to be judged by the Sequences, which I happen to hold in low regard.

    • http://profiles.google.com/katsaris Aris Katsaris

      Russ, can you quantify this “inherited the earth”? What does it mean, becoming millionaires? Becoming noteworthy enough to be listed in Wikipedia?

      And once you’ve quantified this criterion, can you show me an online forum whose participants have achieved it more?

      • Russ Andersson

Aris: this is an excellent question. Thank you. The “inherited the earth” comment was a flippant, sarcastic statement, more about my perception that the LWers tend to (somewhat arrogantly) presume they have ALL the answers.

They strike me as being rather judgmental and somewhat elitist, focusing on IQ/intelligence as an important measure in the community. This might be a result of my own insecurities, not sure … it’s possible it’s just me.

The brand of “rationality” they promote tends to be beyond the grasp of your average person, who doesn’t have high math or analytical ability. It is not very practical, focusing more on increasing theoretical knowledge than actual practice. Knowing is not the same as doing. The net result is that their brand of rationality is only applicable to very smart folks like them. In aggregate, this rules out the vast majority of the population seeking simple guidance on being more reasoned thinkers; it tends to be an echo chamber for smart people sharing unpractical theories on rationality. Then there are several extreme views there, like HAL in the basement taking over the earth, that strike me as more than a little over the top.

In terms of providing an objective criterion, I would say the number of people whose philosophy and lives have been positively impacted would be a good measure of a community’s success.

From what I understand, this group at Penn has an online forum achieving excellent results in a related arena; 2 million or so people have taken courses with them and they seem to be doing well.

        http://www.authentichappiness.sas.upenn.edu/Default.aspx

I have nothing against LW per se, I just find the arrogance and high-end intellectual focus of the dialog there to be a little off-putting. For its audience it provides a lot of value, has millions of page views etc., I am sure, but for your average person, it doesn’t seem to provide many practical answers.

      • http://profiles.google.com/katsaris Aris Katsaris

        “In terms of providing an objective criterion, I would say the number the people whose philosophy and lives have been positively impacted would be a good measure of a communities success.”

Well, listing some small-scale positive impact from my own life — I’ve gained some thousands of dollars because I got interested in and invested in bitcoins after I saw a discussion thereof in the LessWrong community back when bitcoin was going for between $1 and $2 (current price is around $70). I’ve also gotten great enjoyment out of reading or watching fiction I saw recommended in LessWrong, e.g. Greg Egan, the movie “Limitless”, the Madoka Magica anime, all of which became all-time favourites. I decided not to pursue judicially a case against a neighbor, where the time I’d have to waste would be much higher than any potential monetary benefit, while explicitly thinking about the “learning how to lose” passage from Harry Potter & the Methods of Rationality.

        Does all the above count as positive impact that LessWrong has had in my life? It’s all positive yes, but the problem is determining that it’s impact that *LessWrong* has had, and that I wouldn’t have reached the same conclusion and the same actions even without its influence. And either way it’s obviously not large enough in scale to qualify for the “inherited the earth” label.

      • Russ Andersson

That’s really cool Aris. I’m pleased you got value from the experience and am sure many community members there have had similarly positive experiences. It’s not quite my cup of tea but that is irrelevant … De gustibus non est disputandum … in matters of taste, there can be no disputes … so I have no major issue with LessWrong … and wish them well … the fundamental point I was attempting to make was that MetaMed has a lot more going for it than merely being associated with LW.

  • Douglas Knight

Whatever are the social pressures that tend to corrupt the usual medical authorities, I expect them to eventually corrupt successful new medical firms as well.

If you have concrete theories about what’s going on, you can change it. For example, treatment deviates from the medical literature because doctors don’t read the literature. They don’t read it because they charge fee-for-service, not for time. If you pay them differently, you should expect different results. This is a different kind of second opinion than asking another doctor for the same procedure.

Or do you want to go back a step further and say that the same social pressures will force a shift back to fee-for-service? Maybe the median medical care can’t shift to paying for time, but patients who want to pay by the hour can tell which providers do that.

    • Robin Hanson

      I very much doubt paying docs by the hour is enough to change things. Kaiser pays their docs by the hour, for example.

      • IMASBA

Nothing will ever be “enough” if your sole goal is life extension. Life extension can soak up all resources if you allow it to, especially with all the diminishing returns you’ll eventually run into. Yes, people might live 78 years instead of 77 years if a lot more money went to health care in any given country, but the quality of those 77 years would be markedly less, because health care would take up so many resources that you couldn’t enjoy life anymore; life has to be worth living. Besides, medical care is the least efficient way to improve your health; living healthily is much less costly.

  • The Tetronian

Hanson specializes in contrarian criticism, doesn’t support anything except prediction markets, and his comment section is now populated mostly by people who hate LessWrong and anything connected with it. He is congenitally incapable of anything but back-handed compliments. And they asked him for an endorsement? Asking Hanson for an ‘endorsement’ is a stronger criticism than anything he has said here.

    • http://www.facebook.com/profile.php?id=635291179 Stephen Bachelor

      You’re tempting me to launch a startup which lets people pay a fee to describe their symptoms on a website, and let people bet on effective treatments.

  • Lord

I am not sure that not believing something merely because it is believed by a group you aren’t part of isn’t worse. That is different from giving it consideration and then accepting or rejecting it, and from belonging to a group most of whose opinions you may not accept but have no basis to reject either.

  • http://profiles.google.com/externalmonologue Matthew Fuller

    It would be better if we all called our doctors and demanded they empirically analyze their own med decisions. Only patients can make docs better by demanding better evidence rather than treatments, but patients trust too much and docs exploit this trust.


  • manwhoisthursday

    There already is a site where you can ask medical questions of an academic who specializes in a certain area: MedHelp. They charge 35 bucks a question or something.

  • http://jacobageller.com/ Jacob A. Geller

Most important sentence in this piece: “Whatever are the social pressures that tend to corrupt the usual medical authorities, I expect them to eventually corrupt successful new medical firms as well.”

  • rrb

“Similarly, I’ve noticed a substantial tendency of folks in this rationalist community to prefer beliefs by insiders, even when those claims are quite contrarian to most outsiders. Some say that since most outsiders are quite irrational, one should mostly ignore their beliefs. They also sometimes refer to the fact that high status insiders tend to have high IQ and math skills.”

    Why did you leave out the most obvious reason to trust insider beliefs, which is that LessWrong insiders read and think and practice to develop general skills for acquiring true beliefs, and most outsiders don’t?

    • VV

      There is no evidence that the LW folks are particularly effective at that.

On the contrary, claims of exceptional skill and understanding without corresponding evidence of exceptional achievements are evidence of incompetence.

      http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

    • VV

      (to expand my previous comment)

One of the most important things you should do if you want to practice actual rational thinking is to realize that, as a human, you can be affected by cognitive biases.

      Being a member of a community of people who proclaim themselves “aspiring rationalists” and love to talk about how rational they are doesn’t make you immune from biases.

Excessive trust in the claims of one’s group and of authority is one of the strongest biases in humans. Thus, when you come upon a claim made by high status people of your beloved community, you should exercise a lot of caution in analyzing it before you accept it.

      • http://www.facebook.com/peterdjones63 Peter David Jones

Mr Supersmart does not seem to have done anything at all to forestall groupthink biases in setting up his various organisations, and seems happy to enjoy the benefits, in terms of agreement, ego-boosting and otherwise. Even though the same thing has already happened at least once before, in the form of Objectivism.

    • http://www.facebook.com/peterdjones63 Peter David Jones

Here’s a reason: you can judge someone’s competence and rationality by factors like what they say and their specific qualifications, rather than by a sweeping insider-versus-outsider distinction.

Here’s another reason: LessWrong doesn’t attract random visitors. One thing they are good at is signalling a high level of intellectual/academic accomplishment.

    • dmytryl

      which is that LessWrong insiders read and think and practice to develop general skills for acquiring true beliefs

Yeah, as evidenced by them making various important discoveries… ohh wait, that’s not how I’m supposed to check it.

Let me quote directly from one of the pages linked from their About page: “We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs.” Yeah. I should totally start evaluating the quality of rendering by comparing the way my software propagates the light to the rules of quantum electrodynamics. Ohh. It is all wrong – it doesn’t use the rules anywhere in the source. Sarcasm fails me. These folks really haven’t got the slightest clue about anything, least of all about rationality.

      • http://www.facebook.com/marc.geddes.108 Marc Geddes

Actually, it looks like they’re finally managing to come up with a proof of an idea I suggested on SL4 over 9 years ago. They finally think that it might be possible to overcome Löb/Gödel limitations of math systems by assigning probabilities to math statements, just as I suggested back in 2004 on SL4. Look:

        http://acceleratingfuture.com/sl4/archive/0408/9705.html

        Progress, I suppose.

      • dmytryl

Are you talking of something cousin_it or Wei Dai are doing? They aren’t stupid, but they are quite silly. I haven’t been looking at it since the ‘chicken with the universe’ substitute (?, not sure of timing) for what I said was necessary – substantially black-boxing the AI itself during the evaluation of counterfactuals. They get invested in some idea (e.g. that one can reflectively evaluate counterfactuals), and then they seem literally unable to read correctly any argument which collides with it. The worse part is that they have very undue certainty that they’re doing some sort of ground-breaking work in decision theory – this despite Wei Dai lamenting that he can’t get anyone interested.

      • dmytryl

Ahh, I found it… it is kind of interesting. More interesting is that this entire “omfg we’re all going to die” thing relies on the AI being reflective and able to genuinely self-improve as part of the pursuit of another goal (rather than having to be fed parts of its own source and a few requirements).

      • rrb

Nobody’s saying you’re supposed to use probability theory to make all your decisions. The point is just to get the same results, most of the time, as you would if you *were* using probability theory.

In your metaphor – you do compare the results of the renderer to actual images. It doesn’t mean your renderings have to be computed the same way that the universe computes the images, but the results have to line up.

You have to have a standard of quality before you can improve something. I’d say that this method of comparing to probability theory is one of the rationality-enhancing things you can learn from LessWrong, that makes you better able to give medical advice.
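A minimal sketch of the comparison described here, with hypothetical numbers: the check is whether the *result* of an intuitive update lands near the posterior that Bayes’ theorem prescribes, not whether the update was computed with the formula.

```python
# Toy sketch, hypothetical numbers: scoring an intuitive update
# against the posterior given by Bayes' theorem.
prior = 0.01           # P(disease): base rate
sensitivity = 0.90     # P(positive test | disease)
false_positive = 0.05  # P(positive test | no disease)

# Posterior after a positive test, by Bayes' theorem.
posterior = (sensitivity * prior) / (
    sensitivity * prior + false_positive * (1 - prior)
)
print(f"P(disease | positive) = {posterior:.3f}")  # ~0.154

# A gut answer of "the test is 90% accurate, so ~0.9" misses the
# base rate by a factor of six; that gap, not the mental process,
# is what the comparison measures.
```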

  • http://www.facebook.com/marc.geddes.108 Marc Geddes

    ‘Shut up and calculate, but first donate’
    -Fake EY Quotation #1

    ‘Donate, or Clippy is gonna get medieval on your arses’
    -Fake EY Quotation #2

    ‘I’m starting another polyamorous fund raiser tomorrow. Donate and join the fun’
    -Fake EY Quotation #3


  • http://overcomingbias.com RobinHanson

    I just added to this post.

    • dmytryl

Very interesting replies on that blog post (once you ignore what looks like obvious “fake it before we make it” crud), especially the one about the shrinking number of “employees” and the guy who lost his medical license for selling prescription drugs.
Since the benefits of their offering are essentially non-testable, unless they do something very stupid (which is quite likely), they will linger like a chronic disease, wasting people’s money, offering incompetent medical advice, and so on, while being cheered by their patients as the best deal ever.

      • VV

I had also noticed that their employees page used to list a web designer (do they really need one permanently on their staff?) and Will “Eden”, listed as a researcher, a former economist who used to work at a “life coaching” company.

  • http://www.facebook.com/CronoDAS Douglas Scheinberg

My thought: If you have a weird disease and are seeing someone who is a specialist in that disease, then, in theory, it’s part of their job to keep up with the latest research on that particular disease; I see no particular reason why MetaMed would be able to give better advice to a Multiple Sclerosis patient than the doctors at a Multiple Sclerosis center. On the other hand, someone who isn’t a specialist will mostly know the conventional wisdom from when they were last trained on that topic, so for problems on which that conventional wisdom isn’t good enough, a team of people who are good at finding the right literature to read could indeed give better advice.

    • http://twitter.com/srdiamond srdiamond

Perhaps, but that team should be selected by qualified doctors rather than desperate patients. (Problem is, from Mr. Moneybags’ perspective, doctors wouldn’t review literature for $200 per hour.)

      And if the patient has $5000 to burn, why not spend it on a highly qualified generalist diagnostician? (Even the very rich would probably be better served by seeking yet another expert diagnostician than opting for lay research, which will likely add only confusion. — “rationalists” notwithstanding, more information isn’t always better, even if you know enough to discount it.)

  • http://www.thepolemicalmedic.com/ Thrasymachus

    I’m a med student, and I’m pretty sceptical that MetaMed is going to beat ‘bog standard’ medicine often enough to be worth $5000 a throw.

    Their selling pitch seems to be:
    1) Unlike ‘one size fits all’ guidelines, we can get personalized recommendations that generally do better.
    2) We can beat the average doctor on rationality, time spent researching the condition, etc.

    I don’t buy (1), because it implies a really optimistic view of medical science where we are aggregating or averaging out all this fine-grained data we have when making guidelines. In fact, much of our practice is based on very shaky evidence, and medical science is more often at the stage of “does this drug actually do anything?” (Or, within the last 30 years: “Oh, this drug we thought was helping actually increases mortality, oops.”)

So I don’t see much opportunity for “Well, guidelines generally say you should do X, but because of A, B, and C in your case, Y would be even better for you!” sorts of evidence which MetaMed can exploit frequently enough. Anyway, given the different bits of evidence that strict guideline-following seems to outperform experts making exceptions, better medicine seems to be generally *less* personalized.

For (2), although MetaMed can outcompete the average doctor, I suspect people like NICE (in the NHS) or UpToDate, which have groups of domain experts who regularly comb the data to get recommendations for a given condition, will likely outperform it (MetaMed doesn’t have a coven of each speciality). I’m sceptical the average NICE technology appraisal is far wrong in a way rationalists can improve upon, but I look forward to being proven wrong.

Ultimately, the crucible of MetaMed will be the evidence of its own efficacy. If it can generate a steady stream of cases where it gets better outcomes when making contra-guideline recommendations, it should do well. If not, then the cynicism of other commenters will be justified.

    • IMASBA

      “I’m a med student, and I’m pretty sceptical that MetaMed is going to beat ‘bog standard’ medicine often enough to be worth $5000 a throw.”

      You’re thinking of the wrong target demographic here: it’s not about the average result for all people, it’s about the result for people who have more money than they could ever spend. MetaMed would definitely be a failure if implemented in a national health system on a large scale because it would draw funds away from more cost-effective things. But if you are a multi-millionaire who can choose to spend as much on private health care as you wish then it might be worth it since $5000 is pocket change to you.

      • VV

Or you may be a middle-class person, desperate for your life, willing to sell your house. (The $5,000 figure seems to be the fee for the minimum service; I suppose they planned their business so that the typical service might be perhaps $50,000.)

      • http://twitter.com/srdiamond srdiamond

I think their clients are likely to be the same “rationalists” they milk for the Machine Intelligence whatever. They advertise “For serious medical conditions, you need direct access to the world’s best researchers.” Who, other than fellow “rationalists,” will believe this motley crew are among the world’s best researchers? This makes Robin’s point spot-on: they base their credibility on group membership and ideological affinity. What else could it be based on?

        But while Robin scores high on inference to the best explanation, he flubs on the application of insights. The poor quality of medical research, which he has demonstrated, implies that you need trained diagnosticians to distinguish valid results. These perpetual graduate students may gain traction by finding obscure but worthless research.

  • Contemplationist

My sentiment is that people here are underestimating the amount of low-hanging fruit in medicine. My dad is a nephrologist and researcher and concurs with my opinion that most doctors don’t have much clue about statistics or non-pharmacological interventions. ER surgery is a miracle, infectious disease management is fantastic, but treatment of chronic conditions is woeful.

    A small example is that a ketogenic diet may (emphasis on may) benefit some cancer patients, especially if they are undergoing chemo. A simple awareness of this fact has the potential to raise cancer survival rates.

  • http://morningtableaux.blogspot.com/ Benquo

“I expect most doctors will disagree strongly with the claims that they don’t give patients personalized attention, only improve average health outcomes, and don’t offer the finest-grain advice available.”

    I’m not sure this is quite a direct response. Obviously an individual doctor gives patients personalized attention, and the only way to improve average outcomes is to improve outcomes for some specific patients. The point is that the advice, best practices, and recommended treatments the doctors rely on are all about the average outcome.

I don’t know if that’s actually true, or whether, even if it is true, there is some low hanging fruit, but that’s the steelmanned case.

  • Pingback: HWR and More « Healthcare Economist

  • Pingback: Overcoming Bias : In favour of finite meta

  • http://www.facebook.com/maxim.khesin Maxim Khesin

I don’t think it’s fair to characterize MetaMed’s service as a “second opinion”. It’s an entirely different proposition: for one, no doctor will spend days mining a lot of literature as a “second opinion” – doctors don’t specialize in that kind of activity, researchers do. It remains to be seen whether this is a good value proposition – time will tell – but I think this is quite different from a “second opinion”.

  • dmytryl

    I didn’t initially notice that these guys compare CAT scans to Hiroshima survivors:

    One million children every year have had unnecessary CT scans, which risks exposing them to radiation levels up to those experienced by survivors of Hiroshima and Nagasaki. [xlv]

    http://lesswrong.com/r/discussion/lw/h64/how_to_evaluate_data/8qj4

The rest of their “Vital facts and statistics” likewise consists of misinterpretations, Chinese-whispers games of citation chains, miscitations, misleading numbers (98,000 instead of “about 100,000”), and the like.
