From ’97 to ’99 I was a RWJF Health Policy Scholar (at UC Berkeley), and my final project and presentation were on what I called “treatment futures”, i.e., the idea of using decision markets to forecast treatment-conditional health outcomes for individual patients. I proposed:
I used to create elaborate, high-quality mods for videogames that I shared online for free, and I know plenty of useful programs that were created 100% for free by everyone involved. Believe me, free software works. What it doesn't do is work all the time, in any place, in any culture, and at any scale. That is something we have to keep in mind, and it's why someone like you would generally (and most of the time rightfully so) advise against betting on it. It cannot be commercialized, generalized, or reproduced on demand, just like memes and virals can't. CrowdMed can work; just don't expect it to work for a 100,000-employee company in China 20 years from now. But that's not a problem: it will help some people in the here and now (which is the goal) and gather data that can be used for future diagnostics software.
IMASBA - you are right. I regret being so outspoken in my criticisms. The team is doing their best and I wish them well. The point I was attempting to make is that to build a business of scale, one needs to have strong incentives at every point in the delivery chain. These incentives are usually financial, but they can also be status, altruism, entertainment, etc. The evidence suggests that free/low-paying/altruistic prediction markets don't work, and that structures that supply enough incentives to scale have yet to be discovered.
Your point about free software needs further analysis. Little software is truly "free," i.e., nobody pays. These companies usually have a business model where somebody is paying (the economic buyer/true customer) while users use it for "free." There is a clear difference between users and customers. Google is a good example of that: users use it for free, but advertisers pay. There is also the freemium approach, whereby users get a "free" version of the software, but for it to be useful, they need to upgrade and pay for the premium version. Open source is built on consulting, value-added services, and upgrades. Red Hat, MySQL, etc., are not free if you want to use them in any meaningful way. It's the old Gillette story of giving away the razors but charging for the blades. So software tends not to be free in the full sense of the word; it's free to some, but somewhere money is usually changing hands.
Also, maybe Jared Heyman shouldn't be overly concerned about the commercial judgment of someone who thinks people really like ads.
Russ, I wouldn't be so damning if I were you. According to Economics 101 theory, no good can come from free software either, yet a lot of good things come from it in practice. There will be skilled people out there with motivations other than the prospect of making money. Yes, this will be culturally and time dependent, but CrowdMed is operating in a certain time frame in a certain culture; it's not being forced on the whole universe forever.
The percentage of patient payments that goes to our detectives is highly variable, ranging from 0% (if nobody suggests or bets on the correct diagnosis) to over 100% (if the correct diagnosis is suggested early in the market and lots of detectives bet on it). Once we've run enough cases to establish a stable average payout I'm happy to report it to you.
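Toy numbers make the 0%-to-over-100% range concrete. The payout rule below is a hypothetical illustration, not CrowdMed's actual formula: if each detective who backed the correct diagnosis earns a fixed reward, the total paid out depends on how many winning bettors there are, not on the patient's fee.

```python
# Hypothetical payout rule (illustrative only, not CrowdMed's real formula):
# each detective who bet on the correct diagnosis earns a fixed reward.
PATIENT_FEE = 200.0            # what the patient pays (assumed figure)
REWARD_PER_WINNING_BET = 10.0  # fixed reward per winning detective (assumed)

def payout_fraction(winning_bettors: int) -> float:
    """Fraction of the patient's fee that goes out to detectives."""
    return winning_bettors * REWARD_PER_WINNING_BET / PATIENT_FEE

print(payout_fraction(0))   # 0.0  -> nobody found the diagnosis, 0% paid out
print(payout_fraction(25))  # 1.25 -> payouts exceed 100% of the fee
```

Under any rule of this shape, the payout fraction is decoupled from revenue, which is exactly why it can swing from 0% to above 100% case by case.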
I know that play money markets have enjoyed mixed results over the years. From what I've read, they tend to work best when the market topic is of intrinsic interest to the participants (sports markets for sports fans, box office markets for movie buffs, etc.) We're trying to recruit detectives who like solving medical mysteries for its own sake, just like some people like solving crossword puzzles. On top of that, we stress on our website that they're helping to solve a real-world medical mystery and potentially help to save a life, plus of course the charitable contribution piece. Not everyone will be motivated by these things, but our hypothesis is that we can recruit enough people in the world who are. If our hypothesis proves incorrect then we'll quickly change course -- stubborn adherence to flawed beliefs is a luxury that few startups can afford.
You are free to tell us what % of patient payments you expect will go to detectives, but in the absence of your telling us I have to estimate based on the parameters you've given us.
I have been to your site and read a few background articles, and have a reasonable grasp of how your proposed system is intended to work … based on the frequency with which your premise/pitch has been changing relative to what it was only a few months ago, it seems like you don’t have as many answers as you claim you do.
The fundamental issue you are facing is how to get more reliable medical diagnoses into the hands of patients. What you seem to be offering at the moment is part literature review, part bragging about how smart you are compared to the medical community, and part crowdsourcing buzzword-speak. I'm far from sold.
Let's assume your challenge is to diagnose the likely set of potential afflictions more reliably than existing medical practice does. Prove that you can, AND have your audience believe you can.
I would suggest analyzing how related problems get solved today. It's not through a prediction market or any of the high-end math you seem to be so fond of. A related problem might be understanding how people figure out what is wrong with their car. You are doing the same thing; the obvious parallel is helping patients figure out what is wrong with them. The car in this instance is merely more complex. As are the tests. So what might be helpful is to study the elements of how the discovery process for diagnosing car problems works, and then figure out how to apply those findings to diagnosing medical conditions using a scalable business model. Look at the thousands of DIY websites/communities that work; they work for clearly observable reasons.
The way your average bear finds out what is wrong with their car is to 1. find an audience with experience, 2. post the car's symptoms to that community, 3. allow the community to troubleshoot it by asking the poster to perform tests and then report back their findings, and then 4. repeat. (Does it have spark or not? If it does, then it's likely fuel-related, etc. It has no spark, it's electrical, have you checked your battery? … etc.) So the process is basically a series of symptoms followed by suggested tests, repeated. It's basic troubleshooting, Jared. No need for calculus.
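The iterative loop described above is just a walk down a decision table: suggest a test, get the result, narrow the cause, repeat. A minimal sketch (the table entries and symptom names here are hypothetical illustrations, not a real diagnostic database):

```python
# Hypothetical decision table: each node names a test and maps its result
# to either another node or a final root-cause diagnosis (a leaf string).
DECISION_TABLE = {
    "start":             ("check_spark",         {True: "fuel_branch",
                                                  False: "electrical_branch"}),
    "fuel_branch":       ("check_fuel_pressure", {True: "clogged injector",
                                                  False: "failed fuel pump"}),
    "electrical_branch": ("check_battery",       {True: "bad ignition coil",
                                                  False: "dead battery"}),
}

def troubleshoot(run_test, node="start"):
    """Repeat suggest-test -> report-result until a root cause (leaf) is reached."""
    while node in DECISION_TABLE:
        test, outcomes = DECISION_TABLE[node]
        node = outcomes[run_test(test)]  # community asks; poster reports back
    return node

# Example: a car with no spark and a dead battery.
results = {"check_spark": False, "check_battery": False}
print(troubleshoot(lambda t: results[t]))  # dead battery
```

The point of the sketch is the shape of the process, not the contents: symptoms in, one test suggested at a time, results fed back, until the tree bottoms out.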
Maybe consider a parallel approach, linking symptoms with suggested clinical tests, followed by feedback, followed by more tests, until you discover the root cause. The low-hanging fruit here is that the overwhelming majority of doctors are so stupid that they don't know which clinical tests to order, or what the results actually mean. Speak to any credible Clinical Lab Scientist about the disconnect between the test administration and the doctors who ask for and interpret the tests, and you will find out that this disconnect is massive. The CLSs feel incredibly sorry for patients every time the lab phone rings and there is a doctor on the other side of the call. They know gross stupidity is nearby. Little wonder medicine is in the state it is; imagine getting car advice from a mechanic who doesn't understand which tests to perform and what the results mean.
Perhaps developing a discussion board framework for iteratively linking symptoms with appropriate clinical tests, while providing explanations for what the findings mean, would be a huge improvement over current-day practice. So maybe get a community of CLSs, find out how to incentivize them, post some symptoms, order appropriate clinical tests, and repeat. Soon best practices and patterns will emerge, and you will be on your way. You make money by facilitating the ordering of the correct clinical test. Leave the treatment to the doctors.
The holy grail would be to develop a system that can make more refined medical predictions (a challenge of prediction) AND then translate them into actionable treatment decisions (a challenge of decision analysis). Maybe I know how to do this, and perhaps I will tell you how to do so for free, but the business model and economics/incentives won't scale, and your financial sponsors won't think there is a credible business there, because there isn't. So rather than building a prediction market that will never work, perhaps go and build a discussion board community frequented by CLSs that helps patients tell their idiot doctors which tests to perform.
Keep it simple. You asked for feedback and here it is: download phpBB, find a few CLSs to review patient histories/symptom descriptions, order some tests, repeat, and then report back to us here ... and we will tell you what to do next ...
See how it works, Jared? No need for math: symptom description, test, feedback, more tests ... eventually, root-cause discovery.
What makes you think that CrowdMed keeps >95% of the money that patients pay?
The question obviously deserves an answer, although "we don't limit the charitable payouts" is phrased in jargon I must admit not understanding.
Robin in his innuendo seems to take cynical delight in wielding the weapons of "forager morality" by inveighing against greed. (It seems that being aware of psychological underpinnings can serve as either caveat or as justification. The guns of critical thought can be fired for hypocrisy.)
As a dispassionate observer who has absolutely no affection for markets ( http://tinyurl.com/blhdluc ), I would say it would be misleading to call your enterprise a prediction market, since Robin's objections mark defining criteria for those entities.
But whether your enterprise will provide value is a purely empirical question. If Robin wants to argue against it, he needs to point to evidence that only true prediction markets work.
ADDED. The sentence by Robin most relevant to his "cynical delight" is:
I think that players deserve a much higher fraction of the patient payments than this startup seems willing to give them.
Russ, I'm open to learning from those who know more about prediction markets than myself, and certainly count Dr. Hanson within that group. My contention is that one should have a thorough understanding of how our system works and why before criticizing it.
I'm happy to speak with anyone out there who has created a similar system in the past and failed -- it's much better to learn from the mistakes of those who came before us than repeat them.
We've tried to think through each of the issues that you mention and have come up with the best model we could think of. If there are specific paths we've chosen that you (or anyone) knows to be incorrect, then I welcome that specific feedback.
What makes you think that CrowdMed keeps >95% of the money that patients pay? We don't limit the charitable payouts for any of our markets, and in fact it's mathematically possible that our payout *exceeds* what we charge the patient. You could accuse us of having a flawed business model, but not being greedy.
Again, we don't believe that our Medical Detectives are motivated by money (or at least, we don't wish to attract those that are.) We're a business, and as such we're obligated to charge for our service and produce a return for our shareholders. I find nothing immoral or internally inconsistent in charging patients for the service we provide while not providing a cash-based incentive for our Medical Detectives, as long as this model motivates the right behavior and satisfies all of our constituents.
I admit to being disappointed in your post ... I expected better from you and your team.
You would be wise to listen to Hanson as he is telling you something very important. The structure of what you are planning on implementing is flawed i.e. it won't work!
Prediction markets are not magical devices. They are based on simple mathematical principles, they have significant limitations, and they only work well under very specific scenarios, scenarios that your proposed structure lacks. I know this because I created the largest set of prediction-market data on the planet, and Hanson knows this because he invented them. So rather than arguing points that are irrelevant, you should pay attention to the advice Robin is kindly giving you.
In order for prediction markets to work, at the very least, you need:
1. a group of people who have "information,"
2. who can process/express that information through
3. a methodology that aggregates this information.
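The third element, the aggregation methodology, has a canonical example: Hanson's own logarithmic market scoring rule (LMSR), where outstanding shares q determine prices p_i = exp(q_i/b) / Σ_j exp(q_j/b) and a trade costs C(q') − C(q) with C(q) = b·log Σ_i exp(q_i/b). A minimal sketch (the liquidity parameter b and the trade sizes are illustrative):

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Current market probabilities implied by outstanding shares q."""
    z = [math.exp(qi / b) for qi in q]
    s = sum(z)
    return [zi / s for zi in z]

def trade_cost(q, outcome, shares, b=100.0):
    """Amount a trader pays to buy `shares` of `outcome`; returns (cost, new q)."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b), q_new

# Hypothetical two-outcome market starting at 50/50: a trader who believes
# outcome 0 buys 50 shares, pushing its implied probability above 50%.
q = [0.0, 0.0]
cost, q = trade_cost(q, 0, 50.0)
print(lmsr_prices(q))  # prices still sum to 1, with p[0] > p[1]
```

Each trade moves prices toward the trader's beliefs, so the price vector is the aggregation: informed trading is what turns scattered private information into a public probability estimate, which is precisely the element at issue here.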
Jared, your structure barely has any of these essential elements in it. UI is the least of your worries my friend. Medical detectives meets prediction markets with a twist of altruism thrown in. Seriously?
The structure that you are describing is a joke! It hasn't worked well anywhere else. Ever.
I am disappointed that you think "medical detectives" reading case histories of patients and other self-reported data, and then trading with each other for the good of others, is going to produce any prediction remotely reliable. I'm more disappointed that Thiel gave you capital for this harebrained scheme.
You are extremely naive to: A) think you can make this structure work when all before you have failed, B) come onto this blog and argue with the guy who invented the concept, C) while starting your post off with some PC rambling drivel about thanking Hanson for his contribution to the field (both academic and industry, no less) and then basically dismissing his opinions.
To succeed, you need to do a lot better than this Jared. A lot lot better. You might be capable of it, but you need to try something else by having more clarity on: what information is needed to improve medical diagnoses, where you can get this information, how you can share this information, who/which structure is best positioned to process that information, and the collective incentives to make the entire scheme work.
Right now you have limited clarity on any of these important points.
Unless they've got specific domain expertise, most people can probably do no better on average than WebMD, so I suspect that plugging the symptoms (or your best estimate of them) into WebMD and outputting whatever WebMD gives you is probably the Nash equilibrium under this kind of incentive.
If that's the case, then prediction markets serve the intensely useful function of discerning which fields are susceptible to real expertise.
Markets are fine for (most) goods and services, not for truth finding. Like I said, the only time prediction markets beat experts is when the experts aren't really experts and then you might as well have tossed a coin to make a decision.
You could be right that prediction markets might not work, or might be subject to some kind of manipulation, in these types of circumstances, but in many cases the act of buying and selling has proven to be an effective way to solve problems and disseminate information. I'd like to see us gain much more experience with prediction markets.
I like the idea of not using real money, given the argument that real money would open the door to manipulation and that the amounts would have to be huge to really pose a risk to traders (which would exclude a lot of people from joining).
I also like how you're not trying to replace doctors or overrule them. If the contribution consists of suggestions that can help doctors remember some obscure, but important detail they had forgotten since med school or otherwise inspire them to think in a direction they hadn't thought about then the project can be a *healthy* addition.
Could the disagreement between CrowdMed and Robin Hanson be based on the difference between practice and theory? The difference between the entrepreneur and the (libertarian-leaning) economist?