Tag Archives: Prediction Markets

SciCast Pays HUGE

I’ve posted twice before when SciCast paid out big. The first time we just paid for activity. The second time, we paid for accuracy, but weakly, as it was measured only a few weeks after each trade. Now we are paying HUGE, for longer-term accuracy. We’ll pay out $86,000 to the most accurate participants, as measured from November 7 to March 6:

SciCast is running a new special! The most accurate forecasters during the special will receive Amazon gift cards:

• The top 15 participants will win $2250 to spend at Amazon.com

• The other 135 of the top 150 participants will win $225 to spend at Amazon.com

Participants will be ranked according to their total expected and realized points from their forecasts during the special. Be sure to use SciCast from November 7 through March 6! (more)

Added: At any one time about half the questions will be eligible for this contest. We of course hope to compare accuracy between eligible and ineligible questions.


Why Not Egg Futures?

Older women often find themselves too old to have kids, and regretting it. Such women would have gained by freezing some eggs when they were younger. But when younger, they didn’t think they’d ever want kids, or thought the issue could wait.

Such women might be helped by an egg futures business, paid to take on this risk for them. Such a business could buy eggs from women when young, freeze them, and sell them back to these same women when old.

Of course, to compensate for the wait and the risk that the women wouldn't want eggs later, this business would have to sell eggs back at a high price. But still, if a woman bought her egg back later, that would show she expected to gain from the deal.

Also, not all women would make equally good prospects. So such a business would focus on women likely to wait too long, be well off, and want kids later. So this business would “discriminate” by class in its purchases, paying more to upper class women. A lot like we now discriminate when we pay more for used clothes, cars, or houses from richer people.

Several people have told me that, while they were not personally offended, they expect others to be offended by such a business. Especially if men were involved in the business – a female only business would offend less. I’m somewhat mystified, which is partly why I’m writing this post. Maybe others can help me understand the objection.

Interestingly, we could add some personal prediction markets, which would probably be legal. For each possible young woman, there could be a market where one buys and sells conditional shares in an egg from that customer. If you owned a conditional share, you’d own a share of the profit from later selling that customer her egg. And you’d owe a share of the cost to buy her egg from her, freeze it, and store it. Imagine the fun buying and selling conditional shares regarding the young women that you know. And the fact that this is a share of a real physical object should make it legal.
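A minimal sketch of how one such conditional share might settle. All prices, costs, and share counts here are hypothetical illustrations, not part of the original proposal:

```python
def share_payout(cost_to_acquire, storage_cost, resale_price, n_shares, bought_back):
    """Net payout per conditional share in one customer's egg.

    Shareholders owe their share of the purchase, freezing, and storage
    costs; if the customer later buys her egg back, they split the sale
    revenue. All figures are hypothetical illustrations.
    """
    total_cost = cost_to_acquire + storage_cost
    revenue = resale_price if bought_back else 0.0
    return (revenue - total_cost) / n_shares

# If the customer buys back at a high price, shareholders profit:
print(share_payout(2000, 1000, 12000, n_shares=10, bought_back=True))   # 900.0 per share
# If she never does, shareholders bear the full cost:
print(share_payout(2000, 1000, 12000, n_shares=10, bought_back=False))  # -300.0 per share
```

The high resale price in the example is what compensates shareholders for the risk that the customer never buys.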

Ok, I can see how people might be offended at this last suggestion. After all, there’s a risk that people might have fun on something that is supposed to be serious! ;)


SciCast Pays Big Again

Back in May I said that while SciCast hadn’t previously been allowed to pay participants, we were finally running a four week experiment to reward random activities. That experiment paid big and showed big effects; we saw far more activity on days when we paid cash.

In the next four weeks we’ll run another experiment that pays even more:

SciCast is running a new special! For four weeks, you can win prizes on some days of the week:

  • On Tuesdays, win a $25 Amazon gift card with activity.
  • On Wednesdays, win an activity badge for your profile.
  • On Thursdays, win a $25 Amazon gift card with accurate forecasting.
  • On Fridays, win an accuracy badge for your profile.

On each activity prize day, up to 80 valid forecasts and comments made that day will be randomly selected to win. On each accuracy prize day, your chance of winning any of 80 prizes is proportional to your forecasting accuracy. Be sure to use SciCast from July 22 to August 15!
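The accuracy-day drawing described above can be sketched as a weighted lottery. The names, scores, and scoring summary here are hypothetical, assuming each forecaster's accuracy is reduced to a nonnegative score:

```python
import random

def draw_accuracy_prizes(scores, n_prizes=80, seed=None):
    """Pick prize winners with chance proportional to accuracy score.

    `scores` maps forecaster -> nonnegative accuracy score (a
    hypothetical summary; SciCast's actual scoring rule is not
    specified here). A forecaster can win more than one draw.
    """
    rng = random.Random(seed)
    names = list(scores)
    weights = [scores[n] for n in names]
    return [rng.choices(names, weights=weights, k=1)[0] for _ in range(n_prizes)]

# A forecaster with twice the score is twice as likely to win each draw.
winners = draw_accuracy_prizes({"ann": 2.0, "bob": 1.0, "cho": 1.0}, n_prizes=5, seed=1)
print(winners)
```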

So this time we'll compare activity incentives to accuracy incentives. Will we get more activity on days when we reward activity, and more accuracy on days when we reward accuracy? Now our accuracy incentives are admittedly weak, in that we'll evaluate the accuracy of each trade/edit via price changes over only a few weeks after the trade. But hey, it's something. Hopefully we can do a better experiment next year.

SciCast now has 532 questions on science and technology, and you can make conditional forecasts on most of them. Come!


Bets As Loyalty Signals

Why do men give women engagement rings? A standard story is that a ring shows commitment; by paying a cost that one would lose if the marriage fails, one shows that one places a high value on the marriage.

However, as a signal the ring has two problems. On the one hand, if the ring is easy to sell for its purchase price, then it detracts from the woman’s signal of the value she places on the marriage. Accepting a ring makes her look mercenary. On the other hand, if the ring can’t be sold for near its purchase price, and if the woman values the ring itself at less than its price, then the couple destroys value in order to allow the signal.

These are common problems with loyalty signals – either value is destroyed, or stronger signals on one side weaken signals from other sides. Value-destroying loyalty signals are very common in couples, clubs, churches, firms, professions, and nations. For example, we might give up poker nights for a spouse, pork food for a religion, casual clothes to be a manager, or old-world customs for a new nation.

A few days ago I had an idea for a more efficient loyalty signal. Imagine that when he was twenty a man made a $5000 bet that he would never marry before the age of fifty. Then when he is thirty-five and wants to marry, he can send a strong signal of his desire to marry just by his willingness to lose this bet. Since the bet is lost to a third party, it doesn’t hinder the bride’s ability to signal her loyalty. And assuming the bet is made at fair odds, the lost bets are on average paid to versions of this man in alternative scenarios where he doesn’t marry by fifty. So he retains the value, which is not destroyed.

Today this approach probably suffers from being weird, so doing this would also send an unwelcome signal of weirdness. But it is only a signal of one’s weirdness when one made the bet – maybe one can credibly claim to be less weird later when marrying. And the bet would remain potent as a signal of devotion.

There are many related applications. For example, a young person who bet that they would never join a religion might later credibly signal their devotion to that religion, and perhaps avoid having to eat and dress funny to show such devotion. Also, someone who bet that they would never change countries might signal their loyalty when they moved to a new nation. To let my future self signal his devotion to his political party, perhaps I should bet today that I’ll never join a political party. Do I have any takers?

Added 20July: Of course the need to lose a bet to get married would discourage some from getting married. But the same harm happens for any expectation of needing to send a loyalty signal if one gets married. This effect isn’t particular to bets as loyalty signals; it happens for all kinds of loyalty signals.

Mechanically one way to implement marriage bets as loyalty signals would be for parents to buy their sons male spinster insurance, which pays money to the son when he is fifty if he never marries, and otherwise gives him a nice visible cheap pin/brooch when he gets married. His new wife can wear the pin to brag about his devotion. The pin might be color coded to indicate how much money he sacrificed.


Don’t Be “Rationalist”

The first principle is that you must not fool yourself — and you are the easiest person to fool. Richard Feynman

This blog is called “Overcoming Bias,” and many of you readers consider yourselves “rationalists,” i.e., folks who try harder than usual to overcome your biases. But even if you want to devote yourself to being more honest and accurate, and to avoiding bias, there’s a good reason for you not to present yourself as a “rationalist” in general. The reason is this: you must allocate a very limited budget of rationality.

It seems obvious to me that almost no humans are able to force themselves to see honestly and without substantial bias on all topics. Even for the best of us, the biasing forces in and around us are often much stronger than our will to avoid bias. Because it takes effort to overcome these forces, we must choose our battles, i.e., we must choose where to focus our efforts to attend carefully to avoiding possible biases. I see four key issues:

1. Priorities – You should spend your rationality budget where truth matters most to you. You can’t have it all, so you must decide what matters most. For example, if you care mainly about helping others, and if they mainly rely on you via a particular topic, then you should focus your honesty on that topic. In particular, if you help the world mainly via your plumbing, then you should try to be honest about plumbing. Present yourself to the world as someone who is honest on plumbing, but not necessarily on other things. In this scenario we work together by being honest on different topics. We aren’t “rationalists”; instead, we are each at best “rationalist on X.”

2. Costs – All else equal, it is harder to be honest on more and wider topics, on topics where people tend to have emotional attachments, and on topics close to the key bias issues of the value and morality of you and your associates and rivals. You can reasonably expect to be honest about a wide range of topics that few people care much about, but only on a few narrow topics where many people care lots. The closer you get to dangerous topics, the smaller your focus of honesty can be. You can't be both a generalist and a rationalist; specialize in something.

3. Contamination – You should try to avoid dependencies between your beliefs on focus topics where you will try to protect your honesty, and the topics where you are prone to bias. Try not to have your opinions on focus topics depend on a belief that you or your associates are especially smart, perceptive, or moral. If you must think on risky topics about people, try to first study other people you don’t care much about. If you must have an opinion on yourself, assume you are like most other people.

4. Incentives – I’m not a big fan of the “study examples of bias and then will yourself to avoid them” approach; it has a place, but gains there seem small compared to changing your environment to improve your incentives. Instead of pulling yourself up by your bootstraps, step onto higher ground. For example, by creating and participating in a prediction market on a topic, you can induce yourself to become more honest on that topic. The more you can create personal direct costs of your dishonesty, the more honest you will become. And if you get paid to work on a certain topic, maybe you should give up on honesty about who if anyone should be paid to do that.

So my advice is to choose a focus for your honesty, a narrow enough focus to have a decent chance at achieving honesty. Make your focus more narrow the more dangerous is your focus area. Try to insulate beliefs on your focus topics from beliefs on risky topics like your own value, and try to arrange things so you will be penalized for dishonesty. Don't present yourself as a "rationalist" who is more honest on all topics, but instead as at best "rationalist on X."

So, what is your X?


Big Signals

Between $6 and $9 trillion dollars—about 8% of annual world-wide economic production—is currently being spent on projects that individually cost more than $1 billion. These mega-projects (including everything from buildings to transportation systems to digital infrastructure) represent the biggest investment boom in human history, and a lot of that money will be wasted. …

Over the course of the last fifteen years, [Flyvbjerg] has looked at hundreds of mega-projects, and he found that projects costing more than $1 billion almost always face massive cost overruns. Nine out of ten projects face a cost overrun, with costs 50% higher than expected in real terms not unusual. …

In fact, the number of mega-projects completed successfully—on time, on budget, and with the promised benefits—is actually too small for Flyvbjerg to determine why they succeeded with any statistical validity. He estimates that only one in a thousand mega-projects fits those criteria. (more; paper)

You can probably throw most big firm mergers into this big inefficient project pot.

There’s a simple signaling explanation here. We like to do big things, as they make us seem big. We don’t want to be obvious about this motive, so we pretend to have financial calculations to justify them. But we are purposely sloppy about those calculations, so that we can justify the big projects we want.

It would be possible to make prediction markets that accurately told us on average that these financial calculations are systematically wrong. That could enable us to reject big projects that can't be justified by reasonable calculations. But the people initiating these projects don't want that, so outsiders would have to set up these whistleblowing prediction markets. And alas, as with most whistleblowing, the supply of such whistleblowers is quite limited.


SciCast Pays Out Big!

When I announced SciCast in January, I said we couldn’t pay participants. Alas, many associated folks are skeptical of paying because they’ve heard that “extrinsic” motives just don’t work well relative to “intrinsic” motives. No need to pay folks since what really matters is if they feel involved. This view is quite widespread in academia and government.

But, SciCast will finally do a test:

SciCast is running a special! For four weeks, you can win prizes on some days of the week:
• On Wednesdays, win a badge for your profile.
• On Fridays, win a $25 Amazon Gift Card.
• On Tuesdays, win both a badge and a $25 Amazon Gift Card.
On each prize day 60 valid forecasts and comments made that day will be randomly selected to win (limit of $575 per person).
Be sure to use SciCast from May 26 to June 20!

Since we’ve averaged fewer than 60 of these activities per day, rewarding 60 random activities is huge! Either activity levels will stay the same and pretty much every action on those days will get a big reward, or we’ll get lots more activities on those days. Either you or science will win! :)

So if you or someone you know might be motivated by a relevant extrinsic or intrinsic reward, tell them about our SciCast special, and have them come be active on matching days of the week. We now have 473 questions on science and technology, and you can make conditional forecasts on most of them. Come!

Added 21May: SciCast is mentioned in this Nature article.


Who/What Should Get Votes?

Alex T. asks Should the Future Get a Vote? He dislikes suggestions to give more votes to “civic organizations” who claim to represent future folks, since prediction markets could be more trustworthy:

Through a suitable choice of what is to be traded, prediction markets can be designed to be credibly motivated by a variety of goals including the interests of future generations. … If all we cared about was future GDP, a good rule would be to pass a policy if prediction markets estimate that future GDP will be higher with the policy than without the policy. Of course, we care about more than future GDP; perhaps we also care about environmental quality, risk, inequality, liberty and so forth. What Hanson’s futarchy proposes is to incorporate all these ideas into a weighted measure of welfare. … Note, however, that even this assumes that we know what people in the future will care about. Here then is the final meta-twist. We can also incorporate into our measure of welfare predictions of how future generations will define welfare. (more)

For example, we could implement a 2% discount rate by having official welfare be 2% times welfare this next year plus 98% times welfare however it will be defined a year from now. Applied recursively, this can let future folks keep changing their minds about what they care about, even future discount rates.
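Unrolling that recursion, the welfare measured in year t+1 gets weight 2% times 98% to the t. A minimal sketch of the scheme, with a finite horizon standing in for an unbounded future:

```python
def official_welfare(yearly_welfare, carry=0.98):
    """Recursive welfare: each year puts weight (1 - carry) on that
    year's measured welfare and defers the rest to next year's
    definition. Unrolled, year t+1 gets weight (1 - carry) * carry**t.

    `yearly_welfare` is a finite list standing in for an unbounded
    future; the residual tail weight carry**len(yearly_welfare) is
    simply dropped here.
    """
    total, weight = 0.0, 1.0
    for w in yearly_welfare:
        total += weight * (1 - carry) * w
        weight *= carry
    return total

# With constant welfare of 100 per year, the total approaches 100
# as the horizon grows, since the weights sum toward 1:
print(official_welfare([100.0] * 500))
```

In the full scheme, of course, each year's definition of welfare (and even the carry fraction) could itself be revised by future folks; the fixed list here is just for illustration.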

We could also give votes to people in the past. While one can’t change the experiences of past folks, one can still satisfy their preferences. If past folks expressed particular preferences regarding future outcomes, those preferences could also be given weight in an overall welfare definition.

We could even give votes to animals. One way is to make some assumptions about what outcomes animals seem to care about, pick ways to measure such outcomes, and then include weights on those measures in the welfare definition. Another way is to assume that eventually we’ll “uplift” such animals so that they can talk to us, and put weights on what those uplifted animals will eventually say about the outcomes their ancestors cared about.

We might even put weights on aliens, or on angels. We might just put a weight on what they say about what they want, if they ever show up to tell us. If they never show up, those weights stay set at zero.

Of course just because we could give votes to future folks, past folks, animals, aliens, and angels doesn’t mean we will ever want to do so.


Michael Covel Interview

Investment advisor Michael Covel interviewed me on prediction markets for his podcast show here. I couldn’t be very encouraging about his main strategy of trend-following, but we covered many interesting issues.


Fixing Academia Via Prediction Markets

When I first got into prediction markets twenty five years ago, I called them "idea futures", and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I've focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I've also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don't much help to achieve these functions, I'm not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any "theory" broadly conceived, like "grapes cure cancer." (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. "Everyone in the field" must make a public prediction on the test. Then the test is done, winners paid, and a new market set up for a new test of the same question. Somewhere along the line private hedge funds would also pay for academic work in order to learn where they should bet.

That was the summary; here are some critiques. First, people willing to bet on theories are not a good source of revenue to pay for research. There aren’t many of them and they should in general be subsidized not taxed. You’d have to legally prohibit other markets to bet on these without the tax, and even then you’d get few takers.

Second, Alexander says to subsidize markets the same way they’d be taxed, by adding money to the betting pot. But while this can work fine to cancel the penalty imposed by a tax, it does not offer an additional incentive to learn about the question. Any net subsidy could be taken by anyone who put money in the pot, regardless of their info efforts. As I’ve discussed often before, the right way to subsidize info efforts for a speculative market is to subsidize a market maker to have a low bid-ask spread.

Third, Alexander's plan to have bettors vote to agree on a question tester seems quite unworkable to me. It would be expensive, rarely satisfy both sides, and seems easy to game by buying up bets just before the vote. More important, most interesting theories just don't have very direct ways to test them, and most tests are of whole bundles of theories, not just one theory. Fourth, for most claim tests there is no obvious definition of "everyone in the field," nor is it obvious that everyone should have an opinion on those tests. Forcing a large group to all express a public opinion seems a huge cost with unclear benefits.

OK, now let me review my proposal, the result of twenty five years of thinking about this. The market maker subsidy is a very general and robust mechanism by which research patrons can pay for accurate info on specified questions, at least when answers to those questions will eventually be known. It allows patrons to vary subsidies by questions, answers, time, and conditions.
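A minimal sketch of such a subsidized market maker, using the logarithmic market scoring rule; the quantities traded and the liquidity parameter b here are hypothetical, and the key property is that the patron's worst-case subsidy is bounded by b times the log of the number of outcomes:

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)),
    where q_i is the number of outstanding shares on outcome i."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    """Current price (probability estimate) for outcome i."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

def trade_cost(q, b, i, shares):
    """What a trader pays the market maker to buy `shares` of outcome i."""
    q2 = list(q)
    q2[i] += shares
    return lmsr_cost(q2, b) - lmsr_cost(q, b)

b = 100.0           # hypothetical liquidity/subsidy parameter
q = [0.0, 0.0]      # binary question, no shares sold yet
print(lmsr_price(q, b, 0))      # starts at 0.5
print(trade_cost(q, b, 0, 50))  # cost of buying 50 "yes" shares
print(b * math.log(2))          # bound on the patron's worst-case loss
```

A larger b means a deeper market (prices move less per trade) but a larger worst-case subsidy, so patrons can tune b per question to express how much they value info on it.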

Of course this approach does require that such markets be legal, and it doesn’t do well at the main academic function of credentialing some folks as having the impressive academic-style mental features with which others like to associate. So only the customers of academia who mainly want accurate info would want to pay for this. And alas such customers seem rare today.

For research patrons using this market-maker subsidy mechanism, their main issues are about which questions to subsidize how much when. One issue is topic. For example, how much does particle physics matter relative to anthropology? This mostly seems to be a matter of patron taste, though if the issue were what topics should be researched to best promote economic growth, decision markets might be used to set priorities.

The biggest issue, I think, is abstraction vs. concreteness. At one extreme one can ask very specific questions like what will be the result of this very specific experiment or future empirical measurement. At the other extreme, one can ask very abstract questions like “do grapes cure cancer” or “is the universe infinite”.

Very specific questions offer bettors the most protection against corruption in the judging process. Bettors need worry less about how a very specific question will be interpreted. However, subsidies of specific questions also target specific researchers pretty directly for funding. For example, subsidizing bets on the results of a very specific experiment mainly subsidizes the people doing that experiment. Also, since the interest of research patrons in very specific questions mainly results from their interest in more general questions, patrons should prefer to directly target the more general questions of interest to them.

Fortunately, compared to other areas where one might apply prediction markets, academia offers especially high hopes for using abstract questions. This is because academia tends to house society’s most abstract conversations. That is, academia specializes in talking about abstract topics in ways that let answers be consistent and comparable across wide scopes of time, space, and discipline. This offers hope that one could often simply bet on the long term academic consensus on a question.

That is, one can plausibly just directly express a claim in direct and clear abstract language, and then bet on what the consensus will be on that claim in a century or two, if in fact there is any strong consensus on that claim then. Today we have a strong academic consensus on many claims that were hotly debated centuries ago. And we have good reasons to believe that this process of intellectual progress will continue long into the future.

Of course future consensus is hardly guaranteed. There are many past debates that we'd still find hard to judge today. But for research patrons interested in creating accurate info, the lack of a future consensus would usually be a good sign that info efforts in that area were less valuable than in other areas. So by subsidizing markets that bet on future consensus conditional on such a consensus existing, patrons could more directly target their funding at topics where info will actually be found.
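The conditional settlement rule can be sketched simply. The even odds and the three-valued consensus outcome here are hypothetical simplifications; the point is that stakes are refunded when no strong consensus forms, so subsidies only pay out where info is actually produced:

```python
def settle(stake, side, consensus):
    """Settle a bet on the eventual academic consensus for a claim.

    `consensus` is "true", "false", or None (no strong consensus
    formed by the judging date). Conditional markets refund stakes
    in the None case. Payout shown is the bettor's gross return on
    a fair even-odds stake (odds here are hypothetical).
    """
    if consensus is None:
        return stake               # bet called off: full refund
    return 2 * stake if side == consensus else 0.0

print(settle(10.0, "true", "true"))   # correct side, even odds: 20.0
print(settle(10.0, "true", "false"))  # wrong side: 0.0
print(settle(10.0, "true", None))     # no consensus: 10.0 refunded
```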

Large subsidies for market-makers on abstract questions would indirectly result in large subsidies on related specific questions. This is because some bettors would specialize in maintaining coherence relationships between the prices on abstract and specific questions. And this would create incentives for many specific efforts to collect info relevant to answering the many specific questions related to the fewer big abstract questions.

Yes, we'd probably end up with some politics and corruption on who qualifies to judge later consensus on any given question – good judges should know the field of the question as well as a bit of history to help them understand what the question meant when it was created. But there'd probably be less politics and lobbying than if research patrons choose very specific questions to subsidize. And that would still probably be less politics than with today's grant-based research funding.

Of course the real problem, the harder problem, is how to add mechanisms like this to academia in order to please the customers who want accuracy, while not detracting from or interfering too much with the other mechanisms that give the other customers of academia what they want. For example, should we subsidize high relevant prestige participants in the prediction markets, or tax those with low prestige?

