Tag Archives: Prediction Markets

Challenge Coins

Imagine you are a king of old, afraid of being assassinated. Your king’s guard tells you that they’ve got you covered, but too many kings have been killed in your area over the last century for you to feel that safe. How can you learn of your actual vulnerability, and of how to cut it?

Yes, you might make prediction markets on whether you will be killed, and make such markets conditional on various policy changes, to find out which policies cut your chance of being killed. But in this post I want to explore a different solution.

I suggest that you auction off challenge coins at some set rate, say one a month. Such coins can be resold privately to others, so that you don’t know who holds them. Each coin gives the holder the right to try a mock assassination. If a coin holder can get within X meters of you, with a clear sight of a vulnerable part of you, then they need only raise their coin into the air and shout “Challenge Coin”, and they will be given N gold coins in exchange for that challenge coin, and then set free. And if they are caught where they should not be then they can pay the challenge coin to instead be released from whatever would be the usual punishment for that intrusion. If authorities can find the challenge coin, such as on their person, this trade can be required.

Now for a few subtleties. Your usual staff and people you would ordinarily meet are not eligible to redeem challenge coins. Perhaps you’d also want to limit coin redeemers to people who’d be able to kill someone; perhaps if requested they must kill a cute animal with their bare hands. If a successful challenger can explain well enough how they managed to evade your defenses, then they might get 2N gold coins or more. Coin redeemers may be suspected of being tied to a real assassin, and so they must agree to opening themselves to being investigated in extra depth, and if still deemed suspicious enough they might be banned from ever using a challenge coin again. But they still get their gold coins this time. Some who issue challenge coins might try to hide transmitters in them, but holders could just wrap coins in aluminum foil and dip them in plastic to limit odor emissions. I estimate that challenge coins are legal, and not prohibited by asset or gambling regulations.

This same approach could be used by the TSA to show everyone how hard it is to slip unapproved items past TSA security. Just reveal your coin and your unapproved item right after you exit TSA security. You could also use this approach to convince an audience that your accounting books are clean; anyone with a coin can point to any particular item in your books, and demand an independent investigation of that item, paid for at the coin-issuer’s expense. If the item is found to not be as it should, the coin holder gets the announced prize; otherwise they just lose their coin.

In general, issuing challenge coins is a way to show an audience what rate of detection success (or security failure) results from what level of financial incentives. (The audience will need to see data on the rates of coin sales and successful vs. unsuccessful redemptions.) We presume that the larger the payoff to a successful challenge, the higher the fraction of coins that successfully result in a detection (or security failure).
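To make this concrete, here is a minimal Python sketch of how an audience might estimate such rates from redemption data; the records and numbers are illustrative assumptions, not part of the proposal.

```python
# Minimal sketch: estimate the detection (security failure) rate that each
# payoff level buys, from hypothetical per-coin records. Each record is
# (payoff N in gold coins, whether that coin led to a successful challenge).
from collections import defaultdict

coins = [
    (100, True), (100, False), (100, False),
    (500, True), (500, True), (500, False),
]

def success_rate_by_payoff(records):
    """Fraction of coins sold that produced a successful challenge, by payoff."""
    tally = defaultdict(lambda: [0, 0])  # payoff -> [successes, coins sold]
    for payoff, success in records:
        tally[payoff][1] += 1
        if success:
            tally[payoff][0] += 1
    return {p: s / n for p, (s, n) in tally.items()}

print(success_rate_by_payoff(coins))
# {100: 0.33..., 500: 0.66...}: larger payoffs draw more capable challengers.
```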


News Accuracy Bonds

Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. This false information is mainly distributed by social media, but is periodically circulated through mainstream media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue. (more)

One problem with news is that sometimes readers who want truth instead read (or watch) and believe news that is provably false. That is, a news article may contain claims that others are capable of proving wrong to a sufficiently expert and attentive neutral judge, and some readers may be fooled against their wishes into believing such news.

Yes, news can have other problems. For example, there can be readers who don’t care much about truth, and who promote false news and its apparent implications. Or readers who do care about truth may be persuaded by writing whose mistakes are too abstract or subtle to prove wrong now to a judge. I’ve suggested prediction markets as a partial solution to this; such markets could promote accurate consensus estimates on many topics which are subtle today, but which will eventually become sufficiently clear.

In this post, however, I want to describe what seems to me the simple obvious solution to the more basic problem of truth-seekers believing provably-false news: bonds. Those who publish or credential an article could offer bonds payable to anyone who shows their article to be false. The larger the bond, the higher their declared confidence in their article. With standard icons for standard categories of such bonds, readers could easily note the confidence associated with each news article, and choose their reading and skepticism accordingly.

That’s the basic idea; the rest of this post will try to work out the details.

While articles backed by larger bonds should be more accurate on average, the correlation would not be exact. Statistical models built on the dataset of bonded articles, some of which eventually pay bonds, could give useful rough estimates of accuracy. To get more precise estimates of the chance that an article will be shown to be in error, one could create prediction markets on the chance that an individual article will pay a bond, with initial prices set at statistical model estimates.
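As a toy illustration, here is a minimal Python sketch of the simplest such statistical model, computing empirical payout rates from a hypothetical dataset of past bonded articles; a real model would fit many more features (publisher, topic, author) than bond size alone.

```python
# Each record: (bond_amount, bond_was_paid). All numbers are illustrative.
history = [(100, True), (100, True), (100, False),
           (1000, False), (1000, False), (1000, True)]

def payout_rate(records, bond_amount):
    """Empirical chance that a bond of this size was paid, i.e. that the
    article was shown to be in error."""
    outcomes = [paid for amount, paid in records if amount == bond_amount]
    return sum(outcomes) / len(outcomes) if outcomes else None

print(payout_rate(history, 100))   # 0.667
print(payout_rate(history, 1000))  # 0.333
```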

Of course the same article should have a higher chance of paying a bond when its bond amount is larger. So even better estimates of article accuracy would come from prediction markets on the chance of paying a bond, conditional on a large bond amount being randomly set for that article (for example) a week after it is published. Such conditional estimates could be informative even if only one article in a thousand is chosen for such a very large bond. However, since there are now legal barriers to introducing prediction markets, and none to introducing simple bonds, I return to focusing on simple bonds.

Independent judging organizations would be needed to evaluate claims of error. A limited set of such judging organizations might be certified to qualify an article for any given news bond icon. Someone who claimed that a bonded article was in error would have to submit their evidence, and be paid the bond only after a valid judging organization endorsed their claim.

Bond amounts should be held in escrow or guaranteed in some other way. News firms could limit their risk by buying insurance, or by limiting how many bonds they’d pay on all their articles in a given time period. Say no more than two bonds paid on each day’s news. Another option is to have the bond amount offered be a function of the (posted) number of readers of an article.

As a news article isn’t all true or false, one could distinguish degrees of error. A simple approach could go sentence by sentence. For example, a bond might pay according to some function of the number of sentences (or maybe sentence clauses) in an article shown to be false. Alternatively, sentence level errors might be combined to produce categories of overall article error, with bonds paying different amounts to those who prove each different category. One might excuse editorial sentences that do not intend to make verifiable newsy claims, and distinguish background claims from claims central to the original news of the article. One could also distinguish degrees of error, and pay proportional to that degree. For example, a quote that is completely made up might be rated as completely false, while a quote that is modified in a way that leaves the meaning mostly the same might count as a small fractional error.
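Here is a minimal Python sketch of one such payout function, assuming each sentence is judged with an error degree in [0, 1] and editorial sentences are excused; the weights are illustrative choices, not a fixed part of the proposal.

```python
def bond_payout(sentences, bond_amount):
    """sentences: list of (error_degree, is_editorial) pairs, one per sentence.
    Pays in proportion to the average error degree of non-editorial sentences."""
    scored = [degree for degree, editorial in sentences if not editorial]
    if not scored:
        return 0.0
    return bond_amount * sum(scored) / len(scored)

# A made-up quote counts as fully false (1.0); a slightly altered quote that
# keeps the meaning counts as a small fractional error (0.1).
article = [(1.0, False), (0.1, False), (0.0, False), (0.5, True)]
print(bond_payout(article, 1000))  # 1000 * (1.0 + 0.1 + 0.0) / 3 = 366.67
```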

To the extent that it is possible to verify partisan slants across large sets of articles, for example in how people or organizations are labeled, publishers might also offer bonds payable to those who can show that a publisher has taken a consistent partisan slant.

A subtle problem is: who pays the cost to judge a claim? On the one hand, judges can’t just offer to evaluate all claims presented to them for free. But on the other hand, we don’t want to let big judging fees stop people from claiming errors when errors exist. To make a reasonable tradeoff, I suggest a system wherein claim submissions include a fee to pay for judging, a fee that is refunded double if that claim is verified.

That is, each bond specifies a maximum amount it will pay to judge that bond, and which judging organizations it will accept.  Each judging organization specifies a max cost to judge claims of various types. A bond is void if no acceptable judge’s max is below that bond’s max. Each submission asking to be paid a bond then submits this max judging fee. If the judges don’t spend all of their max judging fee evaluating this case, the remainder is refunded to the submission. It is the amount of the fee that the judges actually spend that will be refunded double if the claim is supported. A public dataset of past bonds and their actual judging fees could help everyone to estimate future fees.
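A minimal Python sketch of this fee flow, with illustrative amounts:

```python
def settle_claim(max_fee, fee_spent, claim_upheld, bond_amount):
    """Settle one claim. The claimant submitted max_fee up front; the judges
    actually spent fee_spent of it (fee_spent <= max_fee)."""
    unspent = max_fee - fee_spent  # always returned to the claimant
    if claim_upheld:
        # Bond paid, unspent fee returned, spent fee refunded double.
        return bond_amount + unspent + 2 * fee_spent
    # Claim rejected: only the unspent portion comes back.
    return unspent

print(settle_claim(max_fee=50, fee_spent=30, claim_upheld=True, bond_amount=500))
# 580 = 500 bond + 20 unspent + 60 doubled judging fee
print(settle_claim(max_fee=50, fee_spent=30, claim_upheld=False, bond_amount=500))
# 20: the claimant is out the 30 the judges spent
```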

Those are the main subtleties that I’ve considered. While there are ways to set up such a system better or worse, the basic idea seems robust: news publishers who post bonds payable if their news is shown to be wrong thereby credential their news as more accurate. This can allow readers to more easily avoid believing provably-false news.

A system like that I’ve just proposed has long been feasible; why hasn’t it been adopted already? One possible theory is that publishers don’t offer bonds because that would remind readers of typical high error rates:

The largest accuracy study of U.S. papers was published in 2007 and found one of the highest error rates on record — just over 59% of articles contained some type of error, according to sources. Charnley’s first study [70 years ago] found a rate of roughly 50%. (more)

If bonds paid mostly for small errors, then bond amounts per error would have to be very small, and calling reader attention to a bond system would mostly remind them of high error rates, and discourage them from consuming news.

However, it seems to me that it should be possible to aggregate individual article errors into measures of overall article error, and to focus bond payouts on the most mistaken “fake news” type articles. That is, news error bonds should mostly pay out on articles that are wrong overall, or at least quite misleading regarding their core claims. Yes, a bit more judgment might be required to set up a system that can do this. But it seems to me that doing so is well within our capabilities.

A second possible theory to explain the lack of such a system today is the usual idea that innovation is hard and takes time. Maybe no one ever tried this with sufficient effort, persistence, or coordination across news firms. So maybe it will finally take some folks who try this hard, long, and wide enough to make it work. Maybe, and I’m willing to work with innovation attempts based on this second theory.

But we should also keep a third theory in mind: that most news consumers just don’t care much for accuracy. As we discuss in our book The Elephant in the Brain, the main function of news in our lives may be to offer “topics in fashion” that we can each riff on in our local conversations, to show off our mental backpacks of tools and resources. For that purpose, it doesn’t much matter how accurate such news is. In fact, it might be easier to show off with more fake news in the mix, as we can then show off by commenting on which news is fake. In this case, news bonds would be another example of an innovation designed to give us more of what we say we want, which is not adopted because we at some level know that we have hidden motives and actually want something else.


My Market Board Game

From roughly 1989 to 1992, I explored the concept of prediction markets (which I then called “idea futures”) in part via building and testing a board game. I thought I’d posted details on my game before, but searching I couldn’t find anything. So here is my board game.

The basic idea is simple: people bet on “who done it” while watching a murder mystery. So my game is an add-on to a murder mystery movie or play, or a game like How to Host a Murder. While watching the murder mystery, people stand around a board where they can reach in with their hands to directly and easily make bets on who done it. Players start with the same amount of money, and in the end whoever has the most money wins (or maybe each wins in proportion to their final money).

Together with Ron Fischer (now deceased) I tested this game a half-dozen times with groups of about a dozen. People understood it quickly and easily, and had fun playing. I looked into marketing the game, but was told that game firms do not listen to proposals by strangers, as they fear being sued later if they came out with a similar game. So I set the game aside.

All I really need to explain here is how mechanically to let people bet on who done it. First, you give all players 200 in cash, and from then on they have access to a “bank” where they can always make “change”:

Poker chips of various colors can represent various amounts, like 1, 5, 10, 25, or 100. In addition, you make similar-sized cards that read things like “Pays 100 if Andy is guilty.” There are different cards for different suspects in the murder mystery, each suspect with a different color card. The “bank” allows exchanges like trading two 5 chips for one 10 chip, or trading 100 in chips for a set of all the cards, one for each suspect.

Second, you make a “market board”, which is an array of slots, each of which can hold either chips or a card. If there were six suspects, an initial market board could look like this:

For this board, each column is about one of the six suspects, and each row is about one of these ten prices: 5, 10, 15, 20, 25, 30, 40, 50, 60, 80. Here is a blow-up of one slot in the array:

Every slot holds either the kind of card for that column, or it holds the amount of chips for that row. The one rule of trading is: for any slot, anyone can swap the right card for the right amount of chips, or can make the opposite swap, depending on what is in the slot at the moment. The swap must be immediate; you can’t put your hand over a slot to reserve it while you get your act together.
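For concreteness, here is a minimal Python sketch of that one trading rule; the data layout is an illustrative assumption.

```python
def swap(slot, player):
    """Swap whatever is in the slot for its counterpart from the player."""
    if slot["contents"] == "card":
        player["chips"] -= slot["price"]         # buy: pay chips, take the card
        player["cards"].append(slot["suspect"])
        slot["contents"] = "chips"
    else:
        player["chips"] += slot["price"]         # sell: hand in a card, take chips
        player["cards"].remove(slot["suspect"])
        slot["contents"] = "card"

player = {"chips": 200, "cards": []}
slot = {"suspect": "Andy", "price": 25, "contents": "card"}
swap(slot, player)  # buy the "Pays 100 if Andy is guilty" card for 25
print(player)       # {'chips': 175, 'cards': ['Andy']}
```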

This could be the market board near the end of the game:

Here the players have settled on Pam as most likely to have done it, and Fred as least likely. At the end, players compute their final score by combining their cash in chips with 100 for each winning card; losing cards are worth nothing. And that’s the game!
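The final scoring fits in a line of Python (suspect names illustrative):

```python
def final_score(chips, cards, guilty):
    """Chips at face value, plus 100 per card naming the guilty suspect."""
    return chips + 100 * sum(1 for suspect in cards if suspect == guilty)

print(final_score(chips=175, cards=["Pam", "Pam", "Fred"], guilty="Pam"))  # 375
```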

For the initial board, fill a row with chips when the number of suspects times the price for that row is less than 100, and fill that row with cards otherwise. Any number of suspects can work for the columns, and any ordered set of prices between 0 and 100 can work for the rows. I made my boards by taping together clear-color M512 boxes from Tap Plastics, and taping printed white paper on tops around the edge.
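Here is a minimal Python sketch of that fill rule, using the board described above:

```python
def initial_board(suspects, prices):
    """Return {row_price: 'chips' or 'cards'}; every slot in a row starts
    with the same contents."""
    n = len(suspects)
    return {p: "chips" if n * p < 100 else "cards" for p in prices}

suspects = ["Andy", "Beth", "Carl", "Dina", "Earl", "Pam"]
prices = [5, 10, 15, 20, 25, 30, 40, 50, 60, 80]
print(initial_board(suspects, prices))
# With 6 suspects, rows 5, 10, and 15 start as chips (6 * 15 = 90 < 100),
# and rows 20 and up start as cards.
```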

Added 30Aug: Here are a few observations about game play. 1) Many, perhaps most, players were so engaged by “day trading” in this market that they neglected to watch and think enough about the murder mystery. 2) You can allow players to trade directly with each other, but players show little interest in doing this. 3) Players found it more natural to buy than to sell. As a result, prices drifted upward, and often the sum of the buy prices for all the suspects was over 100. An electronic market maker could ensure that such arbitrage opportunities never arise, but in this mechanical version some players specialized in noticing and correcting this error.
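A minimal Python sketch of the check those specializing players were doing in their heads; the offered prices are illustrative:

```python
def arbitrage_gain(best_buy_prices):
    """best_buy_prices: for each suspect, the highest chip amount currently
    sitting in a slot of that suspect's column, i.e. what selling one card of
    each suspect would fetch. A full card set costs 100 at the bank."""
    return max(0, sum(best_buy_prices.values()) - 100)

offers = {"Andy": 10, "Beth": 15, "Carl": 20, "Dina": 25, "Earl": 20, "Pam": 15}
print(arbitrage_gain(offers))  # 5: buy a set for 100, sell the cards for 105
```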

Added 31Aug: A twitter poll picked a name for this game: Murder, She Bet.

Added 9Sep: Expert gamer Zvi Mowshowitz gives a detailed analysis of this game. He correctly notes that incentives for accuracy are lower in the endgame, though I didn’t notice substantial problems with endgame accuracy in the trials I ran.


Bad-News Boxes

Many firms fail to pass bad news up the management chain, and suffer as a result, even though simple fixes have long been known:

Wall Street Journal placed the blame for the “rot at GE” on former CEO Jeffrey Immelt’s “success theater,” pointing to what analysts and insiders said was a history of selectively positive projections, a culture of overconfidence and a disinterest in hearing or delivering bad news. …The article puts GE well out of its usual role as management exemplar. And it shines a light on a problem endemic to corporate America, leadership experts say. People naturally avoid conflict and fear delivering bad news. But in professional workplaces where a can-do attitude is valued above all else, and fears about job security remain common, getting unvarnished feedback and speaking candidly can be especially hard. …

So how can leaders avoid a culture of “success theater?” … They have to model the behavior, being realistic about goals and forecasts and candid when things go wrong. They should host town halls where employees can speak up with criticism, structuring them so bad news can flow to the top. For instance, he recommends getting respected mid-level managers to first interview lower-level employees about what’s not working to make sure tough subjects are aired. …

Doing that is harder than it sounds, making it critical for leaders to create systemic ways to offer feedback, rather than just talking about it. She tells the story of a former eBay manager who would leave a locked orange box near the office bathrooms where people could leave critical questions. He would later read them aloud in meetings — with someone else unlocking the box to prove he hadn’t edited its contents — hostile questions and all. “People never trusted anything was really anonymous except paper,” she said. “He did it week in and week out.”

When she worked at Google, where she led online sales and operations for AdSense, YouTube and Doubleclick, she had a crystal statue she called the “I was wrong, you were right” statue that she’d hand out to colleagues and direct reports. (more)

Consider what signal a firm sends by NOT regularly reading the contents of locked anonymous bad news boxes at staff meetings. They in effect admit that they aren’t willing to pay a small cost to overcome a big problem, if that interferes with the usual political games. You might think investors would see this as a big red flag, but in fact they hardly care.

I’m not sure how exactly to interpret this equilibrium, but it is clearly bad news for prediction markets in firms. Such markets are also sold as helping firms to uncover useful bad news. If firms don’t do easier simpler things to learn bad news, why should we expect them to do more complex expensive things?


Capturing The Policy Info Process

Brink Lindsey and Steven Teles’ book The Captured Economy: How the Powerful Enrich Themselves, Slow Down Growth, and Increase Inequality came out November 10. Steven Pearlstein titled his review “What’s to blame for slower growth and rising inequality?” Robert Samuelson says:

As societies become richer, so does the temptation for people to advance their economic interests by grabbing someone else’s wealth, as opposed to creating new wealth. … This sort of economy may be larger than you think. That’s the gist of the provocative new book …

And so on. The book’s marketing, intro, and reviews all suggest that the book is about who to blame for bad trends. And on how exactly (i.e., via what bad policies) bad guys have achieved their nefarious ends. Which to my mind is a dumb topic. Yes, it is what everyone wants to talk about for moral posturing purposes. But it is a far less useful topic than what exactly are the fundamental causes of our problems, and what we could do to address them.

However, sometimes when people play dumb, and observers treat them as dumb, they are not actually dumb. And this book in fact contains a brief but thoughtful analysis of the political obstacles to solving our many policy problems. It also suggests solutions. The problems: Continue reading "Capturing The Policy Info Process" »


Villain Markets

Imagine that you have a large pool of cases, where in each case you weakly suspect some sort of villainous stink. But you have a limited investigative resource, which you can only apply to one case, to sniff for stink there.

For example, you might have one reporter, who you could assign for one month to investigate the finances of any one member of Congress. Or you might have one undercover actor, whom you could assign to offer a bribe to one member of the police force of a particular city. Or you might assign a pretty actress to meet with a Hollywood producer, to check for harassment.

Imagine further that you are willing to invite the world to weigh in, to advise you on where to apply your investigative resource. You are willing to say, “Hey world, which of these cases looks stinky to you?” If this is you, then I offer you villain markets.

In a villain market, some investigative resource will be applied at random to one case out of a set of cases. It will report back a verdict, which in the simplest case will be “stinky” or “not stinky”. And before that case is selected for investigation, we will invite everyone to bet anonymously on the chances of stinkiness in each case. That is, anyone can bet on the probability that the verdict in case C will be “stinky”, given that case C is selected for investigation. So if you have reason to suspect a particular member of Congress, a particular police officer, or a particular Hollywood producer, you might expect to gain by anonymously betting against them.

Imagine that we were sure to investigate case C87, and that the market chance of C87 being found stinky was 2%, but that you believed C87’s stinkiness chances were more like 5%. In this situation, you might expect to profit from paying $3 for the asset “Pays $100 if C87 found stinky”. After your bet, the new market chance might be 4%, reflecting the information you had provided the market via your bet.

Now since we are not sure to investigate case C87, what you’d really do is give up “Pays $3 if C87 investigated” for “Pays $100 if C87 investigated and found stinky.” And you could obtain the asset “Pays $3 if C87 investigated” by paying $3 cash and getting a version of this “Pays $3 if C investigated” investigation asset for every possible case C.
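To make the arithmetic concrete, here is a minimal Python sketch of this conditional bet, using the numbers above plus an assumed 10% chance that C87 is chosen for investigation:

```python
def expected_profit(stake, payout, p_stinky_if_investigated, p_investigated):
    """Expected profit on giving up 'Pays $stake if C87 investigated' for
    'Pays $payout if C87 investigated and found stinky'."""
    win = p_investigated * p_stinky_if_investigated * payout
    cost = p_investigated * stake  # the stake is only lost if C87 is chosen
    return win - cost

print(expected_profit(stake=3, payout=100,
                      p_stinky_if_investigated=0.05,  # your belief
                      p_investigated=0.10))           # assumed choice chance
# 0.2 = 0.10 * (0.05 * 100 - 3)
```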

So you could reuse the same $3 to weigh in on the chances of stinkiness in every possible case from the set of possible cases. And not only could you bet for and against particular cases, but you could bet on whole categories of cases. For example, you might bet on the average stinkiness of men, or people older than 60, or people born in Virginia.

To get people to bet on all possible cases C, there needs to be at least some chance of picking every case C in the set of possible cases. But these choice chances do not need to be equal, and they can even depend on the market prices. The random process that picks a case to investigate could set the choice chance to be a strongly increasing function of the market stinkiness chance of each case. As a result, the overall chance of the investigation finding stink could be far above the average market chance across the cases C, and it might even be close to the maximum stinkiness chance.
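Here is a minimal Python sketch of one such choice rule, with choice chances proportional to a power of each case's market chance; the exponent is an illustrative assumption.

```python
def choice_chances(market_chances, sharpness=3.0):
    """Choice chance proportional to (market stinkiness chance) ** sharpness."""
    weights = {c: p ** sharpness for c, p in market_chances.items()}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

markets = {"C1": 0.02, "C2": 0.05, "C3": 0.40}
chances = choice_chances(markets)
overall = sum(chances[c] * markets[c] for c in markets)
print(chances)  # nearly all weight on C3
print(overall)  # ~0.40, far above the 0.157 average market chance
```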

So far I’ve described a simple version of villain markets, but many variations are possible. For example, the investigation verdict might choose from several possible levels of stink or villainy. If the investigation could look at several possible areas A, but would have to choose one area from the start, then we might have markets trading assets like “Pays $100 if stink found, and area A of case C is investigated.” The markets would now estimate a chance of stink for each area and case combination, and the random process for choosing cases and areas could depend on the market stinkiness chance of each such combination.

Imagine that a continuing investigative resource were available. For example, a reporter could be assigned each month to a new case and area. A new set of markets could be started again each month over the same set of cases. If an automated market maker were set up to encourage trading in these markets, it could be started each month at the chances in the previous month’s markets just before the randomization was announced.

Once some villain markets had been demonstrated to give well-calibrated market chances, other official bodies who investigate villainy might rightly feel some pressure to take the market stinkiness chances into account when choosing what cases to investigate. Eventually villain markets might become our standard method for allocating investigation resources for uncovering stinking villainy. Which might just make for a much less stinky world.


More Prediction Market Criticism

Back in August I commented on a paper by Mike Thicke that criticized prediction markets:

With each of his reasons, Thicke compares prediction markets to some ideal of perfection, instead of to the actual current institutions it is intended to supplement.

Now Saana Jukola and Henrik Roeland Visser weigh in:

We largely agree on the worry about inaccuracy. .. An alternative worry, which Thicke does not elaborate on, is the fact that peer review .. is also valued for its deliberative nature, which allows it to provide reasons to those affected by the decisions made in research funding or the use of scientific knowledge in politics. .. By pointing out defects and weaknesses in manuscripts or proposals, and by suggesting new ways of approaching the phenomena of interest, peer reviewers are expected to help authors improve the quality of their work. .. peer review .. guards against the biases and blind spots that individual researchers may have. .. Criticism of evidence, methods and reasoning is essential to science, and necessary for arriving at trustworthy results. ..

The severity of the potential obstacles that Thicke and we identify depends on whether science prediction markets would replace traditional methods such as peer review, or would rather serve as addition or even complement to traditional methods. .. Prediction markets do not provide reasons in the way that peer review does, and if the only information that is available are probabilistic predictions, something essential to science is lost. ..

As someone who has often experienced the business end of peer review, I can assure you that peer review is far from the most useful channel of criticism for scientists today. And I know of no one who proposes forbidding scientists to talk with or criticize each other! Such talk and criticism was common long before peer review became common in science (which only happened in the last century), and if allowed it should remain common, even in the extreme case (which I have not advocated) where prediction markets were our only channel of research funding, and our only source of scientific consensus.

Jukola and Visser cite my blog post on how markets might pick a best qualitative explanation, but complain:

We could also imagine that there are cases in which science prediction markets are used to select the right answer or at least narrow down the range of alternatives, after which a qualitative report is produced which provides a justification of the chosen answer(s). Perhaps it is possible to infer from trading behavior which investors possess the most reliable information, a possibility explored by Hanson. Contrary to Hanson, we are skeptical of the viability of this strategy. Firstly, the problem of the underdetermination of theory by data suggests that different competing justifications might be compatible with the observed trading behavior. Secondly, such justifications would be post-hoc rationalizations, which sound plausible but might lack power to discriminate among alternative predictions.

Again with comparing an alternative to perfection, and ignoring how existing institutions can also fail such a perfection standard. The underdetermination of theory by data, and a temptation toward post-hoc rationalization, can exist in all other institutions one might use to elicit explanations. Jukola and Visser make no attempt to argue that prediction markets do worse by such criteria.


Markets That Explain, Via Markets To Pick A Best

I recently heard someone say “A disadvantage of prediction markets is that they don’t explain their estimates.” I responded: “But why couldn’t they?” That feature may cost you more, and it hasn’t been explored much in research or development. But I can see how to do it; in this post, I’ll outline a concept.

Previously, I’ve worked on a type of combinatorial prediction market built on a Bayes-Net structure. And there are standard ways to use such a structure to “explain” the estimates of any one variable in terms of the estimates of other variables. So obviously one could just apply those methods directly to get explanations for particular estimates in Bayes-Net based prediction markets. But I suspect that many would see such explanations as inadequate.

Here I’m instead going to try to solve the explanation problem by solving a more general problem: how to cheaply get a single good thing, if you have access to many people willing to submit and evaluate distinguishable things, and you have access to at least one possibly expensive judge who can rank these things. With access to this general pick-a-best mechanism, you can just ask people to submit explanations of a market estimate, and then deliver a single best explanation that you expect to be rated highly by your judge.

In more detail, you need five things:

  1. a prize Z you can pay to whomever submits the winning item,
  2. a community of people willing to submit candidate items to be evaluated for this prize, and to post bonds in the amount B supporting their submissions,
  3. an expensive (cost J) and trustworthy “gold standard” judge who tends, with some chance of error, to pick the “better” of two submitted items,
  4. a community of people who think that they can guess on average how the judge will rate items, with some of these people being right about this belief, and
  5. a costly (amount B) and only mildly error-prone way to decide if one submission is overly derivative of another.

With these five things, you can get a pretty good thing if you pay Z+J. The more Z you offer, the better will be your good thing. Here is the procedure. First, anyone in a large community may submit candidates c, if they post a bond B for each submission. Each candidate c is publicly posted as it becomes available.

A prediction market is open on all candidates submitted so far, with assets of the form “Pays $1 if c wins.” We somehow define prices p_c for such assets which satisfy 1 = p_Y + Σ_c p_c, where p_Y is the price of the asset “The winner is not yet submitted.” Submissions are not accepted after some deadline, and at that point I recommend the candidate c with the highest price p_c; that will be a good choice. But to make it a good choice, the procedure has to continue.

A time is chosen randomly from a final time window (such as a final day) after the deadline. We use the market prices p_c at that random time to pick a pair of candidates to show the judge. We draw twice randomly (with replacement), using the price p_c as the random chance of picking each c. The judge then picks a single tentative winning candidate w out of this pair.
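A minimal Python sketch of this draw, assuming the prices have been renormalized to exclude p_Y:

```python
import random

def pick_pair(prices):
    """prices: {candidate: p_c} at the randomly chosen time, p_Y excluded.
    Draw two candidates with replacement, weighted by price."""
    candidates = list(prices)
    total = sum(prices.values())
    weights = [prices[c] / total for c in candidates]
    return random.choices(candidates, weights=weights, k=2)

prices = {"c1": 0.50, "c2": 0.30, "c3": 0.15}  # p_Y = 0.05 excluded
print(pick_pair(prices))  # e.g. ['c1', 'c2']; the judge picks one as winner w
```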

Anyone who submitted a candidate before w can challenge it within a limited challenge time window, claiming that the tentative winner w is overly derivative of their earlier submission e. An amount B is then spent to judge if w is derivative of e. If w is not judged derivative, then the challenger forfeits their bond B, and w remains the tentative winner. If w is judged derivative, then the tentative winner forfeits their bond B, and the challenger becomes a new tentative winner. We need potential challengers to expect a less than B/Z chance of a mistaken judgement regarding something being derivative.

Once all challenges are resolved, the tentative winner becomes the official winner, the person who submitted it is given a large prize Z, and prediction market betting assets are paid off. The end.

This process can easily be generalized in many ways. There could be more than one judge, each judge could be given more than two items to rank, the prediction markets could be subsidized, the chances of picking candidates c to show judges might be non-linear in market prices p_c, and when setting such chances prices could be averaged over a time period. If p_Y is not zero when choosing candidates to evaluate, the prices p_c could be renormalized. We might add prediction markets in whether any given challenge would be successful, and allow submissions to be withdrawn before a formal challenge is made.

Now I haven’t proven a theorem to you that this all works well, but I’m pretty sure that it does. By offering a prize for submissions, and allowing bets on which submissions will win, you need only make one expensive judgement between a pair of items, and have access to an expensive way to decide if one submission is overly derivative of another.

I suspect this mechanism may be superior to many of the systems we now use to choose winners. Many existing systems frequently invoke low quality judges, instead of less frequently invoking higher quality judges. I suspect that market estimates of high quality judgements may often be better than direct application of low quality judgements.


Dealism, Futarchy, and Hypocrisy

Many people analyze and discuss the policies that might be chosen by organizations such as governments, charities, clubs, and firms. We economists have a standard set of tools to help with such analysis, and in many contexts a good economist can use such tools to recommend particular policy options. However, many have criticized these economic tools as representing overly naive and simplistic theories of morality. In response I’ve said: policy conversations don’t have to be about morality. Let me explain.

A great many people presume that policy conversations are of course mainly about what actions and outcomes are morally better; which actions do we most admire and approve of ethically? If you accept this framing, and if you see human morality as complex, then it is reasonable to be wary of mathematical frameworks for policy analysis; any analysis of morality simple enough to be put into math could lead to quite misleading conclusions. One can point to many factors, given little attention by economists, but which are often considered relevant for moral analysis.

However, we don’t have to see policy conversations as being mainly about morality. We can instead look at them as being more about people trying to get what they want, and using shared advisors to help. We economists make great use of the concept of “revealed preference”; we infer what people want from what they do, and we expect people to continue to act to get what they want. Part of what people want is to be moral, and to be seen as moral. But people also want other things, and sometimes they make tradeoffs, choosing to get less morality and more of these other things. Continue reading "Dealism, Futarchy, and Hypocrisy" »


Prediction Markets Update

Prediction markets continue to offer great potential to improve society at many levels. Their greatest promise lies in helping organizations to better aggregate info to enable better key decisions. However, while such markets have consistently performed well in terms of cost, accuracy, ease of use, and user satisfaction, they have also tended to be politically disruptive – they often say things that embarrass powerful people, who get them killed. It is like putting a smart autist in the C-suite, someone who has lots of valuable info but is oblivious to the firm’s political landscape. Such an executive just wouldn’t last long, no matter how much they knew.

Like most promising innovations, prediction markets can’t realize their potential until they have been honed and evaluated in a set of increasingly substantial and challenging trials. Abstract ideas must be married to the right sort of complementary details that allow them to function in specific contexts. For prediction markets, real organizations with concrete forecasting needs related to their key decisions need to experiment with different ways to field prediction markets, in search of arrangements that minimize political disruption. (If you know of an organization willing to put up with the disruption that such experimentation creates, I know of a patron willing to consider funding such experiments.)

Alas, few such experiments have been happening. So let me tell you what has been happening instead. Continue reading "Prediction Markets Update" »
