Tag Archives: Prediction Markets

‘Best’ Is About ‘Us’

Why don’t we express and follow clear principles on what sort of inequality is how bad? Last week I suggested that we want the flexibility to use inequality as an excuse to grab resources when grabbing is easy, but don’t want to obligate ourselves to grab when grabbing is hard.

It seems we prefer similar flexibility on who are the “best” students to admit to elite colleges. Not only do inside views of the admissions process seem to show careful efforts to avoid clarity on criteria; ordinary people also seem to support such flexibility:

Half [of whites surveyed] were simply asked to assign the importance they thought various criteria should have in the admissions system of the University of California. The other half received a different prompt, one that noted that Asian Americans make up more than twice as many undergraduates proportionally in the UC system as they do in the population of the state. When informed of that fact, the white adults favor a reduced role for grades and test scores in admissions—apparently based on high achievement levels by Asian-American applicants. (more)

Matt Yglesias agrees:

This is further evidence that there’s no stable underlying concept of “meritocracy” undergirding the system. But rather than dedicating the most resources to the “best” students and then fighting over who’s the best, we should be allocating resources to the people who are most likely to benefit from additional instructional resources.

But this seems an unlikely strategy for an elite coalition to use to entrench itself. If we were willing to admit the students who would benefit most by objective criteria like income or career success, we could use prediction markets. The complete lack of interest in this suggests that isn’t really the agenda.
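
Concretely, here is a minimal sketch of such a scheme. All class names, market prices, and outcome numbers below are hypothetical; a real system would need verifiable outcome measures, and trades in the branch not realized would be called off, so that each price estimates the expected outcome under that decision:

```python
# A minimal sketch of admissions via decision markets, not a real system.
# Each applicant gets two conditional markets on an objective outcome
# (say, income at age 40, in $1000s); trades are called off in the
# branch not realized.

from dataclasses import dataclass

@dataclass
class ApplicantMarkets:
    name: str
    outcome_if_admitted: float   # market price: E[outcome | admitted]
    outcome_if_rejected: float   # market price: E[outcome | rejected]

    @property
    def predicted_benefit(self) -> float:
        # The quantity Yglesias's rule cares about: the gain from admission.
        return self.outcome_if_admitted - self.outcome_if_rejected

def admit_by_predicted_benefit(pool, slots):
    """Admit the applicants whom speculators predict would benefit most."""
    ranked = sorted(pool, key=lambda a: a.predicted_benefit, reverse=True)
    return [a.name for a in ranked[:slots]]

pool = [
    ApplicantMarkets("A", 120.0, 110.0),  # strong applicant, small gain
    ApplicantMarkets("B",  95.0,  60.0),  # weaker applicant, large gain
    ApplicantMarkets("C", 150.0, 148.0),  # top scorer, tiny gain
]
print(admit_by_predicted_benefit(pool, slots=1))   # -> ['B']
```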

Much of law is like this, complex and ambiguous enough to let judges usually draw their desired conclusions. People often say the law needs this flexibility to adapt to complex local conditions. I’m skeptical.

Missing Measurements

Luke Muehlhauser quotes from Douglas Hubbard’s How to Measure Anything:

By 1999, I had completed … analysis on about 20 major [IT] investments. … Each of these business cases had 40 to 80 variables, such as initial development costs, adoption rate, productivity improvement, revenue growth, and so on. For each of these business cases, I ran a macro in Excel that computed the information value for each variable. … [and] I began to see this pattern:

  • The vast majority of variables had an information value of zero. …
  • The variables that had high information values were routinely those that the client had never measured…
  • The variables that clients [spent] the most time measuring were usually those with a very low (even zero) information value. …

Since then, I’ve applied this same test to another 40 projects, and… [I’ve] noticed the same phenomena arise in projects relating to research and development, military logistics, the environment, venture capital, and facilities expansion. (more)

In his book summary at Amazon, Hubbard seems to explain this sort of pattern in terms of misconceptions: read his book to fix the three key misconceptions that keep people from measuring stuff. But the above pattern seems hard to understand as mere random errors in guessing each variable’s info value or measurability.
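
Hubbard’s “information value” is the standard decision-analysis notion of the expected value of information: how much better your decision gets if you learn a variable before deciding. A minimal Monte Carlo sketch (with a made-up two-variable business case; all numbers are purely illustrative) shows how an uncertain variable can still have essentially zero information value, because knowing it never changes the decision:

```python
# A minimal Monte Carlo sketch of the kind of analysis Hubbard describes:
# simulate a business case, then estimate each variable's expected value
# of perfect information (EVPI). The payoff model and numbers are made up.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

adoption = rng.uniform(0.2, 0.9, N)   # adoption rate (uncertain)
dev_cost = rng.uniform(0.8, 1.2, N)   # development cost, $M (uncertain)
payoff = 3.0 * adoption - dev_cost    # toy net-value model, $M

# Baseline decision: invest only if expected payoff is positive.
baseline = max(payoff.mean(), 0.0)

def info_value(variable, payoff, baseline, buckets=20):
    """EVPI estimate: value of learning `variable` before deciding.

    Bucket the variable; within each bucket, decide invest/don't from
    that bucket's mean payoff; compare to deciding with no information."""
    edges = np.quantile(variable, np.linspace(0, 1, buckets + 1)[1:-1])
    ids = np.digitize(variable, edges)
    informed = sum(max(payoff[ids == b].mean(), 0.0) * np.mean(ids == b)
                   for b in np.unique(ids))
    return informed - baseline

print(info_value(adoption, payoff, baseline))  # positive: worth measuring
print(info_value(dev_cost, payoff, baseline))  # ~zero: not worth measuring
```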

In my experience trying to sell prediction markets to firms, I’ve noticed that when we suggest they make markets on the specific topics that seem to have the most info value, they usually express strong reluctance and even hostility. They choose instead to estimate safer things, less likely to disrupt the organization.

For example, the most dramatic successes of prediction markets, i.e., where correct market forecasts most differ from official forecasts, are for project deadlines. Yet even after hearing this, few orgs are interested in starting such markets, and those that do, and see dramatic success, usually shut them down and don’t do them again. One plausible explanation is that project managers want the option to say after a failed project “no one could have known about those problems.” Prediction markets instead create a clear record that people did in fact know.

But that is just one reason for one kind of example. It isn’t a general explanation for what seems to be an important general and quite lamentable trend. So why exactly do we spend the most to measure the variables that matter the least, and refuse to even measure the variables that matter most?

I’m On After Bill Gates

This Monday I’ll speak for 15 min on Prediction Engines at a Microsoft Faculty Summit. The summit is private, but selected talks and interviews will be streamed publicly. I’ll be in a public interview 13:30–14:00 PDT, right after Bill Gates’ keynote talk. My private talk is also right after Gates speaks, but in a breakout session; I probably won’t meet him.

Added: video here.

Why Do Bets Look Bad?

Most social worlds lack a norm of giving much extra respect to claims supported by offers to bet. This is a shame, because such norms would reduce insincere and untruthful claims, and so make for more accurate beliefs in listeners. But instead of advocating for change, in this post I wonder: why are such norms rare?

Yes, there are random elements in which groups end up with which norms, and yes, given a local norm that doesn’t respect bets, it looks weird to offer bets there. But in this post I’m looking more to explain which norms appear where, and less to explain who follows which norms.

Bets have been around for a long time, and by now most intellectuals understand them, and know that, all else equal, those who really believe more strongly are willing to bet more. So you might think it wouldn’t be that hard for a betting norm to get added on to other local norms and cultural factors: all else equal, respect bets as showing confidence. But if this happens, it must be counter-balanced by other effects, or bets wouldn’t be so rare. What are these other effects?

While info often gets overtly shared in casual conversation, most of that info doesn’t seem very useful.  I thus conclude that casual conversation isn’t mainly about overtly sharing info. So I assume the obvious alternative: casual conversation is mostly about signaling (which is covert or indirect info sharing). But still the puzzle remains: whatever else we signal via conversation, why don’t we typically expect a betting offer to signal overall-admirable confidence in a claim?

One obvious general hypothesis to consider here is that betting signals typically conflict with or interact with other signals. But which other signals, and how? In the rest of this post I explore a few bad-looking features that bets might signal:

  • Sincerity – In many subcultures it looks bad to care a lot about most any topic of casual conversation. Such passion suggests that you just don’t get the usual social functions of such conversations. Conversationalists ideally skip from topic to topic, showing off their wits, smarts, loyalties, and social connections, but otherwise caring little about the truth on particular topics. Most academic communities seem to have related norms. Offers to bet, in contrast, suggest you care too much about the truth on a particular topic. Most listeners don’t care if your claim is true, so aren’t interested in your confidence. Of course on some topics people are expected to care a lot, so this doesn’t explain fewer bets there.
  • Conflict – Many actions we take are seen as signals of cooperation or conflict. That is, our actions are seen as indicating that certain folks are our allies, and that certain other folks are our rivals or opponents. A bet offer can be seen as an overt declaration of conflict, and thus make one look overly confrontational, especially within a group that sees itself as mainly made up of allies. We often try to portray any apparent conflict in casual conversations as just misunderstandings or sharing useful info, but bets are harder to portray that way.
  • Provinciality – Bets are most common today in sports, and sports arguments and bets seem to be mostly about showing loyalty to particular teams. In sports, confrontation over such loyalties is more accepted and expected. Offering to bet on a team is seen as much like offering to have a fist fight to defend your team’s honor. Because of this association with regional loyalties in sports, offers to bet outside of sports are also seen as affirmations of loyalties, and thus as conflicting with norms of a universal intellectual community.
  • Imprudence – Some folks are impulsive and spend available resources on whatever suits their temporary fancy, until they just run out. Others are careful to limit their spending via simple self-control rules on how much they may spend, how often, on what kinds of things. Unless one is in the habit of betting often from a standard limited betting budget, bets look like unusual impulsive spending. Bettors seem insufficiently in control of their impulsive urges to show sincerity, create conflict, or signal loyalties.
  • Disloyalty – In many conversations it is only ok to quote as sources or supports people outside the conversation who are “one of us.” Since betting markets must have participants on both sides of a question, they will have participants who are not part of “us”. Thus quoting betting market odds in support of a claim inappropriately brings “them” in to “our” conversation. Inviting insiders to go bet in those markets also invites some of “us” to interact more with “them”, which also seems disloyal.
  • Dominance – In conversation we often pretend to support an egalitarian norm where the wealth and social status of speakers is irrelevant to which claims are accepted or rejected by the audience. Offers to bet conflict with that norm, by seeming to favor those with more money to bet. Somehow, being smarter or more articulate, or having more free time to read, are considered acceptable bases for conversational inequities. While richer folks could be expected to bet more, the conversation would have to explicitly acknowledge that they are richer, which is rude.
  • Greed – We often try to give the impression that we talk mainly to benefit our listeners. This is a sacred activity. Offering to bet money makes it explicit that we seek personal gains, which is profane. This is why folks sometimes offer to bet for charity, with the money going to the winner’s favorite charity. But that looks suspiciously like bringing profane money-lenders into a sacred temple.

Last week I said bets can function much like arguments that offer reasons for a conclusion. If so, how do arguments avoid looking bad in these ways? Since the cost to offer an argument is much less than the cost to offer a bet, arguments seem less imprudent and less a show of sincerity. Since the benefits from winning arguments aren’t explicit, one can pretend to be altruistic in giving them. Also, you can pretend an argument is not directed at any particular listener, and so is not a bid for conflict. Since most arguments today are not about sports, arguments are less likely to evoke the image of sports-style regional loyalties. And as long as you don’t quote outsiders, arguments seem less an invitation to invoke or interact with outsiders.

If we are to find a way to make bets more popular, we’ll need to find ways to let people make bets without sending these bad-looking signals.

Added: It is suspicious that I didn’t do this analysis much earlier. This is plausibly due to the usual corrupting effect of advocacy on analysis; because I advocated betting, I analyzed it insufficiently.

Suspecting Truth-Hiders

Tyler against bets:

On my side of the debate I claim a long history of successful science, corporate innovation, journalism, and also commentary of many kinds, mostly not based on personal small bets, sometimes banning them, and relying on various other forms of personal stakes in ideas, and passing various market tests repeatedly. I don’t see comparable evidence on the other side of this debate, which I interpret as a preference for witnessing comeuppance for its own sake (read Robin’s framing or Alex’s repeated use of the mood-affiliated word “bullshit” to describe both scientific communication and reporting). The quest for comeuppance is a misallocation of personal resources. (more)

My translation:

Most existing social institutions tolerate lots of hypocrisy, and often don’t try to expose people who say things they don’t believe. When competing with alternatives, the disadvantages such institutions suffer from letting people believe more falsehoods are likely outweighed by other advantages. People who feel glee from seeing the comeuppance of bullshitting hypocrites don’t appreciate the advantages of hypocrisy.

Yes, existing institutions deserve some deference, but surely we don’t believe our institutions are the best of all possible worlds. And surely one of the most suspicious signs that an existing institution isn’t the best possible is when it seems to discourage truth-telling, especially about itself. Yes, it is possible that such squelching is all for the best, but isn’t it just as likely that some folks are trying to hide things for private, not social, gains? Isn’t this a major reason we often rightly mood-affiliate with those who gleefully expose bullshit?

For example, if you were inspecting a restaurant and they seemed to be trying to hide some things from your view, wouldn’t you suspect they were doing that for private gain, not to make the world a better place? If you were put in charge of a new organization and subordinates seemed to be trying to hide some budgets and activities from your view, wouldn’t you suspect that was also for private gain instead of to make your organization better? Same for if you were trying to rate the effectiveness of a charity or government agency, or evaluate a paper for a journal. The more that people and habits seemed to be trying to hide something and evade incentives for accuracy, the more suspicious you would rightly be that something inefficient was going on.

Now I agree that people do often avoid speaking uncomfortable truths, and coordinate to punish those who violate norms against such speaking. But we usually do this when we have a decent guess of what the truth we don’t want to hear actually is.

If it were just bad in general to encourage more accurate expressions of belief, then it seems pretty dangerous to let academics and bloggers collect status by speculating about the truth of various important things. If that is a good idea, why are more bets a bad idea? And in general, how can we judge well when to encourage accuracy and when to let the truth stay hidden, from the middle of a conversation where we know lots of accuracy has already been sacrificed for unknown reasons?

Bets Argue

Imagine a challenge:

You claim you strongly believe X and suggest that we should as well; what supporting arguments can you offer?

Imagine this response:

I won’t offer arguments, because the arguments I might offer now would not necessarily reveal my beliefs. Even all of the arguments I have ever expressed on the subject wouldn’t reveal my beliefs on that subject. Here’s why.

I might not believe the arguments I express, and I might know of many other arguments on the subject, both positive and negative, that I have not expressed. Arguments on other topics might be relevant to this topic, and I might have changed my mind since I expressed arguments. There are so many random and local frictions that influence which particular arguments people express on which particular subjects, and you agree I should retain enough privacy to not have to express all the arguments I know. Also, if I gave arguments now I’d probably feel more locked into that belief and be less willing to change it, and we agree that would be bad.

How therefore could you possibly be so naive as to think the arguments I might now express would reveal what I believe? And that is why I offer no supporting arguments for my claim.

Wouldn’t you feel this person was being unreasonably evasive? Wouldn’t this response suggest at least that he doesn’t in fact know of good supporting arguments for this belief? After all, even if many random factors influence what arguments you express when, and even if you may know of many more arguments than you express, still typically on average the more good supporting arguments you can offer, the more good supporting arguments you know, and the better supported your belief.

This is how I feel about folks like Tyler Cowen who say they feel little obligation to make or accept offers to bet in support of beliefs they express, or to think less of others who similarly refuse to bet on beliefs they express. (Adam Gurri links to ten posts on the subject here.)

Yes of course, due to limited options and large transaction costs most financial portfolios have only a crude relation to holder beliefs. And any one part of a portfolio can be misleading since it could be cancelled by other hidden parts. Even so, typically on average observers can reasonably infer that someone unwilling to publicly bet in support of their beliefs probably doesn’t really believe what they say as much as someone who does, and doesn’t know of as many good reasons to believe it.

It would be reasonable to point to other bets or investments and say “I’ve already made as many bets on this subject as I can handle.” It is also reasonable to say you are willing to bet if a clear verifiable claim can be worked out, but that you don’t see such a claim yet. It would further be reasonable to say that you don’t have strong beliefs on the subject, or that you aren’t interested in persuading others on it. But to just refuse to bet in general, even though you do express strong beliefs you try to persuade others to share, that does and should look bad.

Added 4July: In honor of Ashok Rao, more possible responses to the challenge:

A norm of thinking less of claims by those who offer fewer good supporting arguments is biased against people who talk slowly, are shy about speaking, or have bad memories or low intelligence. Also, by discouraging false claims we’d discourage innovation, and surely we don’t want that.

Bits Of Secrets

“It’s classified. I could tell you, but then I’d have to kill you.” Top Gun, 1986

Today, secrets are lumpy. You might know some info that would help you persuade someone of something, but reasonably fear that if you told them, they’d tell others, change their opinion on something else, or perhaps just get discouraged. Today, you can’t just tell them one implication of your secret. In the future, however, the ability to copy and erase minds (as in an em scenario) might make secrets much less lumpy – you could tell someone just one implication of a secret.

For example, suppose you wanted to convince an associate that they should not go to a certain party. Your reason is that one of their exes will attend the party. But if you told them that directly, they would then know that this ex is in town, is friendly with the party host, etc. You might just tell them to trust you, but what if they don’t?

Imagine you could just say to your associate “I could tell you why you shouldn’t go to the party, but then I’d have to kill you,” and they could reply “Prove it.” Both of your minds would then be copied and placed together into an isolated “box,” perhaps with access to some public or other info sources. Inside the box the copy of you would explain your reasons to the copy of them. When the conversation was done, the entire box would be erased, and the original two of you would just hear a single bit answer, “yes” or “no,” chosen by the copy of your associate.

Now, as usual, there are some complications. For example, the fact that you suggested using the box, as opposed to just revealing your secrets, could be a useful clue to them, as could the fact that you were willing to spend resources to use the box. If you requested access to unusual sources while in the box, that might give further clues.

If you let the box return more detail about their degree of confidence in their conclusion, or about how long the conversation took, your associate might use some of those extra bits to encode more of your secrets. And if the info sources accessed by those in the box used simple caching, outsiders might see which sources were easier to access afterward, and use that to infer which sources had been accessed from in the box, which might encode more relevant info. So you’d probably want to be careful to run the box for a standard time period, with unobservable access to standard wide sources, and to return only a one-bit conclusion.

Inside the box, you might just reveal that you had committed in some way to hurt your associate if they didn’t return the answer you wanted. To avoid this problem, it might be usual practice to have an independent (and hard to hurt) judge also join you in the box, with the power to make the box return “void” if they suspected such threats were being made. To reduce the cost of using these boxes, you might have prediction markets on what such boxes would return if made, but only actually make them a small percentage of the time.
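
To make the protocol concrete, here is a toy sketch in code. Mind copying is of course science fiction, and every class, rule, and number below is a hypothetical stand-in, but the sketch makes the intended information flow explicit: many bits go in, one bit comes out:

```python
# A toy sketch of the "box" protocol described above, with the standard
# run length, the independent judge, and the one-bit return.

import copy

STANDARD_STEPS = 3   # a fixed standard run length, per the text

class Mind:
    """A hypothetical stand-in for an emulated mind."""
    def __init__(self, name, secrets=(), belief=0.5):
        self.name = name
        self.secrets = list(secrets)  # private reasons, never output directly
        self.belief = belief          # P("I should skip the party")

    def explain_one_reason(self):
        return self.secrets.pop(0) if self.secrets else None

    def update_on(self, reason):
        if reason is not None:
            self.belief = min(1.0, self.belief + 0.3)  # toy belief update

class Judge:
    """Independent judge with the power to void runs containing threats."""
    def detects_threat(self, reason):
        return isinstance(reason, str) and "or else" in reason

def run_box(you, associate, judge):
    """Copy both minds into an isolated box; erase it; return one bit."""
    you_copy, assoc_copy = copy.deepcopy(you), copy.deepcopy(associate)
    for _ in range(STANDARD_STEPS):
        reason = you_copy.explain_one_reason()
        if judge.detects_threat(reason):
            return "void"             # judge voids the run on a threat
        assoc_copy.update_on(reason)
    # Both copies are discarded here; only this single bit leaves the box.
    return "yes" if assoc_copy.belief > 0.5 else "no"

you = Mind("you", secrets=["your ex will be at the party"])
friend = Mind("associate", belief=0.3)
print(run_box(you, friend, Judge()))  # -> 'yes'; the secret itself stays hidden
```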

There may be further complications I haven’t thought of, but at the moment I’m more interested in how this ability might be used. In the world around you, who would be tempted to prove what this way?

For example, would you prove to work associates that your proposed compromise is politically sound, without revealing your private political info about who would support or oppose it? Prove to investigators that you do not hold stolen items by letting them look through your private stores? Prove to a date you’ve been thinking lots about them, by letting them watch a video of your recent activities? Prove to a jury of voters that you really just want to help them, by letting them watch all of your activities for the last few months? What else?

In general, this would seem to better enable self-deception. You could actually not know things anywhere in your head, but still act on them when they mattered a lot.

High Road Doubts

According to the intellectual norms that I learned when young, there is a high road and a low road for proposing reforms. The low road is populist and pandering – you ignore critics and try anything to get folks who could do something excited about your idea – sex appeal, group loyalties, demonizing opponents, overselling gains, whatever it takes. The high road is elitist and analytical – you carefully write up arguments, ideally with math models, randomized trials, and stat analysis, and present them to elites for evaluation.

Academics usually see the low road as deceptive – by ignoring critics and refusing to present careful arguments for evaluation, you admit your arguments are weak. Low road advocates counter that academic models and trials are often quite distant from actual applications — what really matters is that people try and evolve ideas in realistic contexts, and see how they feel about them there.

Twenty-five years ago, as a thirty-year-old wondering how to devote my life to pushing prediction markets, a mentor I respected basically suggested the low road – I should write a popular book to get lots of people excited. Instead I mostly chose the high road, going back to school to get a Ph.D., doing math models, lab experiments, etc.

Today I have reached a notable milestone along that road; my paper arguing for futarchy, a form of governance based on decision markets, is now published in the leading academic journal in the field of political philosophy: the Journal of Political Philosophy. This would be the abstract, if that journal had them:

Shall We Vote on Values, But Bet on Beliefs?

Democracies often fail to aggregate information, while speculative markets excel at this task. I consider a new form of governance, wherein voters would say what we want, but speculators would say how to get it. Elected representatives would oversee the after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare. Those who recommend policies that regressions suggest will raise GDP should be willing to endorse similar market advice. Using a qualitative engineering-style approach, I consider twenty-five objections, and present a somewhat detailed design intended to address most of these objections.
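
The core decision rule can be sketched in a few lines. In this toy version (prices and the noise margin are hypothetical, and the paper’s many design details are omitted), each proposed policy gets two conditional markets on after-the-fact measured national welfare, with trades called off in the branch not taken, so each price estimates expected welfare under that decision:

```python
# A minimal sketch of futarchy's core decision rule, not the paper's
# full design. All prices and the margin here are hypothetical.

def adopt_policy(welfare_if_adopt, welfare_if_reject, margin=0.005):
    """Adopt iff speculators price welfare clearly higher under adoption.

    `margin` is a hypothetical buffer against noise in thin markets."""
    return welfare_if_adopt > welfare_if_reject + margin

# Voters define and later measure welfare (values);
# speculators set these conditional prices (beliefs).
print(adopt_policy(welfare_if_adopt=1.042,
                   welfare_if_reject=1.025))   # -> True: adopt the policy
```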

Of course I might do even better someday, perhaps publishing top journal articles on math models or lab experiments. Even so, this seems a good time to ask: is the high road really better?

I have doubts. What futarchy and decision markets mainly need, and have long needed, are organizations to try them out on small scales, to work out the little details that general ideas need for practical application. Small scale successes might then lead to larger trials, perhaps eventually at very large scales. And I doubt that publishing this paper, or further top journal papers, will do much to induce such trials.

A pandering popular book might do much more, if it actually got people to try the idea. They wouldn’t have to do it for the right reasons, by correctly evaluating pro and con arguments. In fact, it would be fine if the book gave most folks much worse estimates, as long as it induced a thicker high tail of enthusiasm to actually do something. A better idea for reform, with a big pool of rational advocates, might add much less value to the world than a worse idea for reform matched with fewer, less rational advocates who are willing to actually try and evolve their idea.

After all, beliefs mainly matter for inducing relevant actions. The high road might produce more accurate beliefs, but the low road may often get more things done.

US Record All Calls?

Many claim that the US Government saves recordings of all the phone calls, emails, etc. that it can get:

Wednesday night, [CNN's] Burnett interviewed Tim Clemente, a former FBI counterterrorism agent, about whether the FBI would be able to discover the contents of past telephone conversations between [terrorist Tamerlan Tsarnaev and his wife]. He quite clearly insisted that they could. … On Thursday night, Clemente again appeared on CNN, this time with host Carol Costello. … He reiterated what he said the night before but added expressly that “all digital communications in the past” are recorded and stored. …

Former AT&T engineer Mark Klein revealed that AT&T and other telecoms had built a special network that allowed the National Security Agency full and unfettered access to data about the telephone calls and the content of email communications for all of their customers. … His amazing revelations were mostly ignored and, when Congress retroactively immunized the nation’s telecom giants for their participation in the illegal Bush spying programs, Klein’s claims (by design) were prevented from being adjudicated in court.

That every single telephone call is recorded and stored would also explain this extraordinary revelation by the Washington Post in 2010:

Every day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications.

Bruce Schneier is skeptical, however:

I don’t believe that the NSA could save every domestic phone call, not at this time. Possibly after the Utah data center is finished, but not now.

This seems to me a great place for a prediction market. It seems quite likely that the truth will be revealed within a half century, and if this claim is true, hundreds of people must know it, some of whom might be tempted to make a little extra money via anonymous bets.

False Flag Forecasts

As admitted by the U.S. government, recently declassified documents show that in the 1960s, the American Joint Chiefs of Staff signed off on a plan to blow up American airplanes (using an elaborate plan involving the switching of airplanes), and also to commit terrorist acts on American soil, and then to blame it on the Cubans in order to justify an invasion of Cuba. (more; see also)

One in seven people are convinced that the U.S. government was involved in a conspiracy to stage the September 11 attacks which killed nearly 3,000 people. A survey, which interviewed 1,000 people in the UK and the same number in the U.S., found that 14 per cent of Britons and 15 per cent of Americans think the past administration was involved in the tragedy. (more from ’11)

More from ’08:

[Chart: 2008 international poll results on who was behind the 9/11 attacks]

Such conspiracies aren’t always, or even usually, uncovered eventually, but such uncovering does happen often enough to make it seem socially useful to have betting markets on such questions.

Yes, such markets would have to be long term, and might need to be subsidized. And they might need to be housed in a reasonably distant and independent nation, like New Zealand.
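
One standard way to structure such a subsidy is an automated market maker based on a logarithmic market scoring rule (LMSR), where the sponsor chooses a liquidity parameter b up front and can lose at most b·ln 2 on a binary question, no matter how trading goes. A sketch, with illustrative numbers:

```python
# A sketch of a subsidized LMSR market maker on a binary question, e.g.
# "declassified records will show the conspiracy claim true by 2063".
# The liquidity parameter b and all trades below are illustrative.

import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes, q_no, b):
    """Instantaneous price (probability estimate) of the YES outcome."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

b = 1000.0
print(round(b * math.log(2), 2))          # sponsor's max loss: ~693.15
print(lmsr_price_yes(0, 0, b))            # 0.5 at launch

# A trader buys 500 YES shares, paying the cost-function difference.
payment = lmsr_cost(500, 0, b) - lmsr_cost(0, 0, b)
print(round(payment, 2))                  # trader's payment: ~281.09
print(round(lmsr_price_yes(500, 0, b), 3))  # price rises to ~0.622
```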

But such market odds might offer an independent and reasonably reliable source to which doubters could turn when they weren’t sure how much weight to put on conspiracy theories vs. their skeptics. If you doubted who was behind the 9-11 attacks, wouldn’t it be great if you could turn to a betting market to better calibrate your doubts?
