Bets Argue

Imagine a challenge:

You claim you strongly believe X and suggest that we should as well; what supporting arguments can you offer?

Imagine this response:

I won’t offer arguments, because the arguments I might offer now would not necessarily reveal my beliefs. Even all of the arguments I have ever expressed on the subject wouldn’t reveal my beliefs on that subject. Here’s why.

I might not believe the arguments I express, and I might know of many other arguments on the subject, both positive and negative, that I have not expressed. Arguments on other topics might be relevant for this topic, and I might have changed my mind since I expressed arguments. There are so many random and local frictions that influence which particular arguments people express on which particular subjects, and you agree I should retain enough privacy to not have to express all the arguments I know. Also, if I gave arguments now I’d probably feel more locked into that belief and be less willing to change it, and we agree that would be bad.

How therefore could you possibly be so naive as to think the arguments I might now express would reveal what I believe? And that is why I offer no supporting arguments for my claim.

Wouldn’t you feel this person was being unreasonably evasive? Wouldn’t this response suggest at least that he doesn’t in fact know of good supporting arguments for this belief? After all, even if many random factors influence what arguments you express when, and even if you may know of many more arguments than you express, still typically on average the more good supporting arguments you can offer, the more good supporting arguments you know, and the better supported your belief.

This is how I feel about folks like Tyler Cowen who say they feel little obligation to make or accept offers to bet in support of beliefs they express, and who don’t think less of others who similarly refuse to bet on beliefs they express. (Adam Gurri links to ten posts on the subject here.)

Yes of course, due to limited options and large transaction costs most financial portfolios have only a crude relation to holder beliefs. And any one part of a portfolio can be misleading since it could be cancelled by other hidden parts. Even so, typically on average observers can reasonably infer that someone unwilling to publicly bet in support of their beliefs probably doesn’t really believe what they say as much as someone who does, and doesn’t know of as many good reasons to believe it.

It would be reasonable to point to other bets or investments and say “I’ve already made as many bets on this subject as I can handle.” It is also reasonable to say you are willing to bet if a clear verifiable claim can be worked out, but that you don’t see such a claim yet. It would further be reasonable to say that you don’t have strong beliefs on the subject, or that you aren’t interested in persuading others on it. But to just refuse to bet in general, even though you do express strong beliefs you try to persuade others to share, that does and should look bad.

Added 4 July: In honor of Ashok Rao, more possible responses to the challenge:

A norm of thinking less of claims by those who offer fewer good supporting arguments is biased against people who talk slowly, are shy of speaking, or have bad memories or low intelligence. Also, by discouraging false claims we’d discourage innovation, and surely we don’t want that.

  • Intellectuals shouldn’t bet their beliefs because we shouldn’t be interested in their beliefs. What they have to contribute is their opinions, which don’t rationally control betting. (Opinion in my vernacular is belief with the Aumann Theorem factored out. See “The distinct functions of belief and opinion.”)

    Intellectuals who think they should “really believe” what they publicly argue have a puerile attitude.



    I can think of many reasons not to bet:

    1) I don’t trust the arbiter who ultimately decides who’s right

    2) I don’t expect to get my money back within my lifetime

    3) I don’t have a lot of money to spare right now

    4) I expect the reward to be so small that I’d be better off if I invested the money somewhere else

    5) I expect my opinion to be the most probable answer, but to still have a less than 50% probability of being right

    6) I do not wish to participate in prediction markets because that may lead to authorities being tempted to base decisions on the state of unresolved prediction markets, opening up yet another way for the rich to manipulate our lives (have fun trying to outbid the Koch Brothers): as soon as people start basing decisions on the state of unresolved prediction markets they invite massive manipulation

    7) I do not want to create an impression among the public that science is about gambling or a democracy

    8) I do not feel the need to gloat over being right and making others (who might be good researchers that had good arguments) poorer by doing so

    9) I do not believe past prediction market results are useful predictors: I may win three times in a row, but that would mostly be coincidence; it would be tempting, but dangerous, to start thinking otherwise (as Wall Street or any other gambling addict keeps showing us)

    10) combining 6) and 9) leads me to conclude there’s no point to prediction markets

    • Matthew Graves

      “5) I expect my opinion to be the most probable answer, but to still have a less than 50% probability of being right”

      That’s what odds are for; you don’t need to wager even amounts. (My last public bet was my $20 if something was a hoax against someone else’s $500 if it wasn’t; I thought it was more than ~4% likely to be true but less than 50% likely to be true, and they thought it was less than ~4% likely to be true.)
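      A quick arithmetic sketch of how such uneven stakes encode a probability threshold (the $20 and $500 figures come from the comment above; the function name is mine):

```python
def implied_probability(my_stake, their_stake):
    """Break-even probability for a bet at uneven stakes.

    If I risk `my_stake` to win `their_stake` on X being true, the bet
    has positive expected value for me exactly when my probability for X
    exceeds my_stake / (my_stake + their_stake).
    """
    return my_stake / (my_stake + their_stake)

# The $20-vs-$500 hoax bet: risking $20 to win $500 breaks even at
# 20/520, i.e. about 3.85% -- the "~4%" threshold mentioned above.
print(round(implied_probability(20, 500), 4))  # -> 0.0385
```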

      • IMASBA

        Such coordination is not possible in a prediction market (and if forced it would mean the arbiter is partisan).

      • Matthew Graves

        Huh? That would correspond to them selling 50 contracts that pay out $10 if X, and me buying them for $0.40 apiece.

        In a normal prediction market, the price floats as people offer contracts to pay out if X and bid on those contracts. Like any asset, I would buy it if its price goes below a certain amount, and sell it if its price goes above a certain amount.
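        The correspondence can be checked with a few lines (the 50-contract, $0.40, and $10 figures are from the comment above; variable names are mine):

```python
# One contract costs $0.40 and pays $10.00 if X occurs.
price, payout = 0.40, 10.0
n = 50

# The buyer's break-even belief in X is price / payout, i.e. ~4%.
p_breakeven = price / payout

# Net position for the buyer of 50 contracts: $20 at risk, $480 net gain
# if X occurs -- essentially the $20-vs-$500 bet discussed above.
cost = n * price
net_if_x = n * payout - cost

print(round(p_breakeven, 2), cost, net_if_x)  # -> 0.04 20.0 480.0
```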

      • IMASBA

        “That would correspond to them selling 50 contracts that pay out $10 if X, and me buying them for $0.40 apiece.”

        Yeah, of course, my mistake, should’ve thought a few seconds longer about that.

      • IMASBA

        I assumed rewards would be entirely funded from the pot (so they couldn’t sell contracts promising a fixed reward), but maybe I shouldn’t have assumed that.

    • Ari


      Do you do research on prediction markets? Do you read the Journal of Prediction Markets? What makes you so confident about having opinions about them over those who do? What makes you any different from the random guys who criticize climate science without ever having even read a textbook about climate science?!

      None of this is new information, just the same people parroting the same simple arguments (#6 is a great example), which add nothing new to the debate.

      I’ve seen this elsewhere, where I have “expert” status (it had nothing to do with politics or money), and the ignorance of the people who know nothing is just astounding, as became apparent later on. Having a big mouth but no responsibility is a recipe for disaster.

      • IMASBA

        Prediction markets are hardly an exact science (climate science is), and you don’t need complicated math or terminology to understand them. Prediction markets weren’t born out of the discovery of a natural phenomenon; they’re simply an extension of (American) free market worship (there are some pretty explicit assumptions about human psychology and sociology involved, probably of the same quality as the assumption that a CEO who gets paid $3 million will work twice as hard for $6 million). My point 6) may not add anything new, but neither does Robin’s advocacy of prediction markets, and he did write about future governments basing decisions on the state of unresolved prediction markets.

        Finally, prediction markets are still very much experimental, and the questions of their usefulness and effectiveness are far from resolved, even among the experts.

      • IMASBA

        Also, what would you like to hear from us non-experts on this blog, “yes, mylord, thank you for sharing your wisdom mylord, us peasants bask in your eternal light mylord”? You can’t just snub us for not being experts.

      • Daniel Carrier

        Those arguments may be common, but Robin Hanson didn’t state them, so a large number of people would be reading this and not have seen the arguments yet. In addition, you don’t want to forget about an argument just because it has been used before. It’s important to remember the obvious.

      • IMASBA

        “you don’t want to forget about an argument just because it has been used before. It’s important to remember the obvious.”

        Yes, you can’t lose sight of the original assumptions, constraints and parameters of an idea. If you do, you risk overextending the idea, and then when things go south you can’t figure out why.

        Newtonian mechanics works great in daily life, so you might forget that it doesn’t apply at extreme speeds if you leave that disclaimer out for convenience. Then someday you find yourself on a spaceship at relativistic speeds and crash into a star and die, because you used Newtonian mechanics to calculate the ship’s trajectory.

        Economics is full of ideas that only hold under very specific circumstances. Mindlessly applying these ideas to all aspects of life without listening to criticism anymore is dangerous.

    • “have fun trying to outbid the Koch Brothers”

      Great fun it would be! The more money they throw into corrupting the result, the greater the gain from betting against them. Presumably, people don’t need to try to “outbid” Koch: they merely need to try to make money at his expense, and he’ll undo himself–as long as the pot’s large enough relative to the resources Koch is willing to expend.

      On the other hand, evading these conditions may supply ample room for possible manipulation. Arms race anyone?

      • IMASBA

        “Great fun it would be! The more money they throw into corrupting the result, the greater the gain from betting against them. Presumably, people don’t need to try to “outbid” Koch: they merely need to try to make money at his expense, and he’ll undo himself–as long as the pot’s large enough relative to the resources Koch is willing to expend.”

        The Koch Brothers wouldn’t be interested in the result, they would be interested in influencing decision making based on the state of unresolved prediction markets (like Robin once suggested for governments). Also the betting could be anonymous so you wouldn’t know for sure if a large influx of betting money on one side was genuine or manipulation. And of course we would see the rise of traders who bet solely on the belief that hugely uneven states are the result of manipulation, and that is in itself a form of manipulation since these people would only be looking at the current state, not the arguments behind the opposing positions. As with the stock market the state of the market would less and less represent the true value of different positions and more and more the moods and manipulations of crafty traders and people who “bet against the system”.

  • A bet might show what you believe. But I am not interested in what you believe, I am interested in the truth. The two are not the same thing. It is like an argument from popularity.

    Many people believe in false things, and some would be willing to bet on it. But what they can’t do is give convincing arguments. Should I trust them based on the amount of money they are willing to put on the line? Or based on whether their reasons for the believe make sense?

    • Jess Riedel

      One way to look at it might be that there are two types of disagreement situations (which bleed into each other): (1) Two people have access to some objective-and-transferable bits of info (including both data and arguments), and the sets of info differ between the two persons. Therefore, exchange of arguments and/or data is sufficient to resolve their disagreement. (2) Two people have access to the same objective and transferable info, but each also have access to differing non-transferable info. Some non-transferable info might be objective (i.e. data that is too large to easily transfer, especially when time is a factor), but a lot of it would be intuition. This intuition might be professional, aesthetic, or philosophical, and importantly is very difficult to articulate in a way that allows one to communicate it to another. Such intuition seems to be the source of much disagreement.

      Twan, your point is relevant to disagreements of type (1), while Robin is really talking about those of type (2).


    • Daniel Carrier

      There are many false things that people believe, but there are vastly more false things that people do not believe. As such, someone believing something is powerful evidence that it’s true.

      People are also more likely to believe something that is true than they are to claim to believe something is true. There is often a benefit to claiming something that is independent of its truth value, but the benefit of actually believing something is almost entirely about whether or not it’s true. As such, people actually believing something is evidence beyond them claiming to believe it.

    • joachimschipper

      Consider $FAMOUS_INVESTOR, known for making a fortune in the stock markets by repeatedly making good picks. You’ve just come across reliable, non-public, randomly-sampled information that $FAMOUS_INVESTOR has invested in $COMPANY. You didn’t get his/her reasons. Does this increase the probability that you will invest in $COMPANY?

  • Siddharth

    It seems that such a person (not willing to offer arguments or take bets) is saying that he believes X not because he thinks X is true but because of complex social/signalling reasons which have little to do with the truth.

  • Pingback: Assorted links()

  • Robert Koslover

    In summary: “Put your money where your mouth is.”

    Sounds good to me!

    • “Put your money where your mouth is” might be good advice when you’re dealing with near-mode prospects, but it isn’t even clear what it means when the issues are far-mode abstractions. Consider a related injunction, particularly germane in economics: if you’re so smart, why aren’t you rich? Part of the fallacy involved is that it takes different (not necessarily incompatible) qualities of mind to extract essential principles versus to combine them with background knowledge and apply them to particular cases in a chaotic world.

      Expertise in the sciences is properly evaluated by professional peers, not market success, which bears little relationship to scientific prowess.

      I have trouble concluding anything but that you conservatives and libertarians are looking to crown authority figures on which you can rely rather than to further scientific debate.

  • F.E. Guerra-Pujol (Enrique)

    I generally agree with Robin’s analysis and his critique of Tyler, but I would like to hear him respond more directly to Noah’s point about betting portfolios and arbitrage bets

  • Jef Allbright

    As a technical manager for many years, I observed in myself and my staff an inverse relationship between experience and certainty in hypothetical answers. And this tendency was stronger as the problem became more interesting (involved a broader context).

    A plot of troubleshooting dynamics would have shown the rookie’s large fluctuations between cluelessness and certainty as they tried one solution after another, whereas the experienced agent would have shown much less oscillation (and less certainty) as they more reliably converged on a solution (but quite possibly never The Answer).

    Stated another way, for an agent embedded in an environment of inherent uncertainty (and not like Archimedes, who somehow standing outside the system, seeks only a sufficient epistemological lever), wisdom consists in increasing knowledge of what something is _not_ rather than in presuming to know what something _is_.

    I believe this principle goes to the heart of this discussion–and is quite interesting–but I see no pragmatic approach to betting on such a belief.

    • I don’t see why one can’t bet in this situation. A bet has two sides, so if someone is betting on what the problem is, someone else is betting on what it is not.

      • Jef Allbright

        The difficulty is in the assumption of a closed context within which The Answer can be judged True versus working within an effectively open context within which there is no expectation of Truth but rather an appreciation of what works–for now.

        Lather, rinse, repeat, with consequences feeding back for increasing coherence over increasing context.

        My point is that as embedded agents, we _never_ have access to the (capital-T) Truth–never had it, never will–so what is most valuable (what we should be, and actually are betting on as an inescapable matter of our ongoing meaningful growth) is not the first-order consequences that are the object of prediction markets, but the discovery of higher-order principles by which we arrive at predictions that are not necessarily “True” but which work in the present but evolving context.

      • You don’t need to assume a closed context to make bets! If you observe rookies making claims that are soon proved wrong, those claims could be the basis of bets.

      • Jef Allbright

        This discussion has become amusingly, frustratingly self-referential.

  • Lord

    While it is more interesting for those of us not involved to have people bet, and more informative since it requires agreement on a measurable prediction, proponents who persuade others of their arguments and induce them to bet will still have influence over them. A bet is an exposure of one’s arguments as unpersuasive. A failure to persuade is also such an exposure, but most of us would prefer to be persuaded than be faced with bravado and posturing, and while unpersuaded we may still have no strong preference of our own. Betting does force one to think about probabilities, but too often extreme positions must be taken to induce a risk-averse counterparty to accept a wager, or the offer otherwise remains unclaimed. So I can see why those unaccustomed to wagering would not want to bother. I would be as satisfied with the simple statement of a prediction free from any wager, for after all, the wager is insignificant, and there is credibility to be gained or lost with a public prediction that is more significant.

    • A simple prediction that is forgotten isn’t of much use. To be socially useful, someone needs to go back and check on predictions, and report that in a way to reward or punish the predictor. Bets arrange for such checking and reward or punishment.

      • Lord

        I agree systemization can represent real improvement, such as stock analysts’ track records, but bets are only more memorable to the parties. The winner may remind others later of his opponent’s failures, but otherwise few will remember. But how many can forget a book predicting Dow 36,000, even without a bet?

  • Betting on beliefs provides bad incentives for intellectuals. Theoretically controversial work isn’t valued primarily for getting outcomes right but for the insights it provides. How much does the value or lack of value of Robin’s work on ems depend on whether he gets it right, which I think even he has to admit depends largely on his being lucky?

    The incentive provided by betting on outcomes would lead intellectuals to overemphasize correct results. In the social sciences, correct results are only valuable insofar as their presentation leads its audience to a greater understanding. Insightless prediction of results is a carnival trick, not science, and if the insight is there, the results are important only in relation to it.

    Robin has “psychoanalyzed” his dissenting colleague, who has replied in the same vein: Robin is looking for comeuppance. I suppose Cowen knows Hanson better than I (but then, he’s an economist and I’m a former psychologist:)); but I don’t think Robin’s a comeuppance type. I could be wildly wrong, but I think the dynamic is this: Robin doesn’t want to have to worry too much about the success of his book in persuading “elites.” He’d rather say to himself, “I’m right and they’re wrong, and that’s what counts.”

    He may learn whether he’s right in his next life, but being right won’t automatically justify his book. Intellectual life isn’t fundamentally about being right but about influencing the discussion beneficially. This is best achieved by presenting one’s best opinions, even if one doesn’t (usually) believe they’re true.

    More posters than I would have expected are subject to a moralized conception of discourse, according to which, I suppose, saying what you don’t “believe” is lying. Similar considerations led him four years ago to conclude “classic prose” involves lying. Ethics is more context relative.

    [A side thought: Robin thinks he’s an N on the N-S scale of the Myers-Briggs (the test really tells you what you think you are rather than what you are). I can’t imagine an N having this emphasis on an “accuracy” defined in near-mode terms.]

  • Radford Neal

    For most people, betting amounts like $100 is just a public relations stunt, not a meaningful financial transaction. In particular, the monetary value of the time involved is likely to be greater than the expected return. Betting substantial amounts like $10000 is also unlikely to be worthwhile, because the much larger transaction costs of drafting a clear contract and somehow ensuring that payment will be legally enforceable are again greater than the expected return. Betting really substantial amounts like $1000000 is discouraged by personal utility not being linear in the dollar value of wealth.

    • My point is that a bet is an argument. If making a bet isn’t worth the bother to make your argument, why would saying a bunch of words be worth the bother?

      • You really do think “arguing” is about demonstrating confidence.

      • Radford Neal

        But given that the bets don’t really make sense from a financial standpoint, why would making them be persuasive to an observer? In particular, why would they be more persuasive than the arguments you could express with a similar amount of effort?

      • Arguing doesn’t make sense either from a direct personal benefit sense; it won’t change our own beliefs enough to buy better decisions. When we argue we pay a personal cost in time and effort in order to persuade others. I’m suggesting we bet for the same reason, to persuade others. Bets should be persuasive because all else equal believing more makes you willing to bet more.
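      Under simple linear utility, that monotonicity is easy to exhibit (a minimal sketch; the numbers and function names are illustrative, not from the post):

```python
def expected_value(p, win, lose):
    """Expected value of a bet: gain `win` with probability p, pay `lose` otherwise."""
    return p * win - (1 - p) * lose

# Against a fixed $100 on the other side, any stake below p/(1-p) * 100
# has positive expected value, so the maximum stake a bettor will accept
# rises with belief p: 150 at p=0.6, 400 at p=0.8, 1900 at p=0.95.
for p in (0.6, 0.8, 0.95):
    print(p, round(p / (1 - p) * 100, 2))
```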

      • Radford Neal

        But once everyone realizes that the bets don’t make sense financially, why would they be persuasive? If we all KNOW that people are making them as “arguments”, then they have no force as arguments, since people would be making these “bets as arguments” regardless of the strength of their beliefs if they want to influence people.

      • Robin Hanson

        The fact that your wanting to persuade influences your bets does not change the fact that all else equal the stronger your belief the more betting you are willing to do.

  • The entire point of the post is about an analogy between arguments and bets. Yet 30 comments in and no one speaks to that analogy. It is just as if they were responding to the keyword topics, and ignoring the content of the post. This is pretty much what I’d expect if I said the word “abortion” in the post title. People would take that as license to express their standard opinions on that topic, and mostly ignore the content of the post.

    • “Yet 30 comments in and no one speaks to that analogy.”

      That can mean your argument was so good that no one could answer it, or so weak that no one could understand it. (Not only you but even the reader doesn’t necessarily know which.)

      There’s certainly some analogy between arguments and bets: offering either tends to demonstrate belief. But bets do this far, far better, which shows that there’s something wrong with your implication that demonstrating belief is the main function of arguing. In many contexts, in fact, numerous arguments demonstrate weak beliefs rather than strong beliefs, as in death row hearings before the California Supreme Court, where attorneys for clients who have no chance raise every conceivable argument. In law in general, raising too many arguments shows that none are strong and that you’ll probably lose.

      Arguments are mainly to convince or persuade, not to demonstrate belief. Strength of belief is a mere correlate of arguing and not a strong or invariant correlate.

      Demonstrating strength of belief in most contexts is really rather like using too many arguments. The fact that you, the author, are confident in your argument, or that Krugman isn’t so confident, isn’t strictly devoid of information (neither are the weak arguments that attorneys proliferate in death row cases), but as support for the correctness of your position it is usually so weak an indicator that it’s a distraction in serious intellectual conversations, from which it should generally be excluded for the same reasons that an attorney’s opinion of his client’s case, or hearsay testimony, is excluded from the courts.

      I didn’t think you intended your analogy to be taken so seriously; it seemed more a rhetorical prop, because as an argument it is an obvious fallacy, which is why I thought you never stated it crisply as an argument. You’re saying betting and arguments both show confidence, and then assuming that that’s the main reason for each. Showing confidence is an extremely tertiary reason for presenting arguments, but it’s the primary (relevant) reason for betting.

      • It isn’t that we argue to show our belief. We argue more to persuade. It is that bets show our belief, and *that* is a reasonable way to persuade. It is reasonable to change your belief on a topic on the basis of someone else’s willingness to pay costs to show their beliefs on the topic.

      “It is that bets show our belief, and *that* is a reasonable way to persuade.”

        That’s exactly what’s at issue: is it a reasonable way to persuade? Strong belief about controversial matters (Daniel Carrier, for example, neglects to conditionalize) is weak (not “powerful”) evidence.

        The mere fact that it may be rational to change my belief based on someone else’s willingness to pay doesn’t imply that people trying to convince each other based on proving the strength of their beliefs leads to reasoned discussion or reasonable outcomes. (I’m sorry for exceeding the posting limit, but I just couldn’t rest until I discovered your basis for what I consider a silly error that is widely embraced, apparently, among libertarians. Here, I think I have it: the fallacy of composition.)

        You must confront the whole question of what leads to reasonable societal discussions, not just what makes it rational for an individual to change their opinion. What could show stronger belief than a suicide bomber, who forfeits his life in a bet on Islamic immortality? There is (a modicum of) “rationality” in someone observing this behavior and converting, which is the idea. But this isn’t a reasonable way to have a discourse. Strong appeals to individual rationality do NOT necessarily lead to reasonable discourse; often they don’t, and this is particularly the case when the evidence is evidence because it merely demonstrates strong belief.

        Another relevant example. Why do norms of debate tend to proscribe ad hominem arguments, for instance the argument that the maker is a proven hypocrite? Ad hominem arguments have their place, but in the highest and most scholarly forms of discussion, they’re banned, treated as “fallacies.” Why? If proving belief can persuade “reasonably,” then proving that the proponent is (generally) a hypocrite should dissuade, and it can in fact be rational to change one’s belief on learning the proponent is a general hypocrite.

        Simply finding “effective” ways to persuade isn’t all you’ve cracked it up to be, and the norms regarding discourse don’t serve hypocrisy exclusively. They also serve to protect a far-mode disinterestedness from the intrusion of near-mode status-seeking, which lowers intellectual level of discussion even as it persuades.

    • IMASBA

      “The entire point of the post is about an analogy between arguments and bets. Yet 30 comments in and no one speaks to that analogy. It is just as if they were responding to the keyword topics, and ignoring the content of the post. This is pretty much what I’d expect if I said the word “abortion” in the post title.”

      Your analogy was that if we want people to argue their beliefs we should also want them to bet on their beliefs, and when the father of futarchy talks about betting he is of course advocating for prediction markets.

      Several people here addressed reasons why it’s just not the same thing, in theory, in practice (result), or both. They may not have stated explicitly that they were juxtaposing arguing against prediction markets, but you didn’t make your analogy explicit either; someone who can read between the lines has no problem discerning both your analogy and the implicit juxtapositions in the comments.

      • “Your analogy was that if we want people to argue their beliefs we should also want them to bet on their beliefs, and when the father of futarchy talks about betting he is of course advocating for prediction markets.”

        I don’t think Robin’s argument is *essentially* about prediction markets. At least that’s not what I react to most strongly in what he writes here.

        Robin makes it increasingly clear that this is about making ad hominem arguments intellectually respectable. If Hanson can prove he’s confident in what he says, he (according to Hanson) has stated an argument! And it’s fine for Hanson to counter an opponent’s arguments by pointing out that the opponent has failed to demonstrate self-confidence!

        Perhaps Hanson thinks intellectual discourse hasn’t progressed beyond the status contests of primeval times. I don’t really know what he thinks about this; the equation of bets and arguments is just plain weird, yet Robin’s obviously quite committed to it and thinks he’s explained it adequately.

  • Ashok Rao

    I think bets as an institution can increase innovation; as a requirement for a claim to be considered worthwhile, not so much. Here’s what I’ve written before, in an argument that all DSGE models be floated on a prediction market (the models, not the predicted output):

    “I have explained how we can use the market to tell us the right DSGE. But we can also use it to incentivize the creation of better models, microfounded or not. Think about how Nate Silver made election forecast history in 2008 and again in 2012. He relied a little bit on polls (statistical estimates or individual DSGEs) and a bit on Intrade (the prediction market). He decided he knew better than the market as a whole – and way, way better than some idiotic estimates from Rasmussen or Gallup. He profited. Of course, if he did bet on a prediction market, his winnings were limited by the fact that everyone else thought Obama would win as well.

    An election forecast is discrete (though it may be expressed in the form of distributions, etc.). The growth rates of productivity, wages, capital, and GDP are not. If I’m a Nate Silver, and I feel the current favorite of the prediction market can be improved, I go home and create a nice little DSGE. I’m also the first one to know about this DSGE, by definition. I form a subjective prior on the veracity of my product, which ipso facto should be higher than my odds that the market pick is right, upload it to the market, and place a nice, juicy bet at my (relatively) high odds.

    If it turns out my model does “beat the market”, I’ll earn a healthy sum until other models and the market-pick tweak it into the average (weak EMH-ish). Suddenly, model designers across the world are incentivized not just to create DSGEs that will most beautifully grace the letters of AER or impress friends, but those that will best predict out-of-sample trends. So not only do we have a better aggregation of existing models and parameters, which is itself hugely useful, but we have, without any cost, incentivized people to create a better model. Furthermore, you may not be a model designer but an econometrician. You now have the incentive to gather the best estimates of future parameters and tweak them off your DSGE of choice. Clearly such a market will also encourage cleaner and better statistics, as the goal is now accuracy, not publication.”

    So I think you get most of my claim right in your challenge, but I want to note the possibility to the contrary as well. However, this operates through a different channel altogether, rather than the “tax on bullshit”.

    It is not like academics today are inundated with predictions and have a hard time deciding which are bullshit. It’s not like every time some guy predicts 50% inflation tomorrow my life gets worse. No externalities. Why tax?

    So without expecting everyone to bet, you a) get a flood of idiotic predictions and b) get a few gems. But since we respect bets from models and not random luck, the few gems would eventually prove really successful even without betting and receive the attention they deserve. And the crappy ones would do no harm.

    Also, note that I agree bets are good for clarifying opinions and forcing introspection among friends. I do not agree, clearly, that an argument is worthless without an associated bet. There are non-argument-related incentives to making a bet, just as there are non-argument-related frictions to making a bet.
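    The market mechanism in Rao’s quoted proposal, paying models according to how well they predict out of sample, can be sketched with a proper scoring rule. This is only an illustration; the “models”, base rate, and sample size below are invented:

    ```python
    import math
    import random

    def log_score(p, outcome):
        # Proper scoring rule: reward the probability the forecaster
        # placed on the outcome that actually happened.
        return math.log(p if outcome else 1.0 - p)

    random.seed(0)
    # 1000 out-of-sample events whose true frequency is 70%.
    outcomes = [random.random() < 0.7 for _ in range(1000)]

    # A flat "market consensus" forecast of 50% versus a sharper
    # hypothetical model that has learned the true 70% rate.
    score_consensus = sum(log_score(0.5, o) for o in outcomes)
    score_sharper = sum(log_score(0.7, o) for o in outcomes)

    print(score_sharper > score_consensus)
    ```

    Because the log score is proper, the model closest to the true frequencies accumulates the highest score in expectation, so rewards flow to out-of-sample accuracy rather than to publication.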

    • IMASBA

      “If it turns out my model does “beat the market”, I’ll earn a healthy sum until other models and the market-pick tweak it into the average (weak EMH-ish). Suddenly, model designers across the world are incentivized not just to create DSGEs that will most beautifully grace the letters of AER or impress friends, but those that will best predict out-of-sample trends.

      But since we respect bets from models and not random luck, the few gems would eventually prove really successful even without betting and receive the attention they deserve.”

      I’m very skeptical about such predictions. After all the years the stock market has existed and traders have specialized in modeling it, a gorilla choosing stocks at random can still keep up with the market for years at a time. Multiple studies have shown that executive, investor, stock-trader, and even (pop-)cultural performance is largely based on coincidence. In complex sectors there aren’t really any true champions; our ape brains just like the idea of idolizing champions so much that we imbue those who are slightly above average and get a lucky streak with notions of “excellence”.

  • Jonas

    The point of the bets isn’t even about winning the money. It’s about the public determination and communication of the judgment. The money is there just to ensure that each side faithfully records its position and that the judgment is open and neutral. It also ensures public recognition of the fact that the loser accepted the judgment that he lost (presumably he paid up), or risks social opprobrium for weaseling out.

    The difficulty of debating with most people is that they keep shifting their positions once part of it becomes discredited. Or they argue side points that don’t reflect on the underlying position but may be a debating trick to agitate the opponent or discredit his personality (which won’t affect a bet). Even if they admit they’re wrong, they fuzz up their memory of it so they can deny having done so (or parse it so narrowly it’s meaningless) once a little time has passed. Betting is just one way to force better records, put a monetary incentive on doing so, and gain some public recognition as a result.

    If gambling makes you queasy, you can avoid the gambling aspect just by each side putting up a bond on their own word. If they’re wrong, they contribute to charity. Warren Buffett makes those gentlemen’s bets all the time (e.g., vs. a hedge fund guy on the sustainability of hedge fund returns). It’s to force the opponent to be specific and pinned down on exactly what his claim is, so it can be evaluated.

    • IMASBA

      “If gambling makes you queasy, you can avoid the gambling aspect just by each side putting up a bond on their own word. It’s to force the opponent to be specific and pinned down on exactly what his claim is, so it can be evaluated.”

      Yes, but this is very different from a prediction market where people can bet anonymously and some will attempt manipulation.

  • I’ve maintained throughout this discussion that intellectuals betting on their ideas is an undesirable practice, but now let me add that it is less bad to bet on a prediction market than to “just bet” (at 50-50 odds). Since I’ve seen no reason to think anyone will be willing to subsidize general prediction markets enough to make them sufficiently rewarding to work, it’s important to distinguish the effects of betting on prediction markets from those of “ordinary” bets (such as the one offered to Krugman, for example) in order to see just how bad betting would really be for intellectual life.

    The difference is that the prediction market would reward the person betting on their idea approximately according to the marginal utility of the idea. Put simply, an idea that everyone else thinks is wrong is (usually) more valuable, if correct, than an idea most everyone else already thinks is right, and the prediction market takes this into account.

    Conclusion: Pressure to accept bets, like the pressure Hanson puts on Cowen, would produce even more conformism (see “Pathologies of belief-opinion confusion”) than would prediction markets. People should be encouraged, not discouraged by obligations to bet, to develop “improbable” ideas.

    Bryan Caplan notes in a tweet I came across that “getting wrong people to bet is like pulling teeth.” To translate (Robin style), people who are probably wrong but whose ideas would be very valuable if correct aren’t dumb enough to accept 50-50 odds.
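    The “marginal utility” point above, that a correct contrarian should earn more than a correct conformist, is exactly what Hanson’s logarithmic market scoring rule (LMSR) delivers: shares in an outcome the market considers unlikely are cheap. A minimal sketch with purely illustrative quantities:

    ```python
    import math

    B = 100.0  # LMSR liquidity parameter (illustrative)

    def cost(q_yes, q_no):
        # Hanson's LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))
        return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

    def price_yes(q_yes, q_no):
        # Instantaneous price (market probability) of the YES outcome.
        e = math.exp(q_yes / B)
        return e / (e + math.exp(q_no / B))

    def buy_yes(q_yes, q_no, shares):
        # Cost of buying `shares` YES shares from the current market state.
        return cost(q_yes + shares, q_no) - cost(q_yes, q_no)

    # Contrarian market: consensus says NO, so YES trades around 2%.
    contrarian_cost = buy_yes(-200, 200, 10)
    # Consensus market: nearly everyone already says YES.
    consensus_cost = buy_yes(200, -200, 10)

    # Each YES share pays 1 if YES resolves true, so profit = 10 - cost.
    print(contrarian_cost < consensus_cost)
    ```

    If YES turns out true, the contrarian keeps nearly the full payout of 10 while the conformist earns only a small fraction of it, which is the sense in which the market prices in how surprising, and hence how marginally valuable, a correct belief is.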

  • jason

    I question the assumption that arguments are made to persuade others. Most people never convince anyone to change their beliefs and attitudes via an argument, and yet everyone keeps making them. Why?

    Here’s a relevant quote from Georg Christoph Lichtenberg: “I ceased in the year 1764 to believe that one can convince one’s opponents with arguments printed in books. It is not to do that, therefore, that I have taken up my pen, but merely so as to annoy them, and to bestow strength and courage on those on our own side, and to make it known to the others that they have not convinced us”

    Another reason is that everyone is taught to argue in school and many people express themselves via argument out of habit. “I like trees” is an idiotic thing to say, but “We need to act now to reduce CO2 emissions because CO2 causes global warming!” indicates you’ve read a book or two and believe in Science. Structuring your expressions of attitude and feeling as arguments is a cheap and easy way to signal that you’re educated and thoughtful, and a certain type of person can’t help themselves.

  • dmytryl

    Let’s apply this to engineering, for example. I need an electronic circuit for a task. You draw something and bet me that it works; someone else draws something and writes a competent-looking circuit analysis (which you did not provide, which suggests you don’t actually have any reason to think your circuit even works).

    The bet can be seen as buying persuasiveness with money, and as such it provides no guarantee that you actually hold the belief.

  • Pingback: Overcoming Bias : Why Do Bets Look Bad?

  • Pingback: The Reality of the Reifier: The Bowl | Original Seeing

  • Pingback: Abstraction 8: The Bowl | Original Seeing