Fixing Academia Via Prediction Markets

When I first got into prediction markets twenty-five years ago, I called them “idea futures”, and I focused on using them to reform how we deal with controversies in science and academia (see here, here, here, here). Lately I’ve focused on what I see as the much higher value application of advising decisions and reforming governance (see here, here, here, here). I’ve also talked a lot lately about what I see as the main social functions of academia (see here, here, here, here). Since prediction markets don’t much help to achieve these functions, I’m not optimistic about the demand for using prediction markets to reform academia.

But periodically people do consider using prediction markets to reform academia, as did Andrew Gelman a few months ago. And a few days ago Scott Alexander, who I once praised for his understanding of prediction markets, posted a utopian proposal for using prediction markets to reform academia. These discussions suggest that I revisit the issue of how one might use prediction markets to reform academia, if in fact enough people cared enough about gaining accurate academic beliefs. So let me start by summarizing and critiquing Alexander’s proposal.

Alexander proposes prediction markets where anyone can post any “theory” broadly conceived, like “grapes cure cancer.” (Key quotes below.) Winning payouts in such markets suffer a roughly 10% tax to fund experiments to test their theories, and in addition some such markets are subsidized by science patron orgs like the NSF. Bettors in each market vote on representatives who then negotiate to pick someone to pay to test the bet-on theory. This tester, who must not have a strong position on the subject, publishes a detailed test design, at which point bettors could leave the market and avoid the test tax. “Everyone in the field” must make a public prediction on the test. Then the test is done, winners paid, and a new market set up for a new test of the same question. Somewhere along the line private hedge funds would also pay for academic work in order to learn where they should bet.

That was the summary; here are some critiques. First, people willing to bet on theories are not a good source of revenue to pay for research. There aren’t many of them, and in general they should be subsidized, not taxed. You’d have to legally prohibit other markets from letting people bet on these claims without the tax, and even then you’d get few takers.

Second, Alexander says to subsidize markets the same way they’d be taxed, by adding money to the betting pot. But while this can work fine to cancel the penalty imposed by a tax, it does not offer an additional incentive to learn about the question. Any net subsidy could be taken by anyone who put money in the pot, regardless of their info efforts. As I’ve discussed often before, the right way to subsidize info efforts for a speculative market is to subsidize a market maker to have a low bid-ask spread.
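To make that last mechanism concrete: the subsidized market maker can be fully automated, for instance via a logarithmic market scoring rule (LMSR), where the patron’s subsidy is a liquidity parameter b that bounds the market maker’s worst-case loss. The sketch below is illustrative only; the class and its names are hypothetical, not from the post.

```python
import math

class LMSRMarketMaker:
    """Automated LMSR market maker for a binary claim.
    The subsidy parameter b bounds the patron's worst-case
    loss at b * ln(2) for a two-outcome market."""

    def __init__(self, b):
        self.b = b          # liquidity / subsidy parameter
        self.q_yes = 0.0    # "yes" shares sold so far
        self.q_no = 0.0     # "no" shares sold so far

    def _cost(self, q_yes, q_no):
        # LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) +
                                 math.exp(q_no / self.b))

    def price_yes(self):
        """Current implied probability of the claim."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy(self, outcome, shares):
        """Return the cost of buying `shares` of 'yes' or 'no';
        each share pays $1 if that outcome obtains."""
        old = self._cost(self.q_yes, self.q_no)
        if outcome == "yes":
            self.q_yes += shares
        else:
            self.q_no += shares
        return self._cost(self.q_yes, self.q_no) - old

mm = LMSRMarketMaker(b=100.0)
assert abs(mm.price_yes() - 0.5) < 1e-9   # opens at even odds
cost = mm.buy("yes", 50.0)                # an informed trader buys "yes"
assert mm.price_yes() > 0.5               # the price moves toward the info
```

The point of the design is that such a market maker has no discretion and always quotes a price, so it systematically loses money to informed traders; that expected loss, capped at b·ln(2), is exactly the patron’s subsidy for info effort.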

Third, Alexander’s plan to have bettors vote to agree on a question tester seems quite unworkable to me. It would be expensive, rarely satisfy both sides, and seems easy to game by buying up bets just before the vote. More important, most interesting theories just don’t have very direct ways to test them, and most tests are of whole bundles of theories, not just one theory. Fourth, for most claim tests there is no obvious definition of “everyone in the field,” nor is it obvious that everyone should have an opinion on those tests. Forcing a large group to all express a public opinion seems a huge cost with unclear benefits.

OK, now let me review my proposal, the result of twenty-five years of thinking about this. The market maker subsidy is a very general and robust mechanism by which research patrons can pay for accurate info on specified questions, at least when answers to those questions will eventually be known. It allows patrons to vary subsidies by questions, answers, time, and conditions.

Of course this approach does require that such markets be legal, and it doesn’t do well at the main academic function of credentialing some folks as having the impressive academic-style mental features with which others like to associate. So only the customers of academia who mainly want accurate info would want to pay for this. And alas such customers seem rare today.

For research patrons using this market-maker subsidy mechanism, their main issues are about which questions to subsidize how much when. One issue is topic. For example, how much does particle physics matter relative to anthropology? This mostly seems to be a matter of patron taste, though if the issue were what topics should be researched to best promote economic growth, decision markets might be used to set priorities.

The biggest issue, I think, is abstraction vs. concreteness. At one extreme one can ask very specific questions like what will be the result of this very specific experiment or future empirical measurement. At the other extreme, one can ask very abstract questions like “do grapes cure cancer” or “is the universe infinite”.

Very specific questions offer bettors the most protection against corruption in the judging process. Bettors need worry less about how a very specific question will be interpreted. However, subsidies of specific questions also target specific researchers pretty directly for funding. For example, subsidizing bets on the results of a very specific experiment mainly subsidizes the people doing that experiment. Also, since the interest of research patrons in very specific questions mainly results from their interest in more general questions, patrons should prefer to target the more general questions of direct interest to them.

Fortunately, compared to other areas where one might apply prediction markets, academia offers especially high hopes for using abstract questions. This is because academia tends to house society’s most abstract conversations. That is, academia specializes in talking about abstract topics in ways that let answers be consistent and comparable across wide scopes of time, space, and discipline. This offers hope that one could often simply bet on the long term academic consensus on a question.

That is, one can plausibly just directly express a claim in direct and clear abstract language, and then bet on what the consensus will be on that claim in a century or two, if in fact there is any strong consensus on that claim then. Today we have a strong academic consensus on many claims that were hotly debated centuries ago. And we have good reasons to believe that this process of intellectual progress will continue long into the future.

Of course future consensus is hardly guaranteed. There are many past debates that we’d still find hard to judge today. But for research patrons interested in creating accurate info, the lack of a future consensus would usually be a good sign that info efforts in that area were less valuable than in other areas. So by subsidizing markets that bet on future consensus conditional on such a consensus existing, patrons could more directly target their funding at topics where info will actually be found.
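Settlement in such a conditional market is simple to state: if the future judges find no strong consensus, all bets are called off and stakes are refunded; otherwise bets pay off as usual. A minimal sketch of this payout rule (the function name and conventions are hypothetical, for illustration only):

```python
def settle_conditional_bet(stake, price, outcome):
    """Settle a `stake`-dollar bet on 'yes' made at probability `price`
    in a market on future consensus, conditional on a consensus existing.
    `outcome` is 'yes', 'no', or None when the judges find no strong
    consensus, in which case the bet is called off and the stake refunded."""
    if outcome is None:           # no consensus: bet called off
        return stake              # full refund of the stake
    shares = stake / price        # $1-payout shares bought at `price`
    return shares if outcome == "yes" else 0.0

# A $10 bet at 25% that a claim becomes the consensus view:
assert settle_conditional_bet(10.0, 0.25, None) == 10.0   # called off
assert settle_conditional_bet(10.0, 0.25, "yes") == 40.0  # pays 4x
assert settle_conditional_bet(10.0, 0.25, "no") == 0.0    # lost
```

Because the no-consensus branch returns everyone’s money, speculators face no added risk from topics where no answer is ever found, so prices concentrate info effort on topics where one will be.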

Large subsidies for market-makers on abstract questions would indirectly result in large subsidies on related specific questions. This is because some bettors would specialize in maintaining coherence relationships between the prices on abstract and specific questions. And this would create incentives for many specific efforts to collect info relevant to answering the many specific questions related to the fewer big abstract questions.
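The coherence relationships mentioned here are just the laws of probability applied across markets: for instance, by the law of total probability, an abstract claim’s price must equal its prices conditional on a specific test, averaged by the test market’s own price. A hypothetical sketch of the mispricing such a specialist bettor would trade against:

```python
def coherence_gap(p_abstract, p_specific,
                  p_abstract_given_specific, p_abstract_given_not):
    """By the law of total probability, coherent prices must satisfy
        P(A) = P(A|S) * P(S) + P(A|~S) * (1 - P(S)),
    where A is the abstract claim and S a specific test outcome.
    Returns the mispricing an arbitrageur could trade against."""
    implied = (p_abstract_given_specific * p_specific +
               p_abstract_given_not * (1.0 - p_specific))
    return p_abstract - implied

# Abstract claim priced at 0.30, but the specific-test markets imply 0.25,
# so a coherence trader sells the abstract claim and buys the specific side:
gap = coherence_gap(0.30, 0.40, 0.40, 0.15)
assert abs(gap - 0.05) < 1e-9
```

A trader closing such gaps transmits the abstract subsidy down to the specific markets, which is the indirect funding channel described above.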

Yes, we’d probably end up with some politics and corruption on who qualifies to judge later consensus on any given question – good judges should know the field of the question as well as a bit of history to help them understand what the question meant when it was created. But there’d probably be less politics and lobbying than if research patrons chose very specific questions to subsidize. And that would still probably be less politics than with today’s grant-based research funding.

Of course the real problem, the harder problem, is how to add mechanisms like this to academia in order to please the customers who want accuracy, while not detracting from or interfering too much with the other mechanisms that give the other customers of academia what they want. For example, should we subsidize high relevant prestige participants in the prediction markets, or tax those with low prestige?

Those promised quotes:

The Angel of Evidence … [is a] centralized nationwide prediction market. Anyone with a theory can list it there. … Suppose you become convinced that eating grapes cures cancer. So you submit a listing to the Angel: “Eating grapes cures cancer”. Probably most people doubt this proposition and the odds are around zero. So you do some exploratory research. You conduct a small poorly controlled study of a dozen cancer patients. … Gradually a couple of people … make bets … maybe saying there’s only a 10% chance that you’re right, but it’s enough. The skeptics, and there are many, gladly bet against them, hoping to part gullible fools from their money. …

These research prediction markets are slightly negative-sum. Maybe the loser loses $10, but the winner only gets $9. When enough people have bet on the market, the value of this “missing money” becomes considerable. This is the money that funds a confirmatory experiment. … Suppose the experiment returns positive results. … Either everyone is entirely convinced that grapes cure cancer. … Or the controversy continues, … [and] a bet can be placed on the prediction market for the success or failure of a replication. …

Who is going to bet for or against the proposition that the Higgs boson has a mass greater than 140 GeV? Only a couple of physicists even understand the question, and physicists as a group don’t command large sums of spare capital. So what happens is that scientific bodies – the Raikothin equivalent of our National Science Foundation – subsidize the prediction markets. … They donate $1 million to the Angel of Evidence to make the prediction market more lucrative. Suddenly the market is positive-sum; maybe you lose $10 if you’re wrong, but gain $11 if you’re right. The lure of free money is very attractive. … “Science hedge funds” would try to figure out what mass the Higgs boson is likely to have, knowing they will win big if they’re right. Although the National Science Fund type organization funds the experiments indirectly, it is the money of these investors that directly goes to CERN to buy boson-weighing machinery. …

How are the actual experiments conducted? … Having any strong opinion on the issue at hand is immediate disqualification for a consultant scientist to perform a confirmatory experiment. The consultant scientist is selected by the investors in the prediction market. Corporate governance type laws are used to select a representative from both sides. … Then they will meet together and agree on a consultant. If they cannot agree, sometimes they will each hire their own consultant scientist and perform two independent experiments, with the caveat that a result only counts if the two experiments return the same verdict. …

The consultant … decides upon an experimental draft and publishes it in a journal. … It is the exact published paper that will appear in the journal when the experiment is over, except that all numbers in the results section have been replaced by a question mark. … First, investors get one final chance to sell their bets or bow out of the experiment without losses. … This decreases the amount of money available for the experiment. That comes out of the consultant scientist’s salary, giving her an incentive to make as few people bow out as possible. … Second, everyone in the field is asked to give a statement (and make a token bet) on the results. This is the most important part. … When the draft is published, if you think there are flaws in the protocol, you speak then or forever hold your peace. (more)

  • sflicht

In the linked paper the author actually gives a fairly detailed account of specific market mechanisms for this application of prediction markets to scientific research.

    • That isn’t prediction markets.

      • sflicht

        Didn’t mean to suggest that that paper’s proposal is a prediction market. Rather, its suggested mechanism could perhaps be adapted for the purpose of subsidizing scientific prediction markets. As I see it, the issue is finding an incentive compatible way to get scientists to formulate precise testable contracts (with appropriate priors) for the Angel of Evidence, and to get accurate info-seeking parties (current customers of academia) to subsidize corresponding market makers. That’s the domain where something like the proposal in the linked paper seems relevant.

        Note that the database proposed there also can contain highly theoretical (“abstract” rather than “concrete”) hypotheses. Who would subsidize the corresponding markets? Perhaps a forward-looking company anticipates how more “down to earth” theoretical results — ones more subject to experimental evidence — will bear upon the concrete hypotheses which inform its research agenda on a 10-year horizon. If an overarching abstract program has implications for these related but testable theories, the company might sponsor the market for the abstract program at some low level. Alternately, a layer of speculators or financial intermediaries (hedge funds, banks) might fill these funding gaps, anticipating that they will obtain alpha for their portfolios on concrete markets from the additional information in abstract markets. (Analogy: abstract science could, in the world with an Angel of Evidence, be a socially “useless” good in the same way that microsecond-latency fiber optic data links between financial centers are, according to Michael Lewis, such a good in our existing world.)

      • sflicht

        I should mention, though, that the equilibrium I envision in which abstract research is also funded could well be hard to reach or even non-existent.

        The only evidence I have is the extreme illiquidity of Scicast contracts on questions like P vs NP.

  • BJ Terry

    Creating prediction markets for questions like, “What will the consensus view be of M-theory in 100 years” seems like it’s really using the wrong tool for the job. The benefits of prediction markets in aggregating private information to reveal collective probabilities function more poorly when you introduce greater uncertainties beyond the question under consideration, and the best way to introduce uncertainties is to increase the time horizon dramatically. When you are betting on what the consensus will be in 100 years, you are betting not only on the question under consideration, but whether the betting institution will even exist, whether the losing bettors will exist and be liquid (surely one won’t have their actual capital tied up in a prediction market for 100 years), whether the political environment will be such that the questions will be answered fairly or interpreted properly, whether the question will even be conceived to be coherent in the future, etc.

    In a way, Scott’s proposal is actually more realistic than this one. I agree that the market would almost certainly have to be subsidized, but science moves forward one study at a time. Chipping away at all of the tiny issues at a faster pace will leave us with more scientific understanding in the future in a direct and quantifiable way by directly increasing the quality of science now. Having very long-term focused prediction markets doesn’t increase the rate science progresses, because it relies on two systems of science, our current traditional practice of science (although subsidized by the market makers based on the short-term version of your markets) which will eventually settle the question, and the prediction market which will be adjudicated based upon the results of traditional science.

It’s not totally clear to me why it’s in the market makers’ best interests to pursue open scientific research with their subsidies. They shouldn’t care whether the prices are highly accurate in the short term. If I were a subsidized market maker, I would either do whatever minimum level of effort is required to collect the subsidy (by keeping bid-ask spreads however low) or pursue research efforts totally in secret so that I would have an informational advantage over the participants in my market, much as a traditional investment bank would. If there were multiple market makers, we would each be incentivized to do our research in private, creating tremendous duplication of effort, meaning that it may require even more resources to operate science in such a manner. Eventually we would be in a state where a question appears to be settled with high probability, but laymen can’t figure out why. (Well, one of the market makers built a $1 billion particle accelerator, solved the question and bid up the price so high that it’s not worth it to anyone else to do further research. But what ELSE did they learn? That’s now valuable private information they can use on future markets.)

    Obviously these issues are solvable, but I think the final result starts looking more and more like Scott Alexander’s proposal (explicit mechanisms for transparency and the avoidance of conflicts of interest) and less like this one.

    • People who buy real estate or stocks usually don’t expect to hold them over the full life of the asset; they sell to someone else before then. We have many academic institutions, such as journals and universities, that have lasted over a century. So long term assets are clearly feasible in academia.

      My proposal does not neglect the many tiny issues; it just funds them indirectly, rather than directly, based on speculators’ estimation of which tiny issues will eventually be informative about the big issues. If you instead have the patrons directly fund the little issues, you will be relying on their judgment about which little issues will end up being how useful.

      Regarding your claim that my proposal “relies on two systems of science”, every proposal for how to fund science must rely on some system of funding above and beyond the system of actually doing science.

      The market makers I propose are automated; they have no discretion. So they systematically lose money to traders with information about the future price.

      Regarding incentives for secrecy, speculative markets create incentives for secrecy in the short run, but incentives for openness in the longer run; one secretly collects info, then trades on it, then reveals it in order to move the price to a point where you can reverse your previous trades for profit. Most other funding systems also create incentives for secrecy in the short run.

Regarding incentives for secrecy, speculative markets create incentives for secrecy in the short run, but incentives for openness in the longer run; one secretly collects info, then trades on it, then reveals it (emphasis added) in order to move the price to a point where you can reverse your previous trades for profit.

        1. In learning about a topic, the “short-run” is often plenty long. (Seemingly, much more so than evaluating a capital investment, where the landscape is constantly changing.)

        2. The “it” that is revealed isn’t all the information acquired: only the favorable information.

        Do you really want to create an intelligentsia society of promoters rather than thinkers?

      • Your investment position doesn’t have to be public and it can be changed quickly. So it need not tie you to anything.

      • 1. If it isn’t public, this creates another problem: the public doesn’t know the vested interests of those promoting ideas.

        2. A position that isn’t publicly known is still a commitment. While you hold the interests, it is not in your interest to be publicly open-minded. You are, for the moment, tied to that investment.

        Never will you be in a position where your interest is to be intellectually honest.

      • brendan_r

        Stephen, you’re making the same argument that people make against short-sellers: “they’re talking their book”. Yes, but the investor shorted the stock in the first place because he honestly believed it overvalued!

        Investing by taking arbitrary positions and then promoting them does not work, with one exception: when one side of the debate is silenced, i.e. unshortable stocks lacking put options.

Promoters flock to unshortables because no one has the incentive to contradict them. By far the worst promotion and inefficiency occur in unshortables.

Academia without prediction markets is like a stock market without the ability to short, because the incentives to bullishly push an idea usually exceed the incentives to skepticism.

        People w/ no asset market experience dramatically overestimate the power of dishonest promotion.

        “Never will you be in a position where your interest is to be intellectually honest.”

        An investor, right before he places his bet, is far more intellectually honest than the best academic.

      • An investor, right before he places his bet, is far more intellectually honest than the best academic.

        You can only so conclude by refusing to understand the term “intellectual honesty.”

An investor is least intellectually honest right before he bets! This is just definition. You may try to argue that said investor is more veridical right before he bets; but you can say he is intellectually honest only by forgetting what’s meant, for intellectual honesty is a property of persons conversing, not keeping secrets.

What we have here is two different models of social coordination, one involving maximizing disinterested conversation and the other maximizing personal incentives to be correct. I can’t say with certainty that your model and Robin’s are less effective, but I can say you (misunderstanding the very concept of intellectual honesty) haven’t understood the alternative.

        (I can also say that, at least to me, a world based on privately veridical decision-making, not open discussion, is most unappealing.)

      • brendan_r

        Gotya. OK, I really do wanna convince you here, so bear with me.

        Treasury Inflation Protected Securities (TIPS) were created in 1997, and gave us our first direct look at market forecasts for inflation.

        Does the existence of TIPS reduce disinterested conversation of inflation and its causes? If so, how?

        My view is this: Inflation talk was wildly interested both before and after TIPS were created, and most interested talk is driven not by money-greed, but by ideology, politics, academic theory-pushing, etc., etc.

        The existence of TIPS has, at the margin, constrained stupid ideas, shifted burdens of proof, and helped people update their models.

For example, some of the best macro discussion takes the form of, “The Fed did X and TIPS did Y; how can we explain that?”

        I don’t think prediction markets reduce disinterested convo; they improve its quality.

      • BJ Terry

The situation with TIPS is quite unusual as compared with most predictions. I would compare a scientific prediction market to the prediction of earnings events in public corporations (Justification: inflation is uniquely important to the practice of finance itself; inflation is composed in part of pure expectations as a feedback loop, whereas scientific theories and earnings events are both grounded in current reality; there is only one inflation but there are many scientific theories and many earnings events that are completely idiosyncratic).

        When hedge funds fly planes over the parking lots of the largest Wal-Mart stores to see how the holiday season is going, we never, ever learn what they saw. That information is, for all purposes, completely inaccessible to the public at large. Even long after this earnings season is over, we never find out what those pilots saw. Lots of cars? Fewer cars in total but ten times as many luxury vehicles? Fewer cars but with more people carting out widescreen televisions? In investing this information doesn’t matter, but in science these details do matter, because more information begets more information.

      • Most things that academics learn in the course of doing their research are never published.

      • IMASBA

But those things are often applied by the academic in the rest of their career and transferred to PhD or master’s students, plus some of those things are just rediscoveries. But yeah, there’s probably a lot that could be done to share useful knowledge and experience between different institutions.

      • And hedge funds that learn things in private use them later on in the future, and transfer them to future employees.

      • IMASBA

Hedge fund employees cannot transfer knowledge that’s not directly inside their head; they get sued if they do that.

A Chinese scientist can email an American scientist with a question and get an answer free of charge; this is really unparalleled in business. Of course business needs to hoard some info to compete, but it would be nice if info was disclosed, say after 10 years or when the business goes bankrupt without a buyer.

      • sflicht

        Of course, in some sense, hedge funds transfer knowledge every time they trade based upon it, since their trades move the market price. So even if the public doesn’t “learn” the information gathered in private, the public can perhaps infer such information.

      • brendan_r

        And how would the existence of prediction markets dissuade sharing of bits of info at the margin?

Let’s make it concrete. Educational interventions rarely achieve their goals. I think that’s partly because of asymmetric incentives for sharing good and bad info. Lots of bad info is file-drawered, or isn’t disseminated widely because no one but the taxpayer has an incentive to criticize, and thus doomed-to-fail interventions proceed.

        Wouldn’t prediction markets bring more bits of info to the surface by making info sharing incentives more symmetric?

        “but in science these details do matter, because more information begets more information.”

        Does that seem true in Education policy making?

      • BJ Terry

        The mere existence of prediction markets doesn’t dissuade sharing information at the margin, of course. I’m not suggesting that prediction markets for science shouldn’t exist (obviously they should; prediction markets should be legal, sometimes subsidized, and we should have as many as the market can bear). I’m suggesting that structuring your prediction markets as extremely long-term bets and using that as the core tool to fund science, which is not a marginal scenario, could lead to less openness of scientific results as compared with structuring prediction markets as short-term bets on the specific outcomes of individual studies. And I agree that prediction markets in some form would bring more information to the surface with regard to educational interventions.

      • And how would the existence of prediction markets dissuade sharing of bits of info at the margin?

I’m not (mainly) talking about sharing bits of information; rather, the sharing of understandings, theories, and counter-arguments. My claim is that widespread use of prediction markets (futarchy) would create an intelligentsia of promoters and hoarders.

        The best way for us to understand an intellectual landscape dominated by prediction markets isn’t to look at speculative markets generally (which only in the most indirect way address intellectual questions–for example, what is the main cause of inflation, not what will the inflation rate be in the next decade) but to look at how the orientation fostered by prediction markets affects the discourse of its advocates!

        We have in Robin a great example of an intellectual shaped by the prediction-market mentality: ask how the quality of Robin’s advocacy of prediction markets themselves has been affected by his devotion to them. Robin’s uniqueness makes him interesting and socially useful, but do we want to see an intelligentsia composed of Robins?

        What distinctive characteristics of Robin’s discourse can be plausibly attributed to prediction markets (or, perhaps, to his already having had a mentality congruent with them)? I venture the following:

1. Robin never discusses the drawbacks of prediction markets. (Thus, followers like you end up denying there are any drawbacks–recent post.) While intellectuals are generally biased for their own views, intellectual status motivates most people to cultivate a degree of open-mindedness. The prediction-market mentality seems to allow those who embrace it to justify being open promoters of their ideas.

        2. Robin rarely (if ever) sets out the theoretical bases for his conclusions. I’d argue that doing so wouldn’t serve promotional purposes; it would instead equip others to critique his conclusions.

        3. Robin completely avoids issues related to his conclusions when they don’t relate to promotion. Thus, he simply deigns not to comment on macro-economics.

You (with Moldbug) say leftism is advocacy of a scholarly dictatorship. I disagree: this is often the goal of Monomaniacalists, not (Utopianist) socialists/communists. [An extreme version is the Monomaniacalist dream of takeover by a “friendly AI.”] But futarchy (or the widespread use of intellectual prediction markets) would turn scholars into traders.

Robin sees that folks don’t really care about being correct, and he seeks to supply financial incentives. I think he fails to recognize that intellectual progress depends on the prevalence of the right intellectual norms much more than on the right individual incentives to be right. Personally, I think Robin cares far too much about being right. It isn’t by virtue of being right that most contributions are made to intellectual life, which is a process of conversation.

      • brendan_r

        Info includes theories.

If Robin is a biased promoter, then he is motivated by standard academic motives (pushing his idea), not by owning futures on some idea.

        For info on how promotion works in financial markets, I look to financial markets. You look to a single academic who isn’t even participating in a financial market.

        “Personally, I think Robin cares far too much about being right. It isn’t by virtue of being right that most contributions are made to intellectual life, which is a process of conversation.”

        Science is conversation constrained by certain ideas that tend to lead to greater accuracy. Here’s an institution known to produce greater relative accuracy than any other. And you oppose it.

        Which makes sense since you believe that “being right” ain’t so important.

      • I have never made any claims to expertise.

        But understand, the reliance on personal expertise is part of what’s at issue. The reasoning behind prediction markets puts substantially greater emphasis on the ad hominem aspect of intellectual influence: more so than do I or mainstream intellectual history.

  • JW Ogden

    A robust betting market on AGW would be very good. It is very surprising that one does not exist.

    • IMASBA

What would be the point? It would take many years to await the result, and the deniers would not agree on the decision conditions anyway, so they wouldn’t invest (Robin is right that betting forces people to put their money where their mouth is). In the end people would just be taking home their own investment, probably at a loss compared to dividends on stocks over the same time period, or even the interest on a savings account.

      • IMASBA

        I suppose you could try trapping deniers by tying political consequences to the price level of the market, hoping that they’ll bring in enough capital to attract speculators, but not so much that speculators can’t overcome them anymore. If you were to then donate some of the proceeds to charity there’d be a point, but it’s a dangerous gamble and useless policy-wise (if the current scientific consensus cannot sway policy enough, then why would policymakers ever agree to the terms of a market that would have to be decided by those same scientists?).

      • There’s no reason to take a loss relative to stocks; stocks can be the asset you bet.

      • IMASBA

        “There’s no reason to take a loss relative to stocks; Stocks can be the asset you bet.”

        Oh yeah, I guess if the organizers allow it they could. Thanks.

    • Quixote

      It does exist, just not in a form available to retail consumers. It can be seen in the cost of reinsurance and in the pricing of catastrophe risk bonds on the capital markets.

      • That form is also not one that allows us to clearly interpret the factual predictions of its prices.

  • brendan_r

    Anti-prediction-market talk, when not completely invalid, holds prediction markets to a standard of perfection that implicitly assumes the status quo is ideal. Reminds me of Moldbug’s observation:

    “The essential idea of leftism is that the world should be governed by scholars.”

    • IMASBA

      Perhaps futarchy is the economic-libertarian version of communism (not necessarily right wing, since I wouldn’t just assume traditional conservatives, American or European, would embrace prediction markets). Robin has been going on about how people resist prediction markets because they would rather signal personality traits than put their money where their mouth is, or because they have something to hide. Maybe he is right, and it means futarchy is just as incompatible with human nature as communism?

      • There’s a big difference between “it won’t work if tried” and “it would work if tried but people don’t want to try”.

      • IMASBA

        What if people kept resisting it (not necessarily consciously) once it had been implemented, because it “doesn’t feel right”, just as they did with communism? I’m really curious about that.

      • Communism doesn’t “feel right”? When humankind has spent 90% of its time under a form of communism?

        In a sense that’s right; in Hansonian terms, farmer values come to prevail under scarcity. (Not even the Bolsheviks, let alone Marx, envisioned communism succeeding in an isolated, backward country.)

        It seems almost transparent that, if humanity ever overcomes scarcity, we’ll be communist. (A cynical view of em theorizing might be that it’s designed to demonstrate that scarcity will always be with us, that Malthus is eternal.)

      • IMASBA

        “Overcoming scarcity” would require breaking the laws of physics.

        The not-so-distant future can be much more “socialist” than the present (basic income, education for all, that sort of thing) but it won’t be communist, though of course new psychologies become a possibility in the far future and then communism could take over.

      • Seems to me overcoming scarcity requires only drastic limitation of population growth in the future. (This is what Robin says is impossible, even without ems.)

        Overcoming scarcity doesn’t mean unlimited abundance, only so much that egoistic striving can’t comfortably be subordinated to the social good. The Paleolithic foragers who had communism didn’t overcome physics. It seems foraging lost to farming only when harsh scarcity began to prevail in the Mesolithic.

      • IMASBA

        Foragers were not communists; they were just small communes with few possessions that could be hoarded and no viable institutions to make hoarding possible. They were for sure more socialist-inclined than your average farmer, but they never had to make the choice between capitalism and communism or something else.

        Stopping population growth would lead to higher GDP per capita, but that would just make us more spoiled, so we’d forget how much scarcity has declined compared to the past: our ambitions grow with our level of wealth. Even if we had on average 5 Earth-like planets per person, we’d still be inclined to think (probably with good incentive reasons) that lazy people should have only 1 or 2 such planets and more productive members of society should perhaps get 10 (so yes, I do think we will eventually want to reduce inequality, but we’ll never want to eliminate it fully).

      • “Foragers were not communists, they were just small communes with few possessions that could be hoarded and no viable institutions to make hoarding possible.”

        They were militantly egalitarian, ganging up on any would-be strong man.

        Your claim suggests that there could be no basis for domination because of the absence of hoarding. But domination was indeed possible, as it had been the rule for our ape ancestors: it was militantly resisted.

  • Christian Kleineidam

    Maybe trying to fix everything about science at once isn’t the way to go.

    We do have initiatives such as the reproducibility initiative. Setting up a prediction market on the outcomes of those replication attempts should be relatively straightforward and not require many resources.

    • There are many cheap things that can be done. The problem is that even fewer resources are devoted to doing such things.
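A replication-outcome market like the one suggested above could be run with Hanson’s own logarithmic market scoring rule (LMSR), which lets a subsidizer bound their loss while always quoting a price. A minimal sketch, assuming a two-outcome market (“replication succeeds” vs. “fails”) and an arbitrary liquidity parameter `b`:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    total = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / total

def trade_cost(quantities, i, shares, b=100.0):
    """Cost to buy `shares` of outcome i (negative means sale proceeds)."""
    after = list(quantities)
    after[i] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Two-outcome market: q[0] = "succeeds" shares sold, q[1] = "fails".
q = [0.0, 0.0]
print(lmsr_price(q, 0))      # starts at 0.5
cost = trade_cost(q, 0, 50)  # cost of buying 50 "succeeds" shares
q[0] += 50
print(lmsr_price(q, 0))      # price rises above 0.5
```

The subsidizer’s worst-case loss is bounded by `b * ln(2)` for two outcomes, which is the sense in which a science patron org could cheaply fund such markets; the parameter value and two-outcome framing here are illustrative assumptions, not part of any existing initiative.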

  • Bad Horse

    If you take bets on what the consensus answer to some question will be at a specific time in the future, you encourage researchers to keep their data and results secret, and to publish false results.

    • Robin Hanson

      Academics already have many incentives to do those things.