20 Comments

Bee, when I write more, I most certainly will tell you. :)

Hi Robin,

yes, I would be interested to hear what I am missing. If you write something, would you please send me a pointer?

You are probably right that I do not understand very much about institution design, but then I am not the one who wrote a proposal for how to save science. I was just pointing out where I think your proposal does not work. I am not claiming I have a better one.

Best,

B.

Bee, I fear our conversation is not progressing. From my end it looks like you just do not understand enough about basic issues in institution design. From your end perhaps you do not think I know enough about your kind of basic research. I'd love to sit down with you for a few hours to talk in person, and I may try another post soon to summarize what I think you are missing, but for now I guess it is time to quit.

Hi Robin,

okay, I was being very imprecise. Note, I did not write 'non-scientists'. Indeed, I think any judgement on the value of scientific theories should be made - as far as possible - by experts who know the subject, but it is essential that these experts are financially and ideologically independent (which is currently definitely not the case). What I do not want is for researchers in general to be rewarded by gambling on the betting market. That is a very inefficient use of people whose task it is to make new discoveries.

I have no objection to the general idea of having independently funded (however that is achieved) scientists who perform the task of rewarding 'being right' over 'being popular, fashionable, and eloquent'. I just do not think that the system you propose will use resources (financial and human) optimally. Instead, it will attract attention to the market itself and to betting on already existing approaches, at the expense of dedication to research. It will result in an environment that favors scientists who are good at predicting, and makes them attractive candidates to hire - irrespective of their originality. I do not see how independent thinking and originality relate to being good at gambling.

Besides this, the dynamics of marketplaces have their own weaknesses, as you know far better than I. Most importantly, fundamental research requires stable, long-term financial support. Theories are not worked out within one year. If financial support is shaky (as it currently is), the system will discourage researchers from attacking the important and difficult questions; instead they will focus on small, doable projects that add tiny pieces of knowledge to the far branches of the tree of specialization. This is a problem we already have today, and your proposal does not solve it.

Best regards and happy Easter,

B.

I wrote:

[To] be paid for your theory [it] ... just has to adjust the probabilities people would reasonably assign to some thick bet-on claims. These claims can be theoretical or empirical, broad or specific, ... Bee replied:

Yes. But. Scientists. Should. Do. Research. ... Assigning a probability to approaches. ... make sure you have [non-scientists where] ... this should be their main task. Scientists should not be forced to do this besides their research, which is their actual task.

Bee, imagine this claim:

Scientists should just come up with ideas. Scientists should not have to write them coherently into papers, or present them at conferences; publicists should do that. Scientists should not have to work out mathematical derivations; mathematicians should do that. Scientists should not have to build instruments; mechanics should do that. Scientists should not have to do statistical analyses of their data; statisticians should do that.

Any one scientist is free to try to form a team where other team members take on these other tasks, leaving him or her to just "come up with ideas." But reality just has a whole bunch of tasks to be done; it does not label some of them as "research" versus not. We usually decide which bundles of tasks should be done by the same person based on how bundling changes the efficiency, and fun, of achieving them.

Hi Robin,

In any system people must spend some time evaluating work, and presenting work for evaluation. There is no escaping that; sorry.

Yes, I am aware of that. The question is who 'people' are. This is where I object. I don't want scientists who are supposed to work out their ideas to spend their time betting.

Scientists will do what it takes to get funding. If betting markets control funding, scientists will bet.

Yes. But. Scientists. Should. Do. Research. Your proposal strongly suggests they think in a very specific pattern, namely assigning probabilities to approaches. Gee, you are an economist; haven't you heard about all the nerdy guys who don't want to be told how they are supposed to work or think? Let the scientists do science. I think we agree that it is necessary to ensure that funding decisions are based on the opinions of independent people (financially and ideologically) who are able and willing to assign such probabilities. What I say is: just make sure you have such a group of people - I think this should be their main task. Scientists should not be forced to do this besides their research, which is their actual task and was the reason they chose their job.

Hi TGGP,

You say what is important is "good research", but that's awfully hard to define. Once you come up with some way of determining whether research is good, it seems you could bet on it. I like it when things are quantified, so when you say Hanson's proposal is "not going to solve the problems within our community", could you give a percentage of the problems (weighted by what seems significant to you) it could ameliorate, or on the contrary make worse?

Yes. As I have tried to point out, one of the most annoying things in today's academic research is that researchers don't have enough time to do research. A very big part of our job is consumed by administrative duties: writing proposals, refereeing proposals, advertising, advertising, advertising, networking, hand-shaking, smiling, talking to the right people in the right places. Etc. Political games. It can take up to 110% of your time (I am a lucky exception). I don't want to replace this with just another system that suffers from a similar problem.

Yes, good research is awfully hard to define. But for research to be good, you have to do research to begin with. Betting does not lead to novelty. You do not discover anything new by tracking the status of what already exists. You don't make progress by asking scientists to quantify opinions as probabilities.

One of the most pressing problems I see within our community is the lack of communication among sub-fields. Another is that a big part of theoretical physics is getting more and more detached from the basic questions we are supposed to answer, and drifting towards formality. Generally, I think decisions to fund researchers are too conservative and do not sufficiently support ideas that are new but might very well fail. Another big problem is that funding is too short-term. It takes time to work out a theory, and jumping from one temporary contract to the next doesn't encourage people to start working on such ideas.

I am sorry, but I am not able to 'weight' these problems in an objective way. For me, the short-term contracts are the most severe problem. This is probably due to the obvious reason that I am on such a contract and wish I wasn't. My guess is that overly conservative funding, which supports established topics rather than new ones, is the most severe problem.

I don't think Robin's proposal would make these matters worse, but I don't see how it helps either. Not without additional structure, that is.

Best,

B.

Dear Bee,

I am trying to think of other examples of reputation systems, set up like those in the financial world, that I could show you as market approaches in scientific research which might address some of the needs stated in your Democracy series.

Many years (15) ago, my friends on a mailing list set up a digital reputation system, "HeX", where various entities like people and ideas could be registered, and people could buy and sell shares in those registered entities. The idea was that the market price of a share was supposed to represent the value of the reputation. However, it didn't work well, because there was nothing of underlying value for which the shares were traded, and there was some serious price manipulation since there was nothing to tie the prices to reality (thanks to Hal and Perry for filling me in and reminding me how HeX worked).

HeX was a toy experiment (using your words), but there was an even larger experiment: AMIX, the American Information Exchange, ideas from which I think could still be useful for providing information to scientific researchers. When it was set up in the late '80s, it was a little too early for its time. Well, now we have Ebay....

The Wikipedia article about it needs some work, but this one at salon.com looks more readable.

The present system rewards those who pay attention to marketplace tactics ... instead of just doing ... good research. This is the problem. I don't want to replace this system with another system that rewards scientists for smartly playing on some market. I want a system that rewards people for doing good research, full stop. I don't want scientists to be required to spend their time translating opinions into amount of money they would bet.

In any system people must spend some time evaluating work, and presenting work for evaluation. There is no escaping that; sorry.

And I am sure most just wouldn't do it - this is why I think the proposal does not work.

Scientists will do what it takes to get funding. If betting markets control funding, scientists will bet.

That is, what do you do if there is no question that will be decided any time soon? What do you do with work on theories that are not (yet) testable, and have no distinct bet to offer?

You can be paid for your theory if it informs any thick bet out there. That doesn't have to be a bet on a particular test of your theory, nor does it have to be a bet on anything that will be settled soon. Your theory just has to adjust the probabilities people would reasonably assign to some thick bet-on claims. These claims can be theoretical or empirical, broad or specific, conditional or unconditional. The influence can be direct or indirect, via any other claims.
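The mechanism Robin describes - a theory "informing" a thick bet without being directly tested - can be illustrated with elementary probability. This is my own sketch with made-up numbers, not Robin's formalism: if a theory T changes what a claim C implies, then shifting one's credence in T shifts the reasonable betting probability of C via the law of total probability.

```python
# Hypothetical illustration: a theory T informs a bet on claim C
# indirectly, because P(C|T) differs from P(C|not T). All numbers
# are invented for the example.

def market_probability(p_theory, p_claim_given_t, p_claim_given_not_t):
    """P(C) = P(C|T)*P(T) + P(C|not T)*(1 - P(T))."""
    return (p_claim_given_t * p_theory
            + p_claim_given_not_t * (1 - p_theory))

# Before a new argument for T: credence in T is 0.1.
before = market_probability(0.1, 0.9, 0.3)
# After the argument raises credence in T to 0.4, the reasonable
# bet price on C moves, even though T itself was never bet on.
after = market_probability(0.4, 0.9, 0.3)
print(before, after)
```

The point of the sketch is only that the influence can be indirect, routed through any claim whose probability the theory changes.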

How do you decide who the promising researchers are if these people don't have anything to bet on?

As I said above, for-profit research labs would "choose which people or projects to invest in, based on whatever criteria they think most likely to win prizes."

I don't see how these examples relate to optimizing scientific progress. You might bet on forecasting which horse runs the fastest, but does this betting help you to discover faster horses? ... Your whole proposal is based on the assumption that the base market is clearly defined and already present.

If you claim to have an insight, but having that insight does not adjust reasonable estimates on any topic of current interest, or on any topic you get any patron interested in, it is not clear you in fact have an insight.

To be clear, I am not proposing a system to replace all existing institutions; I am proposing that on the margin we substitute this one for those.

Bee, at longbets.org there are wagers on whether string theory will win any Nobel Prizes by a certain date. I'm not an expert on string theory, but I believe it has often predicted the existence of sub-atomic particles that were later discovered. Some of the time periods of the bets at longbets.org are indefinite. You say what is important is "good research", but that's awfully hard to define. Once you come up with some way of determining whether research is good, it seems you could bet on it. I like it when things are quantified, so when you say Hanson's proposal is "not going to solve the problems within our community", could you give a percentage of the problems (weighted by what seems significant to you) it could ameliorate, or on the contrary make worse?

I'd also like to note that you acknowledge that some scientists are willing to bet, but you think most are not. It seems to me that bets still serve as an important source of information even if the majority of people with an opinion do not participate. You can have a sampling problem if the people who are willing to bet are biased in a certain direction, but the financial incentives (which we assume they respond to; otherwise they wouldn't be betting) counteract that. For example, the people in charge of funding might be very unsure about what to fund. They don't bet, but they can look at betting markets to better inform themselves. On the other hand, this assumes they behave like Bayesian optimizers, and the fact that they do not would appear to be the reason Hanson came up with this idea.

Hi Amara,

thanks. Yes, scientists should 'think deeply and carefully about all aspects of the problem' to arrive at a judgement that most accurately captures the value of a theory. I totally agree on that. I just don't think the introduction of a betting market is the right way to reach a higher 'level of rigor', nor do I think all academics want to spend time assigning probabilities to their propositions' truth. But most importantly, even if that were the case, it would not solve the problems I have pointed out. This is not to say I think it is useless for scientists to use such an examination.

Best,

B.

Obviously the link didn't come out. Instead, type 'WMAP frequency Bayesian' in Google to see what I was trying to post.

oops, forgot this part:

(*) For example from the cosmology WMAP people:

http://www.google.com/searc...

Dear Bee,

To assess the correctness of a scientific idea requires a high level of rigor, I think we agree. There is a type of 'betting', or let's say statistical inference, that is proceeding successfully in the sciences, which you might not have noticed yet. (*)

Bayesian Probability Theory, which is the best formal theory we have on the relationship between theory and evidence, gives the probability of a proposition's truth, conditional on some context. In order to assign a prior in this formalism, one must think deeply and carefully about all aspects of the problem. The result is a carefully defined question to answer.
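The formalism Amara refers to is just Bayes' rule. A minimal sketch, with invented numbers purely for illustration: a carefully considered prior on a proposition H is updated by evidence E through the likelihoods of E under H and not-H.

```python
# Minimal Bayes' rule update: P(H|E) from a prior P(H) and the
# likelihoods P(E|H) and P(E|not H). All numbers are made up.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) via Bayes' rule."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

# Suppose a theory H starts at a prior of 0.2, and an observed
# result E is three times as likely if H is true than if it is false.
posterior = bayes_update(prior=0.2, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.429
```

Defining what counts as E, and what the two likelihoods are, is exactly the "think deeply and carefully about all aspects of the problem" step; the arithmetic itself is trivial once the question is well posed.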

In the betting world, carefully defined questions are necessary to determine winners. The value/money gives all involved (more) incentive to define the question very rigorously and consider possible answers. I don't think this is a distraction at all. Instead, it is a highly motivating and very efficient process for evaluating the truth of a scientific proposal.

Hi Matthew:

sadly enough, I agree with your observations about 'the current environment', which I have also addressed in my posts (see e.g. here).

Robin's proposals aim to provide a correction to this tendency, because there is a tangible cost to being wrong (and a reward for being right) when making a bet based on certainty.

Yes, I understand this aim. I just don't think it is going to work, for the reasons I have given. Please note, there IS a tangible cost to being wrong and a reward for being right in science. Scientists do not advertise their theories despite knowing they are wrong, but because they believe they are right. The problem is finding out what is right and what is wrong.

The present system rewards those who pay attention to the marketplace tactics you have mentioned, instead of just doing what they are supposed to do: good research. This is the problem. I don't want to replace this system with another system that rewards scientists for smartly playing on some market. I want a system that rewards people for doing good research, full stop. I don't want scientists to be required to spend their time translating opinions into the amounts of money they would bet. And I am sure most just wouldn't do it - this is why I think the proposal does not work.

But more importantly, I have pointed out that it doesn't even address the most crucial points (at least in theoretical physics). That is, what do you do if there is no question that will be decided any time soon? What do you do with work on theories that are not (yet) testable and have no distinct bet to offer? How do you decide who the promising researchers are if these people don't have anything to bet on? Would you judge a researcher's quality by his or her betting abilities? How does that correlate with the potential to make a major contribution to the research field in the future?

Hi Robin:

Thanks, this is indeed interesting, and it makes sense to me. But I don't see how these examples relate to optimizing scientific progress. You might bet on forecasting which horse runs the fastest, but does this betting help you to discover faster horses? You might bet on forecasting the election, but how does that forecast lead to novelty? Your whole proposal is based on the assumption that the base market is clearly defined and already present; it just has to be formulated as some questions to be evaluated. This just doesn't address the most pressing problem: how to ensure enduring progress. And again, I just don't think academics would play your game. Best,

B.

Bee, we have a fair bit of data comparing the accuracy of market prices to other ways to organize people to make forecasts. In these contexts, people usually feel they are being completely honest. Nevertheless, field comparisons show markets to have been at least as accurate. For example:

Such markets (at least the hard cash versions) have so far done well in every known head-to-head field comparison with other social institutions that forecast. Orange juice futures improve on National Weather Service forecasts, horse race markets beat horse race experts, Academy Award markets beat columnist forecasts, gas demand markets beat gas demand experts, stock markets beat the official NASA panel at fingering the guilty company in the Challenger accident, election markets beat national opinion polls, and corporate sales markets beat official corporate forecasts.

The favored explanation is that even though people feel honest, they are not in fact as honest as they could be. People give more accurate forecasts when the incentives for accuracy are higher.

Bee,

I think there is a key factor that a financial market brings to the table, that is not present in the current academic environment.

In the current environment, the incentives favor scientists who exaggerate and oversell the certainty of their proposals. Science is conducted almost like a political debate or a legal trial, where the advocate of a particular position is trying to "win" converts, because winning the scientific debate means getting grant money, graduate students, and status. One is not rewarded for (openly) hedging bets, as that would probably mean fewer speaking engagements, fewer graduate students, and a perception of being a "risky" bet.

Robin's proposals aim to provide a correction to this tendency, because there is a tangible cost to being wrong (and a reward for being right) when making a bet based on certainty.
