7 Comments

Betting doesn't work in philosophy - you can't observe who turns out to be right. That's why disagreement is such a huge problem for philosophers, and some methodological habits just make it worse, such as our emphasis on intuitions (and the use of thought experiments as intuition pumps) and on interpretation.

I don't think we're a priori more biased than other experts, or that philosophical truth is hard to find; the problem is that falsehood is harder to find - it's difficult to realize you're mistaken when you have nothing to lose by being mistaken.

(That being said, for everything else, "bet when you disagree" might have been the most important, and simplest, lesson I learned from the rationalist community.)

Hard to implement for philosophical controversies. We'd need someone to decisively determine the truth at some point.

Related: seeing your current set of strategies for meeting basic needs in life (health, wealth, relationships etc.) as implicit bets. This can reveal how selection effects have clustered you together with those making very similar bets and possibly insulating you from the full range of evidence about better and worse strategies. How would an actuary bet on your cohort (the set of people following similar decision principles)?

It seems that the scientific method succeeds because it is pretty well aligned with human nature: egoism. When a scientist proves that orthodox knowledge is incorrect, the scientist is personally favored and human knowledge increases... a perfect match.

It seems that many scientists/philosophers have a hard time being as critical about their own stances, opinions, and work as about others' (because that seems not to be aligned with human nature).

However, in science and philosophy people always say it is a great virtue to admit that one was wrong when new evidence comes to light and to change one's belief accordingly. The higher the investment in the former opinion, the more virtuous the reversal counts by scientific/philosophical doctrine ("look at X, who spread his theory Y for 20 years all over the media, and was still willing to change his opinion when new evidence came to light, evidence he himself even brought to light; what a great philosopher/scientist").

Could we simply conclude that the feeling of philosophical virtue after changing one's opinion is, more often than not, outweighed by other factors (e.g., financial loss, losing face with people who don't hold that virtue in high regard)?

If people actually made bets often, then the ease of bets might explain differences in fields. But in fact bets are rare in most fields.

Maybe biases are harder to root out in disciplines that don't allow for adjudicating bets. For example, Lewis's modal realism or Huemer's intuitionism aren't going to be "found" to be true. Philosophical theories don't predict events, not even conditionally.

Betting markets are much more useful for straightening people out on issues like global warming: if you think it's a hoax, bet that average temperatures will not rise in the next 10 years. Many will gladly give you 5-1, and if you're right, you can clean up! There is no counterpart for philosophy, unless it's about future "standard views." That would not be useless, but there tend to be cycles in what's standard. Huemer's intuitionism is itself a kind of revival of a century-old view, and even dualism has revivals from time to time. I would compare top philosophical theories to common openings in chess. These too go in cycles, and sometimes old techniques get surprising updates, because the community isn't practicing how to play them out. Likewise, top philosophical theories depend on the current crop of top philosophers and what they're "weak" to.

I remember an interview where Lawrence Krauss said, 'Don't talk to me about anomalies, talk to me about evidence.' The more foundational the belief, the more inexact its implications for particular facts, and thus the easier it is to rationalize a failed prediction away as an anomaly. No one will change their mind about the latest Fed move or tax change based on the next couple of years' worth of GDP data, or rethink international aid based on the continuing stagnation of failed states.

There were some (Malcolm Muggeridge, Bertrand Russell) who visited the Soviet Union and became disillusioned with socialism, but they had to actually go there to see not just the poverty but, more unexpectedly, the repression. Heck, the Soviet Union's GDP numbers seemed pretty good, and CIA-validated (though fraudulent), for 50 years, right up to 1989.

Big, bad ideas aren't rejected, they're orphaned, as when economists simply stopped doing research on the Keynesian macro models of the 1970s, on Leontief's input-output models, or on working out the implications of Friedman's quantity theory of money.
