On average, contrarian views are less accurate than standard views. Honest contrarians should admit this: neutral outsiders should assign most contrarian views a lower probability than standard views, though perhaps a probability high enough to warrant further investigation. Honest contrarians who expect reasonable outsiders to give their contrarian view more than the usual credence should point to strong outside indicators that correlate well enough with contrarians being right.
Bayesianism in stats?
Non-collapse in QM?
And yet, people with actual expertise in the relevant fields, computer scientists and AI researchers, tend to reject your arguments, or at least think they are exaggerated, while you get most of your support, especially academic support, from philosophers and other non-experts.
Outsiders only need to know two things: that the arguments presented are highly conjunctive, and that there is a wide range of opinion in all the fields involved.
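The force of the "highly conjunctive" point is easy to check with arithmetic: even if each step in a long chain of argument is individually quite probable, the conjunction need not be. A minimal sketch, with made-up numbers:

```python
# Toy illustration (invented numbers): a conclusion resting on ten
# independent premises, each judged 90% likely, ends up well under 50%.
probs = [0.9] * 10
joint = 1.0
for p in probs:
    joint *= p
print(round(joint, 3))  # 0.349
```

With even mild doubt about each link, the chain as a whole deserves much less credence than any single link.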
But for almost everyone, with regard to the arguments MIRI gives about risks from AI, this would require enormous amounts of time and attention and study of new-to-them fields like computation and AI. Isn't a healthy dose of epistemic learned helplessness appropriate for almost everyone trying to evaluate MIRI's arguments?
Allow me to belabor the obvious: this very old post merits re-reading—by its author.
Robin argues that contrarian views are prima facie less plausible than conventional views.
This obvious conundrum is seldom addressed by those (such as "contrarians") who are rationally required to address it. Robin, in this posting, addresses it without claiming to resolve it.
My solution is to distinguish between the near-mode concept of opinion and the far-mode concept of belief, recognizing that in controversy rationality is advanced when proponents advance their opinions (factoring out the evidence provided by the brute judgments of others) despite having contrary beliefs: knowing full well that they are probably wrong! — http://tinyurl.com/6kamrjs .
[I can't in good faith recommend this to, say, Yudkowsky. He can't as effectively ask people to contribute thousands of dollars to something he really doesn't believe—or at least really shouldn't believe. But note well: this means there's deception and irrationality built right into the core of the MIRI project.]
This is remarkably useful in terms of strategic approaches to get those who're doubtful about your beliefs on your side. Unfortunately it doesn't detail specific tactics which can be used in winning arguments. It's a great counterpoint to Majoritarianism though, and something I will definitely try to remember and integrate as I go forward.
I don't know if it's the beginning of a darker trend, but I've noticed a few contrarians using "inferential distance too large" as an excuse. It's similar to "few who study us disagree". It might actually be the same thing under a new name; I'm not sure. (Certainly it has the same motivational origin.)
I hate how Eliezer introduced the concept and now I'm seeing it used as an excuse not to explain crazy viewpoints. (Not giving examples, for obvious reasons.)
I know it's an old thread, but I'm bored. Barkley Rosser states that my straightforward, testable alternative to standard theory (that risk is unrelated to return, because utility is a function of relative rather than absolute wealth) is a sideshow. It generates a rather clear, testable, and important alternative to our current paradigm, which holds that if you measure the right metric of wealth, covariance with that metric is positively and linearly related to expected returns.
Rosser states that econophysics is a promising alternative. I disagree. Of course, one can point to physics-like things in many stochastic models. Heck, the original Black-Scholes formula was derived via the same differential equation used for heat diffusion in physics. But as a field, econophysics generates an embarrassment of riches: models that produce statistical properties (means, variances, jumps) 'like' those we observe in financial time series. But that is too easy.
Generating variance, jumps, and phase shifts is one thing; asserting that these are laws being obeyed in real time is quite different from fitting them to the peso-dollar exchange rate in the 1980s. I haven't seen any clean testable hypotheses generated from econophysics, only many papers showing how, with hindsight, various models can emulate the past. That's not promising: anyone with Excel and a time series can come up with a fun model that has a high R². If all you want to do is fit, atheoretical approaches are great for that. If you want to predict, you need a theory that restricts.
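The "anyone with Excel" point can be made concrete: a model with enough freedom to memorize the past fits it perfectly and still predicts nothing. A minimal sketch of my own (pure-noise data, invented setup, not from any cited paper):

```python
import random

random.seed(42)

# Pure noise: by construction, there is nothing here to predict.
train = [random.gauss(0, 1) for _ in range(50)]
test = [random.gauss(0, 1) for _ in range(50)]

def r_squared(actual, predicted):
    """Coefficient of determination: 1 minus residual over total variance."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# An "overfit" model that simply memorizes the training data.
in_sample_fit = r_squared(train, train)    # exactly 1.0: a perfect fit
out_of_sample_fit = r_squared(test, train)  # near zero or negative: no predictive power
print(in_sample_fit, out_of_sample_fit)
```

In-sample fit tells you nothing about out-of-sample prediction, which is why a theory that restricts in advance is worth more than any number of hindsight fits.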
The main intervention that affects mortality in a positive manner is dietary energy restriction. That's not about weight - but it is about energy intake.
It's freely available here:
27-page pdf version
Example given in the paper, moral reasoning by analogy:
'In seeking protection from Eastern's creditors in bankruptcy court, Lorenzo (chairman of financially troubled Eastern Airlines) is like a young man who killed his parents and then begged the judge for mercy because he was an orphan. During the last three years, Lorenzo has stripped Eastern of its most valuable assets and then pleaded poverty because the shrunken structure was losing money.'
To make the analogical relations redundant and apply Bayes, you need, as you say, to find 'the probability that the explicit narrative is true, multiplied by the probability that it can be validly applied with sufficient precision to the implicit problem domain.'
The trouble is with the former probability: you can't assign it, because there is no precisely definable, context-free moral statement you can make; there will always be counter-examples for given situations (see paper).
Put simply, you can never fully detach near-mode details from the far-mode narrative, so independent probabilities can't be assigned. That's why Bayes ultimately fails here.
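For concreteness, the multiplication at issue looks like this (all numbers invented; the point above is precisely that the first factor cannot, in fact, be assigned independently):

```python
# Toy decomposition of an analogical argument's evidential force.
p_narrative_true = 0.8  # probability the explicit narrative is true (invented)
p_applies = 0.5         # probability it validly transfers to the target domain (invented)
p_argument = p_narrative_true * p_applies
print(p_argument)
```

The formula is trivial once the two factors exist; the dispute is over whether they ever do.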
This thread is getting old but it is a topic of interest to me, so I wanted to post a few examples and links which I have run across. These are contrarian views which have been pretty decisively rejected:
- AIDS denialism: AIDS is not caused by HIV, but by environmental factors such as drug abuse
- quantized redshift: redshifts of distant galaxies tend to fall close to multiples of certain values, contrary to most cosmological theories
- cold fusion: loading deuterium into various metal compounds releases anomalous heat and radiation
- laetrile: cures cancer
On the other side are contrarian views which have become accepted. I think to count as contrarian, we need to have had a period of time in which the view was seen as disreputable or at least as unlikely. Sometimes evidence leads to a new model pretty quickly, as when cosmologists discovered in the 1990s that the universe's expansion was accelerating rather than decelerating. Contrarian successes should look more like paradigm shifts, internal revolutions. A few possible examples:
- punctuated equilibrium as a model for evolution
- some fats are good for you, rather than all fat being unhealthy
- behaviorism fails to explain most human behavior
- monetarist economic theories replaced Keynesian (oh, wait, I mean it the other way around)
A good source of physics-oriented contrarianism going back to the 1980s is John Cramer's Alternate View columns. Astonishingly, I found that link in a posting I made in 1996, and it is still good (and still being updated).
I'd still appreciate people adding others, if they run across this posting in the future.
Ain't gonna trade a bottle of St. Germain for a 300kb pdf, but from the summary and introduction I don't see where the author shows that arguments by analogy don't depend on the probability that the explicit narrative is true, multiplied by the probability that it can be validly applied with sufficient precision to the implicit problem domain.
Paper: 'Argument by Analogy' (Juthe, '05)
Demonstrates that there are perfectly valid analogical arguments that cannot be converted into inductive (Bayesian) or deductive form.
Hal, those are both disturbing, and persuasive, sources. The second one only addresses heart disease and not mortality more generally. I know that if one controls for both exercise and weight, weight doesn't seem to matter for mortality.
Eliezer, well that can and should be tested!
Really? It seems to me that the amount of self-satisfaction scales pretty linearly with the length of the explanation of their psychology.