# Not Science, Not Speculation

I often hear this critique of my em econ talks: “This isn’t hard science, so it is mere speculation, where anyone’s guess is just as good.”

I remember this point of view – it is the flattering story I was taught as a hard science student, that there are only two kinds of knowledge: simple informal intuition, and hard rigorous science:

Informal intuition can help you walk across a street, or manage a grocery list, but it is nearly hopeless on more abstract topics, far from immediate experience and feedback. Intuition there gives religion, mysticism, or worse. Hard science, in contrast, uses a solid scientific method, without which civilization would be impossible. On most subjects, there is little point in arguing if you can’t use hard science – the rest is just pointless speculation. Without science, we should each just use our own intuition.

The most common hard science method is deduction from well-established law, as in physics or chemistry. There are very well-established physical laws, passing millions of empirical tests without failure. Then there are well-known approximations, with solid derivations of their scope. Students of physical science spend years doing problem sets, wherein they practice drawing deductive conclusions from such laws or approximations.
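To make that first method concrete, here is a toy instance of deduction from a well-established law, in Python. The scenario and numbers are invented for illustration; the point is only that the conclusion follows mechanically from the law.

```python
import math

# Standard gravity, m/s^2. The "well-established law" here is
# constant-acceleration kinematics: d = (1/2) * g * t^2.
G = 9.81

def fall_time(height_m):
    """Time (s) for an object dropped from rest to fall height_m meters,
    ignoring air resistance -- a deduction from the law, not a new fact."""
    return math.sqrt(2 * height_m / G)
```

Solving problem sets in physics is largely practice at this kind of move: rearranging a trusted law and substituting values.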

Another standard hard science method is statistical inference. There are well-established likelihood models, well-established rules of thumb about which likelihood models work with which sorts of data, and mathematically proven ways to both draw inferences from data using likelihood models, and to check which models best match any given data. Students of statistics spend years doing problems sets wherein they practice drawing inferences from data.
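A minimal sketch of that second method, using an invented coin-flip data set: fit a likelihood model by maximum likelihood, then use a likelihood ratio to check which model best matches the data.

```python
import math

def bernoulli_loglik(data, p):
    """Log-likelihood of i.i.d. 0/1 data under success probability p."""
    return sum(math.log(p if x else 1 - p) for x in data)

data = [1, 1, 0, 1, 1, 1, 0, 1]    # toy coin-flip record
p_hat = sum(data) / len(data)      # maximum-likelihood estimate

# Compare the fitted model against a fixed fair-coin model: the
# likelihood-ratio statistic is a standard way to check which
# likelihood model best matches a given data set.
lr = 2 * (bernoulli_loglik(data, p_hat) - bernoulli_loglik(data, 0.5))
```

Statistics problem sets drill exactly these steps: choose a likelihood model, estimate its parameters, and compare candidate models against the data.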

Since hard science students can see that they are much better at doing problem sets than the lesser mortals around them, and since they know there is no other reliable route to truth, they conclude that only they know anything worth knowing.

Now, experienced practitioners of most particular science and engineering disciplines actually use a great many methods not reducible to either of these two. And many of these folks are well aware of this fact. But they are still taught to see the methods they are taught as the only reliable route to truth, and to see social sciences and humanities, which use other methods, as hopelessly delusional, wolves of intuition in sheep’s clothing of apparent expertise.

I implicitly believed this flattering story as a hard science student. But over time I learned that it is quite wrong. Humans and their civilizations have collected a great many methods that improve on simple unaided intuition, and today in many disciplines and fields of expertise the experienced and studied have far stronger capacities than the inexperienced and unstudied. And these useful methods are not remotely well summarized as formal statistical inference or deduction from well-established laws.

In economics, the discipline I know best, we often use deduction and statistical inference, and many of our models look at first glance like approximations derived from well-established fundamental results. But our well-established results have many empirical anomalies, and are often close to tautologies. We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful.

Our civilization gains much from our grand division of labor, where we specialize in learning different skills. But a cost is that it can take a lot of work to evaluate those who specialize in other fields. It just won’t do to presume that only those who use your methods know anything. Much better is to learn to become expert in another field in the same way others do; but this is usually way too expensive.

Of course, I don’t mean to claim that all specialists are actually valuable to the rest of us. There probably are many fraudulent fields, best abolished and forgotten, or at least greatly reformed. But there just isn’t a fast easy way to figure out which fields those are. You can’t usually identify a criminal just by their shifty eyes; you usually have to look at concrete evidence of crime. Similarly, you can’t convict a field of fraud based on your feeling that their methods seem shifty. You’ll have to look at the details.

• DanielHaggard

Surely there are some rules of thumb though that we can use in distinguishing between someone who is just spinning a good yarn and someone who is actually providing value.  What is the difference in value between say – a scientologist and an analytic philosopher?

Here are some candidate identifiers of decent non-scientific contributions:

1) At least respects the outputs of known science (they continually revise their yarns as science brings more to the table)

2) Make a contribution to concept formation – which can later be used in new scientific hypotheses.  Which leads to:

3) Make a contribution to hypothesis formation.

But I suspect these will not be satisfying because they still all derive their value from science.  A lot of what analytic philosophy does, for instance, will never make a contribution to scientific endeavour.

Still – I feel there has to be more we can say here… if your ‘intuition’ about non-scientific expertise is correct.

• http://juridicalcoherence.blogspot.com/ srdiamond

Shortcuts exist, if you don’t conceive of yourself as an epistemic atom. ( http://tinyurl.com/6kamrjs  ) You can rely on the opinions of those whom you have good reason to respect intellectually on other grounds, and who have deeply immersed themselves in the discipline in question. This is one basic function of philosophy: to identify thinkers whose positions are close to your own, so you can rely on their judgment more or less as if it were your own.

And any intellectual should understand at least one cognate discipline. Isn’t that still required for graduate degrees?

Perhaps more to the point, it might be rational to abstain on many intellectual questions where we have no inkling, or to participate only to the extent that we do.

• http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

Analytic philosophy is harmless. Scientologists do exercises that have psychological effects. They train people to show no emotional response to insults.
The core question is whether teaching people to suppress their emotions is healthy.

• VV

In economics, the discipline I know best, we often use deduction and statistical inference, and many of our models look at first glance like approximations derived from well-established fundamental results. But our well-established results have many empirical anomalies, and are often close to tautologies. We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful.

In September 2007, at the beginning of the subprime mortgage crisis, Robert Lucas, a Nobel laureate in economics (*) and professor at the (in)famous University of Chicago, wrote: “So I am skeptical about the argument that the subprime mortgage problem will contaminate the whole mortgage market, that housing construction will come to a halt, and that the economy will slip into a recession. Every step in this chain is questionable and none has been quantified. If we have learned anything from the past 20 years it is that there is a lot of stability built into the real economy.” http://gregmankiw.blogspot.it/2007/09/lucas-on-monetary-policy.html

Paul Krugman, another Nobel laureate in economics (*), pretty much calls him and the other Chicago economists crackpots, an accusation which they, of course, return.

That’s what you get when you forfeit the scientific method, especially in a field where huge financial or political interests exist.

(* It’s worth mentioning that there is actually no Nobel Prize in economics. Alfred Nobel never instituted it and the Nobel Foundation doesn’t award it. What economists get is the “Nobel Memorial Prize in Economic Sciences”, which was established and funded by Sweden’s central bank, the Sveriges Riksbank.)

• http://overcomingbias.com RobinHanson

You are really going to reject all of economics because you found two economists disagree in public?

• VV

It’s not just the two of them. There are entire ‘schools’ of macroeconomics which disagree on core issues. And they seem unable to even find a method to settle their disagreement.

For instance, on the subprime mortgage crisis, Lucas seems to have got it wrong. Was he unlucky or incompetent? We don’t know and we can’t know, because he didn’t use a method capable of deriving systematic predictions from the first principles of the theory.

• http://juridicalcoherence.blogspot.com/ srdiamond

The irony is that your position–rely only on empirically confirmed science–is also Lucas’s:

Every step in this chain is questionable and none has been quantified.

The economists who diagnosed the bubble correctly were the ones going out on the “unscientific” limb.

Also, consider the possibility that some economic phenomena are chaotic, unsuitable for prediction. (If prediction of everything economic was easy by means of economics, economists would all be rich men.) That doesn’t mean that all economic phenomena are unpredictable–or that economists can’t have intelligent if “unscientific” opinions about economic vulnerabilities.

Still, the economic meltdown wasn’t economics’ finest hour. I hope economists are scrutinizing the reasons for their failure.

• http://juridicalcoherence.blogspot.com/ srdiamond

“economists would all be rich men”

But I suppose there are female economists.

• a_____z

Oh, give me a break. The Nobel stuff is hardly worth mentioning, as the set of people who are capable of making informed, insightful statements about economics as a discipline and also don’t know about the Nobel’s history is almost certainly the empty set.

(PS: You’re excluded from that set for the former reason, not the latter)

• http://entitledtoanopinion.wordpress.com TGGP

I thought you were going to make the point that Bayesianism allows for much less rigorous forms of evidence that can still change our beliefs on the margin. Eliezer has made the point about “conservation of expected evidence” applied to witch trials; Karl Popper was making a similar point about intellectuals who evaded the possibility of falsification.

VV, I’m not aware of any academic economist critical of Krugman’s punditry who has said that his work on trade theory (for which he won a Nobel) is crack-pottery. It’s entirely possible that they exist, but aren’t prevalent. Macro is more cordial than it is commonly portrayed, although of course that does not definitively prove its worth.

• http://juridicalcoherence.blogspot.com/ srdiamond

I thought you were going to make the point that Bayesianism allows for much less rigorous forms of evidence that can still change our beliefs on the margin.

“Bayesians” in the philosophy of science indulge in the same variety of scientistic thinking that Robin critiques. They try to solve problems in epistemology with means essentially restricted to mathematics, so they avoid the issues. Bayesian probability allows states of belief subject to subtle influences to be represented mathematically, but it has no bearing on whether these subtle forms of inference exist.

• VV

On the contrary, Bayesian inference is much more formal than the scientific method: it requires you to explicitly formulate quantitative priors and state your hypotheses as quantitative conditional probabilities, and then to update your beliefs according to precise mathematical rules. If anything, the scientific method can be considered an approximation of Bayesian inference with an informal simplicity prior.
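The workflow the comment describes can be sketched in a few lines. This is a minimal, invented example of a single discrete Bayesian update, not anyone’s actual economic model: hypotheses, quantitative priors, explicit conditional probabilities for the observed datum, and a mechanical update rule.

```python
def bayes_update(priors, likelihoods):
    """One step of Bayes' rule over a discrete set of hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())  # total probability of the observed datum
    return {h: v / z for h, v in unnorm.items()}

# Two hypotheses about a coin, with explicit quantitative priors.
priors = {"fair": 0.5, "biased": 0.5}
# Each hypothesis states, as a conditional probability, how likely
# the observed datum (a single head) is under that hypothesis.
likelihoods = {"fair": 0.5, "biased": 0.8}
posterior = bayes_update(priors, likelihoods)
```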

Of course, Yudkowsky and his ilk have expanded the usage of the term ‘Bayesian’ so much that it became little more than a buzzword in their speech, pretty much like the term ‘objective’ for Objectivists.  (Take the quantum mechanics sequence, for instance, where Yudkowsky pompously claims that by the power of his ‘Bayesian’ techniques he managed to settle a long unresolved issue in mainstream physics and philosophy of science. In fact, he didn’t even attempt any actual Bayesian analysis).

• rrb

“In fact, he didn’t even attempt any actual Bayesian analysis”

Yeah he did. He argued that out of two theories that assign equal probabilities to the evidence, one of them has lower Kolmogorov complexity and thus has higher probability with a Solomonoff prior. Right here: http://lesswrong.com/lw/q8/many_worlds_one_best_guess/

(When he says “Occam’s razor” he links to posts about Solomonoff induction, and it’s clear if you read this, especially in the context of the rest of the sequence, that this is what he’s up to)

• dmytryl

This was really stupid, because in Solomonoff induction, which he also mentions, a program’s output is required to begin with the data being predicted (if it merely had to contain the data, a simple counter would win). So an MWI-esque tape is not even a valid code: you have to include the code that picks out one world before it can output predictions.

You can’t really handwave away the requirement to output what is observed, or a simple counter, which will eventually output any string, will be the best theory.

• VV

(sorry for breaking the reply)
VV, I’m not aware of any academic economist critical of Krugman’s punditry who has said that his work on trade theory (for which he won a Nobel) is crack-pottery. It’s entirely possible that they exist, but aren’t prevalent. Macro is more cordial than it is commonly portrayed, although of course that does not definitively prove its worth.

Sure, they lay down axioms of abstract economic models and prove theorems on them, and they generally agree that the theorems follow from the axioms. But if these models don’t allow us to make unambiguous testable predictions (even probabilistic ones) about the observable world, what are they good for?

• dEMOCRATIC_cENTRALIST

But if these models don’t allow us to make unambiguous testable predictions (even probabilistic ones) about the observable world, what are they good for?

What if they allow better predictions than you would get without them?

• VV

AFAIK, it’s impossible to tell, and that’s the point of testability.

• http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

Predictions are by definition testable.

Look at bioinformatics. Protein structure prediction is a hard problem. Bioinformaticians have a biennial contest where different bioinformaticians compete to find out who’s best at predicting protein structure.

Economists don’t seem to have prediction contests to compare different economic models against each other.

Economists spend too much time on mathematical proofs and not enough time on Monte Carlo simulations.

There is no good reason why economists do things differently than bioinformaticians, except the fact that the people who can actually make good computer models of economic reality get hired by the private sector and leave academia.

This leaves people who aren’t interested in real-world predictions in economics departments. I think there is good reason to assume that those people don’t know what they are doing.

• VV

There is no good reason why economists do things differently than bioinformaticians, except the fact that the people who can actually make good computer models of economic reality get hired by the private sector and leave academia.

Top-level bioinformaticians are also courted by pharmaceutical companies, but this doesn’t seem to prevent academia from having people capable of obtaining measurable results.

Companies which need accurate economic predictions (investment funds, banks) often hire physicists or other hard-science professionals, not economists.

• Robin Hanson

Bayesian stat is a fine theoretical account of what would make any method use data to track truth. But formal Bayesian statistics has only limited direct applicability as a method of practice that people can learn in the process of getting good at a discipline or field.

• Daublin

Just to stir some imagination, here are a few other kinds of reasoning that smart people do to increase their knowledge beyond intuition.

Sometimes they reason by analogy. They are trying to understand some system, and they compare it to some other system that they understand better. Good analogical reasoning involves a number of sub-steps.  For example, brainstorming for a good system to analogize to is rather important; in fact, it is one of the more valuable ways a smart person can really prove their worth. As another example, it is helpful to actively and systematically look for possible sources of disanalogy; if the search fails, you have higher confidence in the analogy.

People use spot testing. If you test random parts of a system, and they are all firm, then you gain the ability to do something like a statistical inference that the whole system mostly consists of robust components. This is particularly relevant in a social context, where you are trying to understand something built by other people, but you don’t know ahead of time what kind of work they did.
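The spot-testing idea can be sketched directly. This is a hypothetical helper, invented for illustration: sample a few parts of a system at random, test each, and treat a clean sample as evidence (not proof) that the whole is mostly sound.

```python
import random

def spot_check(items, is_sound, n=5, seed=0):
    """Test a random sample of a system's parts. If all pass, that is
    statistical evidence (not proof) that the whole is mostly sound."""
    rng = random.Random(seed)  # fixed seed so the check is reproducible
    sample = rng.sample(list(items), min(n, len(list(items))))
    return all(is_sound(x) for x in sample)
```

The same logic underlies auditing a few transactions in a ledger or reviewing a few modules of an unfamiliar codebase.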

People reason about isolation of effects. You can make more powerful inferences if you can eliminate components of the system under study as irrelevant. For example, if you are trying to determine the breaking characteristics of a particular car, you can probably ignore the chassis’ effect on air resistance. You can definitely ignore the radio.

People modify the problem. For example, if you aren’t sure the radio isn’t affecting the transmission, you might disconnect the radio. If you are trying to compare two computer systems, you might install equivalent versions of most of the software, to eliminate sources of disanalogy. If you are trying to understand a software anomaly, it helps if you can remove parts of program without affecting the anomaly.

People recode their data. That is, they do a first layer of analysis to modify a data set into something more manageable, and then do further analysis on the resulting data set. For example, one person might recode the income data into two broad categories of “below $50k income” and “above $50k income”. Another might recode data into a savers/spenders index, where 1.0 means a savaholic and 0.0 means they spend every penny the moment they get it.
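Both recodings in the paragraph above can be sketched in a few lines; the function names and the savers-index definition (fraction of income saved) are invented for illustration.

```python
def recode_income(incomes, threshold=50_000):
    """Recode raw incomes into the two broad categories from the example."""
    return ["below $50k" if x < threshold else "above $50k" for x in incomes]

def savers_index(saved, earned):
    """A crude savers/spenders index in [0, 1]: fraction of income saved,
    where 1.0 is a savaholic and 0.0 spends every penny."""
    return saved / earned if earned else 0.0
```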

Finally, people measure. “Measurement” includes a number of techniques for increasing knowledge without chaining together prior knowledge. There are too many kinds of measurement to even enumerate them without boring people, but to stir the imagination: written surveys, data from supermarket loyalty cards, interviews, manual logging procedures, and spyware.

• http://juridicalcoherence.blogspot.com/ srdiamond

it is the flattering story I was taught as a hard science student

Really? I haven’t found students of physics prone to this view. Rather, I’ve found it prevalent among practitioners of applied science, among engineers and physicians.

• http://www.facebook.com/profile.php?id=599840205 Christian Kleineidam

“We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful.”

How do you know that you know?

• http://overcomingbias.com RobinHanson

On the occasions where statistical tests can be applied, they usually indicate such knowledge. And my intuitions agree.

• http://juridicalcoherence.blogspot.com/ srdiamond

The danger seems to lie in not knowing the boundaries of expert knowledge. (Kahneman makes this point.) “A little knowledge is a dangerous thing” can, perhaps, apply to entire disciplines.

• Phylos

FWIW, this “hard science” account has been rejected by the philosophy of science community since the 1950s. For details, an account I like is Larry Laudan’s Progress and Its Problems. There are many others. Laudan’s book was published in the 1970s; this whole wheeze is old.
