27 Comments

The danger seems to lie in not knowing the boundaries of expert knowledge. (Kahneman makes this point.) "A little knowledge is a dangerous thing" can, perhaps, apply to entire disciplines.

"economists would all be rich men"

But I suppose there are female economists.

The irony is that your position--rely only on empirically confirmed science--is also Lucas's:

Every step in this chain is questionable and none has been quantified.

The economists who diagnosed the bubble correctly were the ones going out on the "unscientific" limb.

Also, consider the possibility that some economic phenomena are chaotic, unsuitable for prediction. (If prediction of everything economic was easy by means of economics, economists would all be rich men.) That doesn't mean that all economic phenomena are unpredictable--or that economists can't have intelligent if "unscientific" opinions about economic vulnerabilities.

Still, the economic meltdown wasn't economics' finest hour. I hope economists are scrutinizing the reasons for their failure.

It's not just the two of them. There are entire 'schools' of macroeconomics which disagree on core issues. And they seem unable even to find a method to settle their disagreement.

For instance, on the subprime mortgage crisis, Lucas seems to have got it wrong. Was he unlucky or incompetent? We don't know and we can't know, because he didn't use a method capable of deriving systematic predictions from the first principles of the theory.

Oh, give me a break. The Nobel stuff is hardly worth mentioning, as the set of people who are capable of making informed, insightful statements about economics as a discipline and also don't know about the Nobel's history is almost certainly the empty set.

(PS: You're excluded from that set for the former reason, not the latter)

 @facebook-599840205:disqus

There's no good reason why economists do things differently than bioinformaticians, except that the people who can actually build good computer models of economic reality get hired by the private sector and leave academia.

Top-level bioinformaticians also get courted by pharmaceutical companies, but this doesn't seem to prevent academia from having people capable of obtaining measurable results.

Companies which need accurate economic predictions (investment funds, banks) often hire physicists or other hard-science professionals, not economists.

Predictions are by definition testable.

Look at bioinformatics. Protein structure prediction is a hard problem. Bioinformaticians hold a biennial contest (CASP) where different groups compete to find out who's best at predicting protein structure.

Economists don't seem to have prediction contests to compare different economic models against each other.

Economists spend too much time on mathematical proofs and not enough time on Monte Carlo simulations.

There's no good reason why economists do things differently than bioinformaticians, except that the people who can actually build good computer models of economic reality get hired by the private sector and leave academia.

This leaves people who aren't interested in real-world predictions in economics departments. I think there's good reason to assume that those people don't know what they are doing.
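A prediction contest of the kind described is easy to sketch: score competing models out-of-sample on the same held-out data. Everything below is an invented toy illustration — the random-walk "economic" series and both models are hypothetical stand-ins, not real economic models:

```python
import random

random.seed(0)

# Synthetic "economic" series: a noisy random walk (purely illustrative).
series = [0.0]
for _ in range(200):
    series.append(series[-1] + random.gauss(0, 1))

def model_persistence(history):
    """Toy model 1: predict tomorrow = today."""
    return history[-1]

def model_mean_revert(history):
    """Toy model 2: predict a small pull back toward the running mean."""
    mean = sum(history) / len(history)
    return history[-1] + 0.2 * (mean - history[-1])

def contest(models, series, warmup=50):
    """Score each model by mean squared error on out-of-sample points only."""
    scores = {name: 0.0 for name in models}
    n = 0
    for t in range(warmup, len(series) - 1):
        history, actual = series[:t + 1], series[t + 1]
        for name, model in models.items():
            scores[name] += (model(history) - actual) ** 2
        n += 1
    return {name: total / n for name, total in scores.items()}

results = contest(
    {"persistence": model_persistence, "mean_revert": model_mean_revert},
    series,
)
for name, mse in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: MSE = {mse:.3f}")
```

The point is the harness, not the models: once rival models emit comparable numbers, the disagreement between 'schools' becomes an empirical contest rather than a debate.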

Analytic philosophy is harmless. Scientologists do exercises that have psychological effects. They train people to show no emotional response to insults. The core question is whether teaching people to suppress their emotions is healthy.

You are really going to reject all of economics because you found two economists disagree in public?

On the occasions where statistical tests can be applied, they usually indicate such knowledge. And my intuitions agree.

FWIW, this "hard science" account has been rejected by the Philosophy of Science community since the 1950s. For details, an account I like is Larry Laudan's Progress and Its Problems. There are many others. Laudan's book was published in the 1970s; this whole wheeze is old.

"We often have only weak reasons to expect many common model assumptions. Nevertheless, we know lots, much embodied in knowing when which models are how useful."

How do you know that you know?

"it is the flattering story I was taught as a hard science student"

Really? I haven't found students of physics prone to this view. Rather, I've found it prevalent among practitioners of applied science, among engineers and physicians.

Just to stir some imagination, here are a few other kinds of reasoning that smart people do to increase their knowledge beyond intuition.

Sometimes they reason by analogy. They are trying to understand some system, and they compare it to some other system that they understand better. Good analogical reasoning involves a number of sub-steps. For example, brainstorming for a good system to analogize to is rather important; in fact, it is one of the more valuable ways a smart person can really prove their worth. As another example, it is helpful to actively and systematically look for possible sources of disanalogy; if the search fails, you have higher confidence in the analogy.

People use spot testing. If you test random parts of a system, and they are all firm, then you gain the ability to do something like a statistical inference that the whole system mostly consists of robust components. This is particularly relevant in a social context, where you are trying to understand something built by other people, but you don't know ahead of time what kind of work they did.
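The statistical inference behind spot testing can be made concrete: if n randomly chosen components all pass, you can bound the plausible defect rate. A minimal sketch, where the function name and the 95% default are my own choices:

```python
def defect_rate_upper_bound(n_passed, confidence=0.95):
    """If n random spot checks all pass, return an upper bound on the
    underlying per-component failure rate p at the given confidence,
    by solving (1 - p) ** n = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / n_passed)

# e.g. 30 randomly chosen components checked, all firm:
print(defect_rate_upper_bound(30))  # about 0.095, close to the classic 3/n "rule of three"
```

This assumes checks are independent draws from the whole system; correlated components weaken the bound.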

People reason about isolation of effects. You can make more powerful inferences if you can eliminate components of the system under study as irrelevant. For example, if you are trying to determine the braking characteristics of a particular car, you can probably ignore the chassis' effect on air resistance. You can definitely ignore the radio.

People modify the problem. For example, if you aren't sure the radio isn't affecting the transmission, you might disconnect the radio. If you are trying to compare two computer systems, you might install equivalent versions of most of the software, to eliminate sources of disanalogy. If you are trying to understand a software anomaly, it helps if you can remove parts of the program without affecting the anomaly.

People recode their data. That is, they do a first layer of analysis to modify a data set into something more manageable, and then do further analysis on the resulting data set. For example, one person might recode the income data into two broad categories of "below $50k income" and "above $50k income". Another might recode data into a savers/spenders index, where 1.0 means a savaholic and 0.0 means they spend every penny the moment they get it.
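Both recodings in that example take only a few lines; the survey rows, the $50k cutoff, and the 0.5 saving-fraction ceiling below are hypothetical illustrations:

```python
# Hypothetical survey rows: (annual income, fraction of income saved)
rows = [(32000, 0.02), (75000, 0.30), (48000, 0.10), (120000, 0.55)]

# Recoding 1: collapse income into two broad categories.
income_bracket = ["below $50k" if income < 50000 else "above $50k"
                  for income, _ in rows]

# Recoding 2: a 0.0-1.0 savers/spenders index, here just the saved
# fraction scaled against an assumed 0.5 ceiling (a real index would
# combine more variables).
saver_index = [min(saved / 0.5, 1.0) for _, saved in rows]

print(income_bracket)
print(saver_index)
```

Further analysis then runs on `income_bracket` and `saver_index` instead of the raw rows.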

Finally, people measure. "Measurement" includes a number of techniques for increasing knowledge without chaining together prior knowledge. There are too many kinds of measurement to even enumerate them without boring people, but to stir the imagination: written surveys, data from supermarket loyalty cards, interviews, manual logging procedures, and spyware.

Bayesian stat is a fine theoretical account of what would make any method use data to track truth. But formal Bayesian statistics has only limited direct applicability as a method of practice that people can learn in the process of getting good at a discipline or field.
