Monthly Archives: August 2007

Guessing the Teacher’s Password

Followup to:  Fake Explanations

When I was young, I read popular physics books such as Richard Feynman’s QED: The Strange Theory of Light and Matter.  I knew that light was waves, sound was waves, matter was waves.  I took pride in my scientific literacy, when I was nine years old.

When I was older, and I began to read the Feynman Lectures on Physics, I ran across a gem called "the wave equation".  I could follow the equation’s derivation, but, looking back, I couldn’t see its truth at a glance.  So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious.  And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word "wave" meant to a physicist.
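
For reference, the equation in question, in its simplest one-dimensional form for a wave profile u(x, t) moving at speed c, is:

    \[ \frac{\partial^2 u}{\partial t^2} = c^2 \, \frac{\partial^2 u}{\partial x^2} \]

One way to see its truth at a glance: the acceleration of each point of the medium is proportional to the local curvature of the wave's shape, and any profile of the form f(x − ct), sliding along unchanged at speed c, satisfies the equation automatically.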

There is an instinctive tendency to think that if a physicist says "light is made of waves", and the teacher says "What is light made of?", and the student says "Waves!", the student has made a true statement.  That’s only fair, right?  We accept "waves" as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student?  Surely, the answer "Waves!" is either true or false, right? 

Continue reading "Guessing the Teacher’s Password" »

Nerds as Bad Connivers

Giving a keynote talk at a software conference recently made me reflect on the essence of "nerds." 

Assume that nerds essentially have "Autism light," i.e., high intelligence and low social skills.  If so, then while nerds can reason and sympathize well, they are less able to read the acts and expressions of others in order to infer their states of mind.  Nerd social behavior could then be as strategic or altruistic as anyone else's, but it couldn't depend as subtly on reading social cues.

Distinguish two key social effects of these lower social skills: effects on cooperation and on conniving.  If low social skills make it harder for nerds to cooperate, then we should find that groups of nerds are less able to coordinate with each other to achieve common ends, such as managing large projects together.  There may be an effect here, but if so it seems weak; nerds cooperate pretty effectively all the time on large software and other engineering projects.

The other social effect is on Machiavellian conniving.  Nerds should be worse at judging which coalition to join when, which associates may betray them or have done so, when and how to betray associates, what lies to tell, what threats will be credible and appropriate, and so on.  These low conniving skills should make nerds less attractive as coalition partners, at least for helping each coalition deal with other coalitions.  It seems pretty obvious to me that there is a large effect here. 

Now compare the social versus the private costs of these social skill deficits.   While a reduced ability to cooperate might hurt society even more than it hurt the nerd, a reduced ability to connive should hurt the nerd more than it hurts society.  Poorly cooperating nerds would tax society, giving a reason to shun nerds, but poorly conniving nerds would mainly be preyed upon by those with better social skills, and be victims worthy of social sympathy.  Spouses could more easily get away with cheating on nerds, and business partners could more easily get away with reneging on implicit understandings. 

If, as it seems to me, nerd social handicaps reduce nerd abilities to connive far more than their abilities to cooperate, then people will try too hard to avoid becoming exploitable nerds, relative to a social optimum.  If so, we have too few nerds, and all else equal we should want to subsidize nerds, to get more of them.

Why do corporations buy insurance?

Yesterday I wondered:

Why do corporations buy insurance for fire damage and such?  It seems to me that maybe they oughtn't, since the cost of insurance is greater than the expected payouts (due to administrative costs, asymmetric information, moral hazard, etc.).  Investors should presumably prefer corporations to be pure bets, and reduce risk and volatility by holding suitably diversified portfolios.

Today my colleague Peter Taylor, who worked in the insurance industry for many years, replied (reproduced here with permission):

Corporations certainly do buy insurance against fire and very good value it proves to be for them I must say when a large-scale fire does occur.  Your argument was adopted by some large corporations going "self-insured" or creating their own "captives" but generally it takes one large loss and they are back in the insurance market.  Moreover, the argument for self-insurance can be about saving a few pennies off expenses rather than assessing the real risk – a recent example was Hull Council deciding to self-insure with its own fund against flood rather than pay the market price – underestimating the losses by an order of magnitude.  The reversion to the insurance market is partly to do with shareholders' wish for stable results as well as their reluctance to accept bad luck.  Shareholders don't seem to accept that accidents/fires/whatever happen and blame the management (Napoleon's unlucky generals) so from a management point of view it is much easier to buy the insurance year on year and avoid getting caned when a loss does occur.

I’m still not sure I completely understand why insurance is bought. It might be that shareholders are biased (which seems to be what Peter suggests).  If so, is this a recognized failing? Do sophisticated institutional investors also prefer that the companies they own stock in buy fire insurance?
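
A toy simulation can make the original expected-value argument concrete.  All numbers below (fire probability, loss size, the insurer's 25% loading, baseline profit) are invented purely for illustration: a single insured firm trades expected profit for zero variance, while a diversified shareholder gets nearly the same variance reduction for free.

    import numpy as np

    rng = np.random.default_rng(0)
    n_firms, n_years = 1000, 10000
    p_fire, fire_loss = 0.01, 50.0        # hypothetical: 1% annual chance of a 50-unit loss
    premium = 1.25 * p_fire * fire_loss   # insurer charges a 25% loading (also hypothetical)
    base_profit = 10.0

    fires = rng.random((n_years, n_firms)) < p_fire
    uninsured = base_profit - fire_loss * fires   # per-firm annual profit, no insurance
    insured_mean = base_profit - premium          # insured profit is a constant

    print("single uninsured firm: mean %.2f, std %.2f"
          % (uninsured[:, 0].mean(), uninsured[:, 0].std()))
    print("single insured firm:   mean %.2f, std 0.00" % insured_mean)

    portfolio = uninsured.mean(axis=1)            # equal-weight portfolio of all firms
    print("diversified portfolio: mean %.2f, std %.2f"
          % (portfolio.mean(), portfolio.std()))

On these made-up numbers the portfolio keeps the extra expected profit that the loading would eat, with volatility already near zero; the puzzle is why managers buy insurance anyway, and Peter's answer points at shareholder reactions rather than expected value.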

Continue reading "Why do corporations buy insurance?" »

Fake Explanations

Once upon a time, there was an instructor who taught physics students.  One day she called them into her class, and showed them a wide, square plate of metal, next to a hot radiator.  The students each put their hand on the plate, and found the side next to the radiator cool, and the distant side warm.  And the instructor said, Why do you think this happens?  Some students guessed convection of air currents, and others guessed strange metals in the plate.  They devised many creative explanations, none stooping so low as to say "I don’t know" or "This seems impossible."

And the answer was that before the students entered the room, the instructor turned the plate around.

Consider the student who frantically stammers, "Eh, maybe because of the heat conduction and so?"  I ask: is this answer a proper belief?  The words are easily enough professed – said in a loud, emphatic voice.  But do the words actually control anticipation?

Ponder that innocent little phrase, "because of", which comes before "heat conduction".  Ponder some of the other things we could put after it.  We could say, for example, "Because of phlogiston", or "Because of magic."

Continue reading "Fake Explanations" »

Media Risk Bias Feedback

Recently a friend mentioned that he was concerned about health effects from wifi. I pointed out that this was likely an overblown concern, fed by media echoes of a scaremongering BBC Panorama program, and pointed him at the coverage on Ben Goldacre's blog Bad Science for a thorough takedown of the whole issue.

To my surprise he came back more worried than ever. He had watched the program on the Bad Science page, but not looked very much at the damning criticism surrounding it. After all, a warning is much more salient than a critique. My friend is highly intelligent and careful about his biases, yet fell for this one.

There exists a feedback loop in cases like this. The public is concerned about a possible health threat (electromagnetic emissions, aspartame, GMOs) and demands that the potential threat be evaluated. Funding appears and researchers evaluate the threat. Their findings are reported back through the media to the public, who update their risk estimates.

In an ideal world the end result is that everybody gets better estimates. But this process very easily introduces bias: the initial concern will determine where the money goes, so issues the public is concerned about will get more funding regardless of where the real risks are. The media reporting will also introduce bias, since the media favour newsworthy findings, and a risk tends to cause greater interest than a report of no risk (or the arrival of a review of the state of the knowledge). Hence studies warning of a risk will be overreported compared to studies downplaying it, and this will lead to a biased impression of the total risk. Finally, the public will have an availability bias that makes people take note of reported risks more than reported non-risks, and this leads to further concern and demands for investigation.

Note that I leave out publication bias and funding bias here. There may also be a feedback from the public to the media, making the media report things they estimate the public will want to hear about. These factors of course muddy things further in real life, but they mostly seem to reinforce the feedback, not counter it.
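
A minimal sketch of the reporting step of that loop, with invented parameters (the "five times more likely to be reported" filter is made up for illustration): even when the true effect is exactly zero, selectively amplifying alarming findings shifts the picture the public sees.

    import numpy as np

    rng = np.random.default_rng(1)
    true_risk = 0.0                                    # assume the exposure is actually harmless
    estimates = rng.normal(true_risk, 1.0, size=200)   # noisy study results around the truth

    # Hypothetical media filter: alarming results are five times more likely
    # to be reported than reassuring ones.
    report_prob = np.where(estimates > 0, 0.5, 0.1)
    reported = estimates[rng.random(200) < report_prob]

    print("mean effect across all studies:   %+.3f" % estimates.mean())
    print("mean effect across reported ones: %+.3f" % reported.mean())

The reported mean comes out well above zero; add the availability bias on the consumer side and the funding response, and the loop described above closes.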

Continue reading "Media Risk Bias Feedback" »

Irrational Investment Disagreement

The Spring Journal of Economic Perspectives reviews how many investment puzzles can be explained by irrational disagreement:

One should not be able to forecast a stock’s return with anything other than … riskiness … Yet … a large catalog of variables with no apparent connection to risk have been shown to forecast stock returns, …. stocks that have had unusually high past returns or good earnings news to continue to deliver relatively strong returns over the subsequent six to twelve months … "glamour" stocks with high ratios of market value to earnings, cashflows or book value to deliver weak returns over the subsequent several years … many of the most interesting patterns in prices and returns are tightly linked to movements in volume …

We … argue in favor of… "disagreement" models. … encompassing … the following underlying mechanisms: i) gradual information flow; ii) limited attention; and iii) heterogeneous priors. … this class of models is at its heart about the importance of differences in the beliefs of investors. …

Gradual information flow by itself can be entirely consistent with a rational model … What is also required … is that … investors do not fully take into account the fact that they may be at an informational disadvantage … limited attention needs to be combined with the assumption that … when trading with others, they do not adjust for the fact that they are basing their valuations on only a subset of the relevant information. … one needs to combine heterogeneous priors with an assumption that the investors do not fully update their beliefs based on each other.
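
A bare-bones numeric sketch of that last mechanism may help (all values here are invented): two traders with normal priors of different means see the same public signal and each updates by Bayes' rule, but neither conditions on the other's beliefs, so disagreement, and with it a motive to trade, survives the news.

    def posterior_mean(prior_mean, prior_var, signal, signal_var):
        """Normal-normal Bayesian update of the mean estimate."""
        w = prior_var / (prior_var + signal_var)
        return prior_mean + w * (signal - prior_mean)

    signal = 11.0   # shared public information about the asset's value
    bear = posterior_mean(10.0, 4.0, signal, 1.0)   # -> 10.8
    bull = posterior_mean(12.0, 4.0, signal, 1.0)   # -> 11.2
    # The gap narrows from 2.0 to 0.4 but never closes; the bull still buys
    # from the bear at any price between their valuations, generating volume.

Traders with a common prior would reach the same posterior and, by standard no-trade arguments, have no reason to bet against each other; heterogeneous priors plus incomplete mutual updating is what keeps the volume flowing.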

Is Molecular Nanotechnology “Scientific”?

Prerequisite / Read this first:  Scientific Evidence, Legal Evidence, Rational Evidence

Consider the statement "It is physically possible to construct diamondoid nanomachines which repair biological cells."  Some people will tell you that molecular nanotechnology is "pseudoscience" because it has not been verified by experiment – no one has ever seen a nanofactory, so how can believing in its possibility be scientific?

Drexler, I think, would reply that his extrapolations of diamondoid nanomachines are based on standard physics, which is to say, scientific generalizations. Therefore, if you say that nanomachines cannot work, you must be inventing new physics.  Or to put it more sharply:  If you say that a simulation of a molecular gear is inaccurate, if you claim that atoms thus configured would behave differently from depicted, then either you know a flaw in the simulation algorithm or you’re inventing your own laws of physics.
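
For concreteness, here is a minimal sketch of what the core of such a simulation algorithm looks like: a Lennard-Jones pair potential (a standard, if crude, stand-in for interatomic forces) integrated with velocity Verlet.  Drexler's actual analyses use far more detailed force fields; this only shows the shape of the computation the argument is about.

    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Pairwise Lennard-Jones forces, F = -dU/dr along each pair axis."""
        forces = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[i] - pos[j]
                r = np.linalg.norm(rij)
                mag = 24 * eps * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
                forces[i] += mag * rij / r
                forces[j] -= mag * rij / r
        return forces

    def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
        """Integrate Newton's equations; the physics enters only via lj_forces."""
        acc = lj_forces(pos) / mass
        for _ in range(steps):
            pos = pos + vel * dt + 0.5 * acc * dt**2
            new_acc = lj_forces(pos) / mass
            vel = vel + 0.5 * (acc + new_acc) * dt
            acc = new_acc
        return pos, vel

    # e.g. two atoms starting 1.5 sigma apart, at rest:
    pos, vel = velocity_verlet(np.array([[0.0, 0.0], [1.5, 0.0]]),
                               np.zeros((2, 2)))

To claim the output is wrong, you have to point at one of these idealizations (the potential, the integrator, the time step) and say where standard physics is being approximated too crudely; otherwise you are positing new physics.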

Continue reading "Is Molecular Nanotechnology “Scientific”?" »

Are More Complicated Revelations Less Probable?

Consider two possible situations, A and B. In situation A, we come across a person (call him "A") who makes the following claim: "I was abducted by aliens from the planet Alpha; they had green skin." In situation B, we come across a different person (call him "B") who tells us, "I was abducted by aliens from the planet Beta; they had blue skin, they liked to play ping-pong, they rode around on unicycles, and their favorite number was 7." In either situation, we are likely to assign low subjective probability to the abduction claim that we hear. But should we assign higher subjective probability to the claim in one situation than in the other?

Mindful of Occam’s razor, and careful to avoid the type of reasoning that leads to the conjunction fallacy, we might agree that A’s claim is, in itself, more probable, because it is less specific. However, we have to condition our probability assessment on the evidence that A or B actually made his claim. While B’s claim is less intrinsically likely, the hypothesis that B’s claim is true has strong explanatory power to account for why B made the specific statements he did. Thus, in the end it may not be so obvious whether we should believe A’s claim more in situation A than we believe B’s claim in situation B.
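
A toy Bayes calculation can make the tension explicit.  Every number below is invented purely to illustrate the structure of the update: the detailed claim starts with a lower prior, but if a false claimant is far less likely to confabulate that exact detailed story than a generic one, the likelihood ratio can more than compensate.

    # H = "the abduction claim is true"; E = "the person makes exactly this claim".
    def posterior_odds(prior, p_claim_if_true, p_claim_if_false):
        return (prior / (1 - prior)) * (p_claim_if_true / p_claim_if_false)

    # A's generic claim: higher prior, but also an easy story to confabulate.
    odds_a = posterior_odds(prior=1e-8, p_claim_if_true=0.5, p_claim_if_false=1e-3)

    # B's detailed claim: lower prior, but fabricating that exact story is rarer.
    odds_b = posterior_odds(prior=1e-10, p_claim_if_true=0.5, p_claim_if_false=1e-7)

    print(odds_a)   # ~5e-6
    print(odds_b)   # ~5e-4: on these numbers the detailed claim ends up more credible

Whether the reversal actually happens depends entirely on how fast the probability of a false claimant producing the story falls with its specificity, which is exactly the unobvious quantity the question turns on.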

Continue reading "Are More Complicated Revelations Less Probable?" »

Scientific Evidence, Legal Evidence, Rational Evidence

Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen.  As a rationalist, are you licensed to believe this statement?  Put it this way: if you go ahead and mess around with Wulky’s teenage daughter, I’d call you foolhardy.  Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence.

Our legal system will not imprison Wulky on the basis of the police commissioner’s statement.  It is not admissible as legal evidence.  Maybe if you locked up every person accused of being a crime boss by a police commissioner, you’d initially catch a lot of crime bosses, plus some people that a police commissioner didn’t like.  Power tends to corrupt: over time, you’d catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity) and more and more innocent victims (unrestrained power attracts corruption like honey attracts flies).

This does not mean that the police commissioner's statement is not rational evidence.  It still has a lopsided likelihood ratio, and you'd still be a fool to mess with Wulky's teenage daughter.  But on a social level, in pursuit of a social goal, we deliberately define "legal evidence" to include only particular kinds of evidence, such as the police commissioner's own observations on the night of April 4th.  All legal evidence should ideally be rational evidence, but not the other way around.  We impose special, strong, additional standards before we anoint rational evidence as "legal evidence".
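
In odds form, the update the first paragraph gestures at is just Bayes' theorem:

    \[ \frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)} \]

where H is "Wulky is the kingpin" and E is the commissioner's statement.  With invented numbers: if your prior odds were 1:100, and the commissioner is 500 times more likely to say this when it is true than when it is false, your posterior odds are 5:1.  The lopsided likelihood ratio does all the work; no legal or scientific machinery is involved.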

As I write this sentence at 8:33pm, Pacific time, on August 18th 2007, I am wearing white socks.  As a rationalist, are you licensed to believe the previous statement?  Yes.  Could I testify to it in court?  Yes.  Is it a scientific statement?  No, because there is no experiment you can perform yourself to verify it.  Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone’s authority.  Science is the publicly reproducible knowledge of humankind.

Like a court system, science as a social process is made up of fallible humans.  We want a protected pool of beliefs that are especially reliable.  And we want social rules that encourage the generation of such knowledge.  So we impose special, strong, additional standards before we canonize rational knowledge as "scientific knowledge", adding it to the protected belief pool.

Continue reading "Scientific Evidence, Legal Evidence, Rational Evidence" »

Pseudo-Criticism

Years of studying the history, philosophy, and sociology of science led me to conclude that the word "science" says little useful beyond "good research."  Peter Woit over at Not Even Wrong demonstrates, ranting against Bostrom's "pseudoscience" simulation argument.  His complaints: he doesn't see how to check it with data soon, it is easier than his research but has "dense thickets" of reasoning he finds too hard to follow, and it is not very connected to, and distracts attention from, his research areas:

On the pseudo-science front … beyond the edge of absurd, there’s today’s NYT Science Times section, which features a piece by John Tierney about the ideas of philosopher of science Nick Bostrom. … that there’s a significant probability that our universe is just a simulation being conducted by a more advanced civilization … Maybe we should be trying to entertain our creators so they will not turn off the simulation? …

The main reason I find myself getting annoyed with discussions of it here is that generally it’s pretty irrelevant to the science I’m concerned about.  Not only that, but a huge amount of damage is being done to that science by an increasingly large number of people who seem unable to tell the difference between science and science fiction. … people who want to do pseudo-science because it’s a lot easier than science will keep on justifying absurd, and inherently untestable speculation, claiming that "how do you know that a miracle won’t happen if we work on this? If we do, maybe we’ll find a real test!"

People who do this behave exactly the same way as every crackpot I’ve ever made the mistake of arguing with, trying to draw you into an endless investigation of the dense thickets of their idiocy. Arguing with someone who thinks the "simulation argument" is a scientific hypothesis is just this kind of waste of time.

… I don’t see what the problem is with "lumping Bostrom’s ideas in with religion".  They’re not science and have similar characteristics: grandiose speculation about the nature of the universe which some people enjoy discussing for one reason or another, but that is inherently untestable, and completely divorced from the actual very interesting things that we have learned about the universe through the scientific method. …

The thing which is likely to lead me sooner or later to have to give up and shut down this blog is … the large number of people who want to turn this into a discussion forum for crackpottery and various forms of pseudo-science.

Sure, there are possible ways you could define "science"; but few agree on which one, and when a proposed definition seems to conflict with "good research," people usually start looking for another definition.

Added: Woit responds with "neener neener." 
