Category Archives: Overconfidence

Overconfidence & Paternalism

Paul Graham tries to explain paternalism: 

Parents know they’ve concealed the facts about sex, and many at some point sit their kids down and explain more. But few tell their kids about the differences between the real world and the cocoon they grew up in. Combine this with the confidence parents try to instill in their kids, and every year you get a new crop of 18 year olds who think they know how to run the world.

Don’t all 18 year olds think they know how to run the world? Actually this seems to be a recent innovation, no more than about 100 years old. In preindustrial times teenage kids were junior members of the adult world and comparatively well aware of their shortcomings. They could see they weren’t as strong or skillful as the village smith. In past times people lied to kids about some things more than we do now, but the lies implicit in an artificial, protected environment are a recent invention. Like a lot of new inventions, the rich got this first. Children of kings and great magnates were the first to grow up out of touch with the world. Suburbia means half the population can live like kings in that respect.  …

Continue reading "Overconfidence & Paternalism" »


Science Doesn’t Trust Your Rationality

Followup to: The Dilemma: Science or Bayes?

Scott Aaronson suggests that Many-Worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:

Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).

Now there’s an analogy that would never have occurred to me.

I’ve previously argued that Science rejects Many-Worlds but Bayes accepts it.  (Here, "Science" is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)

It furthermore seems to me that there is a deep analogy between (small-‘l’) libertarianism and Science:

  1. Both are based on a pragmatic distrust of reasonable-sounding arguments.
  2. Both try to build systems that are more trustworthy than the people in them.
  3. Both accept that people are flawed, and try to harness their flaws to power the system.

Continue reading "Science Doesn’t Trust Your Rationality" »


Sincerity Is Overrated

Consider choices like:

  • Do I push folks at my large company to hire my son?
  • Do I spend college money from my parents to pursue an acting career?
  • Do I cut open this patient to try my new surgical technique?

Such choices might be justified if, for example,

  • My son is a really good fit for the job opening.
  • I have an excellent chance to succeed in acting.
  • This is a very promising surgical technique.

But when am I justified in having such beliefs?  Most people think they are justified in acting on a belief if that belief is "sincere."  And by "sincere" they mean they are not conscious of just pretending to believe.  When they go to the shelf in their mind where that belief is supposed to sit, this is what they find.  And they don’t remember anything illicit about how that belief got there.

But sincerity is way too low a standard!  Since humans have an enormous tendency toward self-deception, wishful thinking, and so on, we are clearly "sincerely" biased in many ways.  So to be justified in acting on a belief, you must have tried to identify and overcome relevant biases.  Furthermore, your efforts should be proportionate to the magnitude of the actions being considered, and to the magnitude of the biases that could distort your beliefs.  For important actions where biases tend to be large, you must try very hard to consider what you might have seen and felt if the world were other than you think it is. 


Knowing your argumentative limitations, OR “one [rationalist’s] modus ponens is another’s modus tollens.”

Followup to: Who Told You Moral Questions Would be Easy?  Response to: Circular Altruism

At the most basic level (which is all we need for present purposes), an argument is nothing but a chain of dependence between two or more propositions.  We say something about the truth value of the set of propositions {P1…Pn}, and we assert that there’s something about {P1…Pn} such that if we’re right about the truth values of that set, we ought to believe something about the truth value of the set {Q1…Qn}. 

If we have that understanding of what it means to make an argument, then we can see that an argument doesn’t necessarily have any connection to the universe outside itself.  The utterance "1. all bleems are quathes, 2. the youiine is a bleem, 3. therefore, the youiine is a quathe" is a perfectly logically valid utterance, but it doesn’t refer to anything in the world — it doesn’t require us to change any beliefs.  The meaning of any argument is conditional on our extra-argument beliefs about the world.

One important use of this principle is reflected in the oft-quoted line "one man’s modus ponens is another man’s modus tollens."  Modus ponens is a classical form of argument: 1. A → B.  2. A.  3. ∴ B.  Modus tollens is this: 1. A → B.  2. ¬B.  3. ∴ ¬A.  Both are perfectly valid forms of argument!  (For those who aren’t familiar with the standard notation, ¬ indicates negation and ∴ means "therefore.")  Unless you have some particular reason outside the argument to believe either A or ¬B, you don’t know whether the claim A → B means that B is true, or that A isn’t true!
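A minimal truth-table check in Python (my addition, not part of the original post) makes this mechanical: the single premise A → B validates both inference patterns, but by itself settles neither A nor B.

    from itertools import product

    # "A -> B" is false only in the world where A is true and B is false.
    def implies(a, b):
        return (not a) or b

    # All worlds consistent with the single premise A -> B:
    worlds = [(a, b) for a, b in product([False, True], repeat=2) if implies(a, b)]
    # -> [(False, False), (False, True), (True, True)]

    # Modus ponens: in every consistent world where A holds, B holds.
    assert all(b for a, b in worlds if a)

    # Modus tollens: in every consistent world where B fails, A fails.
    assert all(not a for a, b in worlds if not b)

    # But the premise alone leaves (False, False) and (True, True) both open:
    # knowing only A -> B tells you neither that B is true nor that A is false.
    print(worlds)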

Why am I elucidating all this basic logic, which almost everyone reading this blog doubtless knows?  It’s a rhetorical tactic: I’m trying to make it salient, to bring it to the top of the cognitive stack, so that my next claim is more compelling.

And that claim is as follows:

Eliezer’s posts about the specks and the torture [1] [2], and the googolplex of people being tortured for a nanosecond, and so on, and so forth, tell you nothing about the truth of your intuitions.

Argument behind the fold…

Continue reading "Knowing your argumentative limitations, OR “one [rationalist’s] modus ponens is another’s modus tollens.”" »


CFO Overconfidence

A recent NBER paper:

We test whether top corporate executives are miscalibrated, and whether their miscalibration impacts investment behavior. Over six years, we collect a unique panel of nearly 7,000 observations of probability distributions provided by top financial executives regarding the stock market. Financial executives are miscalibrated: realized market returns are within the executives’ 80% confidence intervals only 38% of the time. We show that companies with overconfident CFOs use lower discount rates to value cash flows, and that they invest more, use more debt, are less likely to pay dividends, are more likely to repurchase shares, and they use proportionally more long-term, as opposed to short-term, debt.

It would be relatively easy to measure overconfidence in CFO candidates, and choose less overconfident ones.  Since this doesn’t happen, I suspect that CEOs, like bosses of software managers, prefer overconfident CFOs.
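Measuring this is indeed mechanically simple.  Here is a minimal sketch of the calibration test the paper describes, on made-up data (the field names and values are mine, not the paper's):

    # Each record: a CFO's stated 80% confidence interval for next year's
    # market return, paired with the return actually realized.
    forecasts = [
        {"low": -0.02, "high": 0.10, "realized": 0.14},
        {"low":  0.00, "high": 0.08, "realized": 0.03},
        {"low": -0.05, "high": 0.12, "realized": -0.21},
        # ... thousands more observations in the actual panel
    ]

    hits = sum(f["low"] <= f["realized"] <= f["high"] for f in forecasts)
    hit_rate = hits / len(forecasts)

    # Well-calibrated 80% intervals should contain the realized return about
    # 80% of the time; the paper reports a hit rate of only 38%.
    print(f"hit rate: {hit_rate:.0%}")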


Trust in Math

Followup to: Expecting Beauty

I was once reading a Robert Heinlein story – sadly I neglected to note down which story, but I do think it was a Heinlein – where one of the characters says something like, "Logic is a fine thing, but I have seen a perfectly logical proof that 2 = 1."  Authors are not to be confused with characters, but the line is voiced by one of Heinlein’s trustworthy father figures.  I find myself worried that Heinlein may have meant it.

The classic proof that 2 = 1 runs thus.  First, let x = y = 1.  Then:

  1. x = y
  2. x² = xy
  3. x² − y² = xy − y²
  4. (x + y)(x − y) = y(x − y)
  5. x + y = y
  6. 2 = 1

Now, you could look at that, and shrug, and say, "Well, logic doesn’t always work."

Or, if you felt that math had rightfully earned just a bit more credibility than that, over the last thirty thousand years, then you might suspect the flaw lay in your use of math, rather than Math Itself.

You might suspect that the proof was not, in fact, "perfectly logical".

The novice goes astray and says:  "The Art failed me."
The master goes astray and says:  "I failed my Art."
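If you want to see mechanically where the chain breaks (a spoiler for the continuation), here is a quick check: with x = y = 1, step 5 is obtained from step 4 by dividing both sides by x − y, which is zero.

    from fractions import Fraction

    x = y = Fraction(1)

    # Steps 1-4 are ordinary algebra and all hold for these values:
    assert x == y
    assert x**2 == x * y
    assert x**2 - y**2 == x * y - y**2
    assert (x + y) * (x - y) == y * (x - y)   # both sides equal 0

    # Step 5 cancels (x - y) from both sides, i.e. divides by x - y = 0:
    try:
        (x + y) * (x - y) / (x - y)
    except ZeroDivisionError:
        print("step 5 divides by x - y = 0; the proof was not logical after all")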

Continue reading "Trust in Math" »


Guardians of Ayn Rand

Followup to: Every Cause Wants To Be A Cult, Guardians of the Truth

"For skeptics, the idea that reason can lead to a cult is absurd.  The characteristics of a cult are 180 degrees out of phase with reason.  But as I will demonstrate, not only can it happen, it has happened, and to a group that would have to be considered the unlikeliest cult in history.  It is a lesson in what happens when the truth becomes more important than the search for truth…"
                 — Michael Shermer, "The Unlikeliest Cult in History"

I think Michael Shermer is over-explaining Objectivism.  I’ll get around to amplifying on that.

Ayn Rand’s novels glorify technology, capitalism, individual defiance of the System, limited government, private property, selfishness.  Her ultimate fictional hero, John Galt, was (spoiler warning) a scientist who invented a new form of cheap renewable energy, but then refused to give it to the world, since the profits would only be stolen to prop up corrupt governments.

And then – somehow – it all turned into a moral and philosophical "closed system" with Ayn Rand at the center.  The term "closed system" is not my own accusation; it’s the term the Ayn Rand Institute uses to describe Objectivism.  Objectivism is defined by the works of Ayn Rand.  Now that Rand is dead, Objectivism is closed.  If you disagree with Rand’s works in any respect, you cannot be an Objectivist.

Continue reading "Guardians of Ayn Rand" »


Affective Death Spirals

Followup to: The Affect Heuristic, The Halo Effect

Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts.  The phlogiston theory of chemistry could explain just about anything, so long as it didn’t have to predict it in advance.  And the more phenomena you use your favored theory to explain, the truer your favored theory seems – has it not been confirmed by these many observations?  As the theory seems truer, you will be more likely to question evidence that conflicts with it.  As the favored theory seems more general, you will seek to use it in more explanations.

If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that’s probably how they got started.

(Just keep an eye out, and you’ll observe much that seems to confirm this theory…)

This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.
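As a toy model of that cycle (my sketch; the update rule is an assumption, not anything from the post): suppose conflicting evidence gets questioned and dismissed in proportion to your current credence in the theory.  Then credence ratchets upward even when the evidence stream is perfectly neutral.

    import random

    random.seed(0)
    credence = 0.5                               # current belief in the favored theory
    for _ in range(1000):
        confirms = random.random() < 0.5         # evidence is actually 50/50
        if confirms:
            credence = min(0.99, credence + 0.02)   # counted as confirmation
        elif random.random() < credence:
            pass                                 # conflict questioned and dismissed
        else:
            credence = max(0.01, credence - 0.02)   # conflict admitted (rarely)

    print(round(credence, 2))  # ratchets toward 0.99 despite neutral evidence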

But it’s nothing compared to the death spiral that begins with a charge of positive affect – a thought that feels really good.

A new political system that can save the world.  A great leader, strong and noble and wise.  An amazing tonic that can cure upset stomachs and cancer.

Heck, why not go for all three?  A great cause needs a great leader.  A great leader should be able to brew up a magical tonic or two.

Continue reading "Affective Death Spirals" »


The fallacy of the one-sided bet (for example, risk, God, torture, and lottery tickets)

This entry by Eliezer struck me as an example of what I call the fallacy of the one-sided bet.  As a researcher and teacher in decision analysis, I’ve noticed that this form of argument has a lot of appeal as a source of paradoxes.  The key error is the framing of a situation as a no-lose (or no-win) scenario, formulating the problem in such a way that tradeoffs are not apparent.  Some examples:
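To make the hidden tradeoff concrete before the fold, here is the lottery-ticket example from the title worked as a quick sketch (my numbers, entirely made up; only the structure matters): the one-sided frame counts only the jackpot, and the tradeoff reappears the moment the ticket price is put back on the books.

    ticket_price = 2.00
    jackpot = 10_000_000
    p_win = 1 / 300_000_000

    # One-sided framing: count only the upside, and buying looks like free money.
    one_sided = p_win * jackpot                      # ~$0.03 of "pure upside"

    # Two-sided accounting restores the hidden tradeoff: the certain price paid.
    expected_value = p_win * jackpot - ticket_price  # ~ -$1.97 per ticket

    print(f"one-sided: ${one_sided:.2f}, two-sided: ${expected_value:.2f}")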

Continue reading "The fallacy of the one-sided bet (for example, risk, God, torture, and lottery tickets)" »


We Change Our Minds Less Often Than We Think

"Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another.  The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%."
       — Dale Griffin and Amos Tversky, "The Weighing of Evidence and the Determinants of Confidence."  (Cognitive Psychology, 24, pp. 411-435.)

When I first read the words above – on August 1st, 2003, at around 3 o’clock in the afternoon – it changed the way I thought.  I realized that once I could guess what my answer would be – once I could assign a higher probability to deciding one way than the other – then I had, in all probability, already decided.  We change our minds less often than we think.  And most of the time we become able to guess what our answer will be within half a second of hearing the question.
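The arithmetic behind the quoted figures, spelled out as a quick check (using only the numbers in the quote):

    n_respondents = 24
    mean_stated_confidence = 0.66  # average probability assigned to the predicted choice
    chose_as_predicted = 24 - 1    # only 1 of 24 chose the lower-probability option

    accuracy = chose_as_predicted / n_respondents  # 23/24 ≈ 0.958, reported as 96%
    print(f"stated: {mean_stated_confidence:.0%}, realized: {accuracy:.0%}")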

How swiftly that unnoticed moment passes, when we can’t yet guess what our answer will be; the tiny window of opportunity for intelligence to act.  In questions of choice, as in questions of fact.

Continue reading "We Change Our Minds Less Often Than We Think" »
