Author Archives: Paul Gowder

The surprising power of rote cognition

Even if you're familiar with the ideas presented on this blog, it can be surprising just how strong the forces of habit and of rote cognition and behavior are.

One school of cognitive psychology that addresses biases describes "system 1" and "system 2" thinking: "system 1" is everyday automatic processing (deciding by intuition, relying on heuristics, and totally filled with biases), while "system 2" is thoughtful, careful consideration, logical and methodical. But this dichotomy seems inadequate, because we can slip into automatic cognitive patterns even when we are consciously trying to be careful.

A few examples from personal experience below the fold…

Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?

In the early days of this blog, I would pick fierce arguments with Robin about the no-disagreement hypothesis.  Lately, however, reflection on things like public reason has brought me toward agreement with Robin, or at least moderated my disagreement.  To see why, it’s perhaps useful to take a look at the newspapers:

the pope said the book “explained with great clarity” that “an interreligious dialogue in the strict sense of the word is not possible.” In theological terms, added the pope, “a true dialogue is not possible without putting one’s faith in parentheses.”

What are we to make of a statement like this?

Bias in Real Life: A Personal Story

All too often, I, like all too many Americans, will walk into a fast food joint.  As is well known, the fast food industry has, for a good number of years now, been pushing combination meals — a single order will purchase a main course (classically, a burger), a side order (fries), and a drink (a Coke).  As is also well known (pdf), people respond to cues like this in judging how much to consume — if something is packaged as a meal, we process it as a meal.  (In case that link doesn’t work, it’s to Brian Wansink & Koert van Ittersum, "Portion Size Me: Downsizing Our Consumption Norms," Journal of the American Dietetic Association 107:1103-1106 (2007).)

All this stuff is old news.  But I wouldn’t expect myself to fall for it (which is the point of this post: I did).  I’m a pretty cynical and suspicious guy, with a cynicism and suspicion that rise to almost downright paranoia when it comes to marketing.  (I’ve been known to heavily discount a health threat the moment someone starts selling a product to protect against it, for example.)  I flatter myself by thinking I’m somewhat intelligent.  And I’m well aware of the above research.

Yet every few weeks until today, I’d walk into a Taco Bell and order one of those combo meals.  This is so even though I often don’t particularly want one of the items on the combo — I’m usually fairly indifferent between, say, having a soda and just drinking water.  Since water’s free and soda isn’t, rationally, I should just drink water every time.  So why do I order the combo meal?  Well, it’s in a combo meal — presumably, it’s cheaper than buying the items separately.  I’m saving money!*  Or, at least, this is the rationalization my brain would supply, on a level just below consciousness except on those rare, fleeting, and unproductive moments when I’d bother to think before ordering.**

Recently, in order to live a little healthier, I made a firm decision to stop consuming sodas.  So it was actually easy to figure out how much I was "saving" by ordering the combo meal instead of all three items.***

Guess how much I saved.  Go ahead.  Guess.  In the comments, even, if you want (status points to the first person who gets it right).  Highlight the space between the brackets to see, after you’ve guessed. 

[Combo meal savings over ordering all three items separately: $0.08.  Extra combo meal cost over ordering just the two items I wanted: $1.61]
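
For the curious, the arithmetic is easy to sketch in a few lines of Python.  (The individual menu prices below are hypothetical, chosen only to reproduce the $0.08 and $1.61 differences; only those two numbers are real.)

```python
# Hypothetical a la carte prices, chosen to match the two differences above.
main, side, drink = 3.29, 1.89, 1.69
combo = 6.79  # hypothetical combo-meal price

savings_vs_all_three = (main + side + drink) - combo
extra_vs_two_i_wanted = combo - (main + side)

print(f"Combo savings over all three items: ${savings_vs_all_three:.2f}")       # $0.08
print(f"Extra cost over the two items I wanted: ${extra_vs_two_i_wanted:.2f}")  # $1.61
```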

I fell for this kind of stupidity even though I know the research.  Do you? 

I really think this bears emphasis.  I know this research really well, and I have known it for over a decade.  If they can get me, they can get anyone.  Everyone, even serious experts, even the guy who largely invented the study of these common biases, can fall prey to this kind of thing.  Dare you think you’re exempt? 

Do you think maybe this contributes to our obesity problem? Or do you still think that overeating can casually be described as a "free choice" for which people are personally responsible?  (While Taco Bell profits from selling unwanted sodas…)

Policy message: if even informed people can be suckered like this, maybe it’s time for a legislative solution.

The Problem at the Heart of Pascal’s Wager

It is a most painful position to a conscientious and cultivated mind to be drawn in contrary directions by the two noblest of all objects of pursuit — truth and the general good.  Such a conflict must inevitably produce a growing indifference to one or other of these objects, most probably to both.

– John Stuart Mill, from Utility of Religion

Much electronic ink has been spilled on this blog about Pascal’s wager.  Yet, I don’t think that the central issue, and one that relates directly to the mission of this blog, has been covered.  That issue is this: there’s a difference between the requirements for good (rational, justified) belief and the requirements for good (rational, prudent — not necessarily moral) action.

Presented most directly: good belief is supposed to be truth- and evidence-tracking.  It is not supposed to be consequence-tracking.  We call a belief rational to the extent that it is (appropriately) influenced by the evidence available to the believer, and thus maximizes our shot at getting the truth.  We call a belief less rational to the extent that it is influenced by other factors, including the consequences of holding that belief.  Thus, an atheist who changed his beliefs in response to the threat of torture from the Spanish Inquisition cannot be said to have followed a correct belief-formation process.

On the other hand, good action is supposed (modulo deontological moral theories) to be consequence-tracking.  The atheist who professes changed beliefs in response to the threat of torture from the Spanish Inquisition can be said to be acting prudently by making such a profession.

A modern gloss on Pascal’s wager might be understood less as an argument for the belief in God than as a challenge to that separation.  If, Modern-Pascal might say, we’re in an epistemic situation such that our evidence is in equipoise (always keeping in mind Daniel Griffin’s apt point that this is the situation presumed by Pascal’s argument), then we ought to take consequences into account in choosing our beliefs. 
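
To make Modern-Pascal’s challenge concrete, here’s a minimal sketch of the wager as a decision problem under equipoise.  The payoff numbers are purely illustrative finite stand-ins for Pascal’s infinite stakes; only the structure of the matrix matters.

```python
# Illustrative only: the wager as an expected-value calculation under equipoise.
p_god = 0.5  # evidence in equipoise, as Pascal's argument presumes

# Hypothetical payoffs for each (act, state) pair.
payoffs = {
    ("believe", "god"): 10**6,       # finite stand-in for an infinite reward
    ("believe", "no_god"): -1,       # small cost of observance
    ("disbelieve", "god"): -10**6,   # finite stand-in for an infinite loss
    ("disbelieve", "no_god"): 0,
}

def expected_value(act: str) -> float:
    return p_god * payoffs[(act, "god")] + (1 - p_god) * payoffs[(act, "no_god")]

for act in ("believe", "disbelieve"):
    print(act, expected_value(act))
# Consequence-tracking favors belief; whether belief-formation may ever
# track consequences this way is precisely the question at issue.
```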

There seem to be arguments for and against that position… 

If Self-Fulfilling Optimism is Wrong, I Don’t Wanna be Right

Often, I hear claims like the following: "too many people are cynical about electoral politics."  It’s hard to know just what to make of that sort of assertion.  For cynicism is most likely true about electoral politics, and, moreover, as a good little Bayesian, I should count the cynicism of just about everyone else as evidence to strengthen that belief. 

"But!," the anticynic might say, "cynicism is a self-fulfilling prophecy!  If we all believe that politics is run by crooks, we won’t demand better at the voting booth [for example, because we vote strategically for the least offensive guy we think can win rather than the one we trust]!  If enough people are optimistic, your optimism will be self-fulfilling too!" 

So imagine the following belief/payoff correspondences.  If you hold a true cynical belief, you get payoff A.  If you hold a false cynical belief (cynicism in a nice world), you get payoff B.  If you hold a true optimistic belief, you get payoff C, and if you hold a false optimistic belief, you get payoff D.  Suppose C>A>B>D (or C>A>D>B — it doesn’t matter).  And suppose that the world is nice if at least M people are optimistic (where N is the number of people in the world, and N>M>1) and nasty otherwise.

Anyone who knows game theory will immediately see that this world amounts to a coordination game with two Nash equilibria: everyone optimistic in a nice world and everyone cynical in a nasty world.  And the nice-world equilibrium has higher payoffs for all.
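
For the skeptical, here’s a quick sketch that checks the claim, using hypothetical payoffs satisfying C>A>B>D (the specific numbers, and the values of N and M, are illustrative):

```python
# Hypothetical payoffs satisfying C > A > B > D; all numbers illustrative.
A, B, C, D = 2.0, 0.5, 3.0, 0.0
N, M = 100, 60  # N people; the world is nice iff at least M are optimistic

def payoff(optimist: bool, total_optimists: int) -> float:
    nice = total_optimists >= M
    if optimist:
        return C if nice else D
    return B if nice else A

def is_equilibrium(total_optimists: int) -> bool:
    """True if no single person gains by switching beliefs unilaterally."""
    ok = True
    if total_optimists > 0:   # an optimist considers turning cynic
        ok = ok and payoff(True, total_optimists) >= payoff(False, total_optimists - 1)
    if total_optimists < N:   # a cynic considers turning optimist
        ok = ok and payoff(False, total_optimists) >= payoff(True, total_optimists + 1)
    return ok

print(is_equilibrium(N))  # True: everyone optimistic, nice world
print(is_equilibrium(0))  # True: everyone cynical, nasty world
```

Neither equilibrium can be escaped by any one person changing her mind alone, which is exactly what makes the nasty world so sticky.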

Now suppose we’re in a nasty world.  How do we get to the nice world?  It seems like we’d do best if someone came along and deceived at least M people into thinking we’re in the nice world already! 

This shows us that it is not only individually rational behavior that can be collectively suboptimal; individually rational (truth-maximizing) belief can be too.  Should we support demagoguery? 

I imagine the self-fulfilling false belief problem works in some individual cases too.  For example, suppose I have more success in dating if I’m confident.  Suppose I’m a person who has poor success in dating.  True beliefs for me are not confident ones, but I’ll do better if I adopt falsely confident beliefs, which will then be retroactively justified by the facts.  Should I engage in self-deception? 

Knowing your argumentative limitations, OR “one [rationalist’s] modus ponens is another’s modus tollens.”

Followup to: Who Told You Moral Questions Would be Easy?
Response to: Circular Altruism

At the most basic level (which is all we need for present purposes), an argument is nothing but a chain of dependence between two or more propositions.  We say something about the truth value of the set of propositions {P1…Pn}, and we assert that there’s something about {P1…Pn} such that if we’re right about the truth values of that set, we ought to believe something about the truth value of the set {Q1…Qn}. 

If we have that understanding of what it means to make an argument, then we can see that an argument doesn’t necessarily have any connection to the universe outside itself.  The utterance "1. all bleems are quathes, 2. the youiine is a bleem, 3. therefore, the youiine is a quathe" is a perfectly logically valid utterance, but it doesn’t refer to anything in the world — it doesn’t require us to change any beliefs.  The meaning of any argument is conditional on our extra-argument beliefs about the world.

One important use of this principle is reflected in the oft-quoted line "one man’s modus ponens is another man’s modus tollens."  Modus ponens is a classical form of argument: 1. A→B.  2. A.  3. ∴B.  Modus tollens is this: 1. A→B.  2. ¬B.  3. ∴¬A.  Both are perfectly valid forms of argument!  (For those who aren’t familiar with the standard notation, "¬" indicates negation and "∴" means "therefore.")  Unless you have some particular reason outside the argument to believe either A or ¬B, you don’t know whether the claim A→B means that B is true, or that A isn’t true! 
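
If you’d like the validity claim verified mechanically, here’s a brute-force sketch that checks both forms over every truth assignment, reading "→" as the material conditional:

```python
# Illustrative check that modus ponens and modus tollens are both valid:
# in every truth assignment where the premises hold, the conclusion holds.
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b  # material conditional A -> B

for a, b in product([True, False], repeat=2):
    # Modus ponens: from A -> B and A, conclude B.
    if implies(a, b) and a:
        assert b
    # Modus tollens: from A -> B and not-B, conclude not-A.
    if implies(a, b) and not b:
        assert not a

print("Both inference forms are valid on all assignments.")
```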

Why am I elucidating all this basic logic, which almost everyone reading this blog doubtless knows?  It’s a rhetorical tactic: I’m trying to make it salient, to bring it to the top of the cognitive stack, so that my next claim is more compelling.

And that claim is as follows:

Eliezer’s posts about the specks and the torture [1] [2], and the googolplex of people being tortured for a nanosecond, and so on, and so forth, tell you nothing about the truth of your intuitions.

Argument behind the fold…

Leading bias researcher turns out to be… biased, renounces result

A few days ago, Robin posted on the Edge’s annual question, which this year is about the changing of minds.  One of the participants (a social scientist who undoubtedly knows lots) is Daniel Kahneman.  It’s impossible to overstate Kahneman’s eminence.  He’s unquestionably one of a handful of top researchers ever, and arguably the most important one still living, on the subjects that make up the theme of this very blog.  In addition to being one of the inventors of the "heuristics and biases" research program, as well as prospect theory, he won the 2002 "Nobel Prize" in economics. 

Yet he, too, is not immune from motivated error.  A friend and colleague recently forwarded Kahneman’s Edge answer to me.  Apparently, Kahneman himself was so captivated by the lure of a neat theory to handle some difficulties in hedonic experience that he managed to misinterpret the first set of results!

Our hypothesis was that differences in life circumstances would have more impact on this measure than on life satisfaction.  We were so convinced that when we got our first batch of data, comparing teachers in top-rated schools to teachers in inferior schools, we actually misread the results as confirming our hypothesis.  In fact, they showed the opposite: the groups of teachers differed more in their work satisfaction than in their affective experience at work. This was the first of many such findings: income, marital status and education all influence experienced happiness less than satisfaction, and we could show that the difference is not a statistical artifact.  Measuring experienced happiness turned out to be interesting and useful, but not in the way we had expected.  We had simply been wrong. (Emphasis added)

Social scientists, beware.  If this can happen to Daniel Kahneman, it can happen to anyone.

Who Told You Moral Questions Would be Easy?

In addition to (alleged) scope insensitivity and "motivated continuation," I would like to suggest that the incredibly active discussion on the torture vs. specks post is also driven in part by a bias toward, well, closure; a bias toward determinate answers; a bias toward decision procedures that are supposed to yield an answer in every case, one that can be implemented by humans in the world in which we live and with the biological and social pressures that we face.

That’s the wonderful thing about the kinds of utilitarian intuitions that tell us, deep in our brains, that we can aggregate a lot of pain and pleasure of different kinds among different people and come up with some kind of scalar representing the net sum of "utility," to be compared to some other scalar for some other pattern of events in some possible world.  The scalars get compared to determine which world is morally better, and toward which world our efforts should be directed.  Those intuitions always generate a rationalizable answer.
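
To be concrete about what those intuitions presuppose, here’s the aggregation procedure as a sketch with made-up per-person utilities.  (That it’s so easy to write down is, of course, exactly what should make us suspicious.)

```python
# Made-up per-person utilities for two possible worlds; illustrative only.
world_a = {"alice": 10.0, "bob": -3.0, "carol": 2.5}
world_b = {"alice": 4.0, "bob": 4.0, "carol": 4.0}

def net_utility(world: dict[str, float]) -> float:
    # Collapse every person's pains and pleasures into one scalar.
    return sum(world.values())

# Compare scalars to decide which world is "morally better."
better = max([world_a, world_b], key=net_utility)
print(better, net_utility(better))
```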

Is More Information Always Better?

Sex-offender registration laws, known collectively and colloquially as “Megan’s Law,” frequently impose registration and public notification requirements on citizens who have been convicted of various sex crimes. The defenses of these laws usually hinge on a claimed high recidivism rate of sex offenders, coupled with the position that people in a community have a right to protect themselves and their children from risky neighbors.

Critics of the measures usually invoke what Robin has been calling an “unseen bias used to justify a seen bias.” They claim that people in a community are likely to discriminate against known convicted sex offenders along most or all dimensions of life, and that this discrimination, apart from being unjust on its own, may actually make it harder for those criminals to rehabilitate and become productive and law-abiding members of society. These problems, say critics, justify creating a “bias” by keeping the costs of acquiring this information high for the public.

These laws may offer an angle on the unseen bias/seen bias question more generally. It seems to me that the critics of Megan’s Law are right, for at least the following reasons:

Gnosis

In honor of Christmas, a religious question.

Richard and Jerry are Bayesians with common priors.  Richard is an atheist.  Jerry was an atheist, but then he had an experience which he believes gives him certain knowledge of the following proposition (LGE): “There is a God, and he loves me.”  Jerry experiences his knowledge as gnosis: a direct experience of divine grace that bestowed certain knowledge, period, not conditioned on anything else at all.  (Some flavors of Christianity and many other religions claim experiences like this, including prominently at least some forms of Buddhism.)  In addition to bestowing (what he takes to be) certain knowledge of LGE, Jerry’s gnosis greatly modifies his probability estimates for almost every proposition in his life.  For example, before the gnosis, the Christian Bible didn’t significantly impact his subjective probabilities for the propositions it is concerned with.  Now it counts very heavily.

Richard and Jerry are aware of a disagreement as to the probability of LGE, and also the truth of the various things in the Bible. They sit down to work it out.

It seems like the first step for Richard and Jerry is to merge their data.  Otherwise, Jerry has to violate one rule of rationality or another: since his gnosis is consistent only with the certainty of LGE, he can either discard plainly relevant data (irrational) or fail to reach agreement (irrational).  Richard does his best to replicate the actions that got the gnosis into Jerry’s head: he fasts, he meditates on the koans, he gives money to the televangelist.  But no matter what he does, Richard cannot get the experience that Jerry had.  He can get Jerry’s description of the experience, but Jerry insists that the description falls woefully short of the reality — it misses a qualitative aspect, the feeling of being “touched,” the bestowal of certain knowledge of the existence of a loving God.
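
One way to see the depth of the problem: under Bayes’ rule, a probability of exactly 1 is immune to any evidence whatsoever (this is Cromwell’s rule).  A minimal sketch, with hypothetical likelihoods:

```python
# Illustrative only: Bayesian updating with a dogmatic prior.
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

p = 1.0  # Jerry's post-gnosis P(LGE)
for _ in range(10):
    # Even evidence 99x more likely under not-LGE cannot move him.
    p = update(p, p_e_given_h=0.01, p_e_given_not_h=0.99)
print(p)  # still 1.0
```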

Is it in principle possible for Richard and Jerry to reach agreement on their disputed probabilities given a non-transmissible experience suggesting to Jerry that P(LGE)=1?
