What Evidence Intuition?

Many complain that economists "over-simplify."  We do simplify of course; we are picky about what data "counts", and we prefer models with a few stark assumptions.  And other social analysts do offer more detailed social stories, relying on messier data like personal impressions.  But economists wonder: does their added detail get them closer to the truth? 

A new "experimental philosophy" movement is pushing philosophers to be more like economists, by relying less directly on personal intuition.  (See this manifesto, and this blog; I predict this will go far.) 

"Intuition" is when our subconscious mind suggests to us a conclusion (with a confidence level), but without as clearly explaining its reasoning.  Since we are intelligent creatures, with far more subconscious than conscious mental activity, these intuited conclusions do tend to correlate with truth, all else equal.  So the fact of an intuition for a conclusion can be evidence for that conclusion. 

Philosophers invoke intuitions rather promiscuously, however, seemingly as an all-purpose glue available to plug any hole in any argument.  This encourages them to build elaborate and detailed theories.  But many complain philosophers weigh their personal intuitions too heavily, implicitly assuming them to be widely shared or superior to others’ intuitions.  (I think my colleague Bryan Caplan, for example, relies too heavily on his personal strong intuitions for dualism and morality.)

Experimental philosophers say we should instead rely more on larger intuition datasets, and so they survey intuitions across wide multicultural pools. They have some surprising results.  For example, ordinary folks disagree with philosophers about what moral acts are "intentional":

Two thought-experiments … differed only in the moral significance of the action described:

(1) The vice-president of a company went to the chairman of the board and said, `We are thinking of starting a new program. It will help us increase profits, but it will also harm the environment.’  The chairman of the board answered, `I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.’  They started the new program. Sure enough, the environment was harmed.

(2) The vice-president of a company went to the chairman of the board and said, `We are thinking of starting a new program. It will help us increase profits, but it will also help the environment.’  The chairman of the board answered, `I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.’ They started the new program. Sure enough, the environment was helped.

… Most [ordinary] subjects (82%) considering the first thought-experiment (in which the action had negative moral qualities) indicated having the intuition that the action was intentional. By contrast, most subjects (77%) considering the second thought-experiment (in which the action had positive moral qualities) indicated having the intuition that the action was unintentional.

There are now two factions in experimental philosophy:

For proponents of the proper foundation view, the problem with standard philosophical practice is that proper care has not been given to determining just what are the intuitions that should be used as evidence for or against philosophical claims.  By contrast, for proponents of the restrictionist view, the problem with standard philosophical practice is that experimental evidence seems to point to the unsuitability of intuitions to serve as evidence at all. …

The restrictionist … advocates not the root and branch removal of all intuitions, but just the pruning away of some of the more poisoned philosophical branches. The peculiar and esoteric intuitions that are the philosopher’s stock-in-trade represent a fairly small portion of the entire human intuitive capacity, and it hardly impugns the latter if the former turn out to be untenable.  (Contending that squinting in dim light is a poor way to see the world accurately would, likewise, not be to cast doubt on perception on the whole.)

So the epistemologists’ responses give us, at best, that intuitions are on average reliable, when to save the armchair practice from the restrictionists, what they need to offer is some reason to think that philosophers’ intuitions about typical philosophical hypothetical cases are reliable.  Importantly, the restrictionists’ experiments do not merely suggest that philosophers’ intuitions are fallible; they also reveal that fallibility in places that armchair philosophers were not at all expecting to find it.

My only philosophy publication so far similarly argued that philosophers naively rely too directly on error-prone health-care-specific intuitions.  From the analysis in my paper, I predict that once philosophers realize their intuition data is less reliable than they thought, they will adopt simpler theories, as economists do.  Some key questions to address:

  • What does it say about a topic that our main evidence on it is intuitions, rather than more explicit data and arguments?
  • How does intuition reliability vary with topic and person, and how reliable are intuitions on this meta-topic?
  • How much does conformity, vs. info, make professional philosopher intuitions differ from amateur ones?


  • Thanks for the plug, Robin! I have added your interesting site to our blog roll.

  • Senthil

    What do you mean when you say ‘health-care-specific intuitions’?

    Would philosophers have simpler theories, or fewer theories too? How do you imagine a philosophical theory would look if they realized that intuition data is less reliable? That is, do you have any instance in mind of how some theory, any theory, is viewed currently, and how it would become simpler, like an economic theory?

  • Senthil, bioethics is full of claims that different moral rules apply to health and medicine, versus say food or warmth. It is simpler to try to give each person what they want, than to try specifically to give them more health relative to other things.

  • I have also been irked by Caplan’s reliance on intuition. He does say introspection is one area he agrees with the Austrians on. At his webpage Mario Rizzo says he wants to shift economics away from mathematics and toward philosophy, so I wonder if he thinks X-phi is a good thing in bringing the disciplines closer together, or bad in that it chips away at what is good in philosophy.

    Before on this blog I have discussed whether a machine could calculate the answer to ethical problems. Could a machine generate intuitions in response to philosophical queries? One definition of intuition given is a conclusion from the subconscious without evident reasoning. The working of most machines is usually known at least to the designer, so perhaps machines by definition couldn’t have intuitions. I don’t think they could answer the sort of epistemological questions posed in the pdf (whether someone “really knows” or “just believes”). It seems like an issue of language. Before I have excluded normative statements from the class of statements which have truth value, but I think these sorts of things could be included as well. They do not seem to alter our expected experience at all, so it doesn’t really matter what the answer to the question is.

  • Caplan’s reliance on intuition is, I think, essential, but I’m not sure he reduces it to simply being presented with a thought without its reasoning. Rather, I and perhaps he think of intuition as irreducible. You intuit, and there’s the data (with some probability of truth). It’s not that the reasoning isn’t clear; it’s that the reasoning isn’t there.

    Now, perhaps we should rely on a broader base of intuitions. But what evidence is there that such a pool will be more accurate than my singular intuition? Are you relying on some sort of meta-intuition to make that claim?

  • Scott, how can you be so confident that our subconscious minds have no reasoning behind the conclusions they offer us? This seems to reject everything we know about cognitive science.

  • The major problem with this restrictionist philosophical method is that you end up being a Procrustes. Focusing on one part of the data to the exclusion of other parts means that you end up with a half truth. The cost/benefit is this: the more object knowledge you have, the less subject knowledge; the more analysis, the less synthesis; the more quality, the less quantity.

    For example, you can reduce music down to its objective elements. You can say that this song has this frequency and then this one and then the next. But by doing this you kill the existential aspect of the song. The subjective element is what makes music, music.

    Søren Kierkegaard talked about how the more objective a fact is, the less existential personal relevance the fact has, and vice versa. The fact that 2+2=4 is highly objective, but it has little meaning to me on an existential level; at the other extreme, my eternal salvation has a lot of existential importance to me, but it is unprovable. For a philosopher, sacrificing the intuitive (existential data) comes at too high a cost. A philosopher is a lover of wisdom; as such, a philosopher needs to look not for a wisdom to serve him/her, but for a wisdom to serve. The truths found by a restrictionist philosophical method are below man and are not worthy to serve.

  • Cure, no one is suggesting eliminating intuition as evidence.

  • Robin, doesn’t what you suggest come at some cost, even if it’s a matter of degree? I know my world view comes with the cost of having certainty.

  • Cure, almost any act has both costs and benefits.

  • Robin,

    My point was not that conclusions don’t have (at times) subconscious reasoning underlying them. Rather, my point was that what Caplan calls “intuition” and, in my experience, what many philosophers call “intuition”, is irreducible, not simply a conclusion without knowledge of how it was arrived at.

    I’m at fault here, because what you’re calling intuition is also an accepted definition of the word. Indeed, I suspect I’ve simply misread you by assuming one definition when you intended another.