Monthly Archives: January 2011

The Accidental Hypocrite

I have recently been exploring a Homo Hypocritus (man the sly rule bender) view of human nature, on which humans have humongous brains in order to conspire to evade social norms. I've known and respected Robert Kurzban for far longer than that, and so was excited to see his new book, Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Alas, while he has lots of thoughtful insight to offer along the way (the book is worth reading), Kurzban's main thesis seems to be that humans are accidental hypocrites: pretty much any evolved creatures with social norms would be hypocrites, because it is just too hard to be fully consistent:

The key to understanding our behavioral inconsistencies lies in understanding the mind’s design. The human mind consists of many specialized units designed by the process of evolution by natural selection. While these modules sometimes work together seamlessly, they don’t always, resulting in impossibly contradictory beliefs, vacillations between patience and impulsiveness, violations of our supposed moral principles, and overinflated views of ourselves. (more)

Since we have different modules to criticize the behavior of others and to choose our own actions, Kurzban says, we shouldn’t expect such modules to be coordinated, and so we just happen sometimes to be accidentally hypocritical.

The modules that cause behavior are different from the ones that cause people to voice agreement with moral rules. Because condemnation and conscience are caused by different modules, it is no wonder that speech and action often conflict. Taken together, these ideas make it clear that the modular design of the human mind guarantees hypocrisy. (p.205)

Nothing to see here, move along.

But our mind parts do in fact coordinate to a remarkable degree. Yes, we shouldn't expect perfect coordination, but our minds seem evolved in great intricate detail to manage the coordination between the norms we espouse and the actions we perform. In fact, I expect we have a great many mental modules devoted to exactly such functions.

If selection pressures had favored it, we could have evolved to match our norms and actions to a high degree of precision. So I think that our actual lower degree of matching is because most of our norm-act mismatches are functional. We are really quite (unconsciously) careful to monitor when our norm violations would be noticed and get us into trouble, versus when we have a good chance of getting away with them. We even coordinate carefully with our associates to arrange circumstances to be conducive to such violation. If this is true, most of our norm violations aren’t even remotely accidental.

Now Kurzban does admit that some hypocrisy is designed and functional. But he doesn’t think this is due to coordination; he thinks it mainly comes from a few hypocrisy modules:

The press secretary module might be designed to contain certain kinds of information that are useful for certain purposes, even if other modules have information that not only conflicts with this information, but is also more likely to be accurate. (p.86)

Some parts of the mind – some modules – are designed for functions other than being right because of certain strategic advantages. These modules produce propaganda, and, like the more traditional political propaganda, the information isn’t always exactly right.  (p.130)

In contrast, I think a large fraction of the human mind was designed together to facilitate the coordination of diverse behavior to achieve effective hypocrisy.

As Kurzban wrote a whole book to defend his point of view, we might wonder what arguments he offers against this opposing view. But alas, he offers none. He doesn't even acknowledge that there is another view. He simply takes the tone that anyone who disagrees with him must not understand that brains are made of modules, and so he should explain that point one more time, with yet another cute anecdote.

I have been collecting and presenting evidence for my view here at this blog, and I'll continue to do so. The more detailed and sophisticated our capacities for subtle self-benefiting hypocrisy seem, the less plausible becomes the view that hypocrisy is mostly accidental, or the result of a few small hypocrisy modules.


Dumb Farmers

Apparently the foraging life is more mentally demanding than the farming life. Brain size rose during the forager era, but fell during the farming era. During the industry era brain size has been rising again, which is yet another way we are returning to forager ways with increasing wealth.

Combined with the social brain theory, which says our brains are big in order to deal with complex social worlds, this suggests that farmer social worlds were less complex. Perhaps this is because stronger town social norms better discouraged hypocritical norm evasion.



Voting Is A Far Fest

In 6 experiments, … priming high power led to more abstract processing than did priming low power. (more)

To many of us it seems obvious that collective choice often goes very wrong. Yes there are many real and serious coordination problems, and yes collective choice institutions can and do often address such problems. Even so, democratic policy often seems quite dysfunctional.

There have been many attempts to account for democracy's dysfunction, but it has turned out to be hard to make much sense of such accounts via formal game-theoretic models of selfish rational agents. Bryan Caplan's celebrated book The Myth of the Rational Voter argues instead that voters are "rationally irrational," indulging in varied irrationalities regarding their political beliefs, because their very low chance of being pivotal (i.e., decisive) in an election means each voter's vote matters to them mainly for non-outcome reasons, such as personal identity, group loyalty, personality signaling, etc.
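
To get a feel for just how low that chance of being pivotal is, here is a minimal sketch in Python, assuming a toy model in which every other voter votes independently by a 50/50 coin flip (the case most favorable to pivotality); the numbers are illustrative, not Caplan's.

```python
from math import exp, lgamma, log

def pivotal_prob(n_others: int, p: float = 0.5) -> float:
    """Chance your vote breaks an exact tie among n_others other
    voters, each voting yes independently with probability p.
    Computed in log space to avoid overflow for big electorates."""
    assert n_others % 2 == 0, "a tie needs an even number of others"
    k = n_others // 2
    log_binom = lgamma(n_others + 1) - 2 * lgamma(k + 1)
    return exp(log_binom + k * log(p) + k * log(1 - p))

# Pivotality shrinks roughly like 0.8 / sqrt(n), even in this
# most-favorable 50/50 case; real elections are far more lopsided.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} other voters: P(pivotal) = {pivotal_prob(n):.1e}")
```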

While Caplan is insightful, Tyler Cowen once noted that democracy skeptics tend to distrust policy decisions made by randomly selected voters, even though such voters could be confident that their choices matter. If low pivotality were the whole problem, random decisive voters should do better than ordinary democracy. So if you'd also be wary of policy choices by random decisive voters, then you must think something else goes wrong with democracy besides voters' low chance of being pivotal. But what?

Longtime readers should not be surprised to hear my suggestion: even random pivotal voters tend to think in a far mental mode. When we make concrete choices about our own immediate lives, especially for our private consumption, we are in a pretty near mental mode.  Since near-far depends on distance in time, social distance, and unlikeliness, our mental mode becomes farther when our choices are about a more distant future, are about a wider scope of people, are seen by more people, are about more unlikely situations, or are unlikely to matter. So citizen votes in a democracy are pretty much a far fest (especially regarding unlikely far future techs).

Of course this analysis implies that autocratic rulers also think in a rather far mode, suggesting that choices by random voters wouldn't be much worse than those by a randomly selected king. Autocratic rulers selected via a vicious and ruthless contest for power might think in a more near mode, but in service of their own private ends, which might deviate greatly from ours. Ideally we'd select firm CEOs in part for their ability to maintain a nearer mental mode, while adhering to rules limiting their ability to exploit firms for personal gain.

Futarchy’s slogan, “vote on values, but bet on beliefs,” suggests that it might encourage collective choices based on more realistic near-mode evaluations of policy consequences, though voting on values would still retain a far fest of values.  I’m not sure how best to deal with that.


Define By Consequences

If corporations must be treated as “persons” for the purpose of campaign contributions – as the Supreme Court mandated last year in the infamous Citizens United decision – why shouldn’t they also enjoy “personal privacy”? The case threatens to weaken an important tool used to hold government and corporations accountable. … The court should not repeat that mistake by again allowing corporations to masquerade as people. (more)

People often argue about "definitions" as if the main issue were conceptual essences, or "cutting nature at its joints." But in fact the vast majority of definition disputes are really about social convention (including law). For example, I was interviewed recently on our changing "definition of death." I said we'd long had a perfectly sensible and timeless concept: death is when life is no longer possible. What people want instead is an easy-to-apply criterion, so they can know when it is socially acceptable to "give up" on someone, or to declare someone a "murderer." The timeless concept doesn't serve this role well, so they seek something else. (Which then limits cryonics.)

Similarly, we’ve long had a decent concept of “father,” the man from whom half of a kid’s DNA comes. But some say that since it is good for each kid to have the support of a man, we should declare a cuckolded husband to be the “father” of his wife’s kid. Debates about the definitions of “naked” or “porn” are similarly about social convenience.

The issue of calling firms “people” is also really about social consequences of doing so, even though many talk as if there was a “natural kind” out there to discover, if only we did enough conceptual analysis. I’ve argued that since the function of “free speech” is best served by “free hearing“, it shouldn’t matter who wants to talk. Unless we are willing to censor, we should let citizens hear any sources they desire.

Similarly, we should ask about the social functions served by privacy protections. Yes weaker privacy protections make it easier to hold firms accountable, but that applies to individual humans as well. And if stronger privacy protects folks more against abuse by governments or others, that benefit should apply to firms as well. Yes people may just have a direct preference for privacy, but such preferences may be weak, and perhaps people working at a firm feel similarly about the privacy of their firm.

For most definition disputes, pretending to resolve them via conceptual analysis just isn't very honest. It is more honest to argue about the desirability of the various consequences of alternate social conventions.


Beware “Consensus”?

If your doctor discourages you from seeking another opinion, you have even more reason to get one. (more)

Honest contrarians who expect reasonable outsiders to give their contrarian view more than normal credence should point to strong outside indicators that correlate enough with contrarians tending more to be right. (more)

Perhaps one strong outside indicator that a contrarian view is right is when the media goes out of its way to say that it is opposed by a “scientific consensus”! Ron Bailey in July:

Several [out of the eight media-declared] scientific consensuses before 1985 turned out to be wrong or exaggerated, e.g., saccharin, dietary fiber, fusion reactors, stratospheric ozone depletion, and even arguably acid rain and high-dose animal testing for carcinogenicity.

It seems to me that for folks with a contrarian bent, getting more and better studies like this should be a high priority.


Signal Mappers Decouple

Andrew Sullivan notes that Tim Lee argues that ems (whole brain emulations) just won’t work:

There’s no reason to think it will ever be possible to scan the human brain and create a functionally equivalent copy in software. Hanson … fails to grasp that the emulation of one computer by another is only possible because digital computers are the products of human designs, and are therefore inherently easier to emulate than natural systems. … Digital computers … were built by a human being based on a top-down specification that explicitly defines which details of their operation are important. The spec says exactly which aspects of the machine must be emulated and which aspects may be safely ignored. This matters because we don’t have anywhere close to enough hardware to model the physical characteristics of digital machines in detail. Rather, emulation involves re-implementing the mathematical model on which the original hardware was based. Because this model is mathematically precise, the original device can be perfectly replicated.

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. … Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time. … We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make accurate long-range forecasting inaccurate. … Each neuron is itself a complex biological system. I see no reason to think we’ll ever be able to reduce it to a mathematically tractable model. (more; Eli Dourado agrees; Alex Waller disagrees.)

Human brains were not designed by humans, but they were designed. Evolution has imposed huge selection pressures on brains over millions of years, to perform very particular functions. Yes, humans use more math to assist them than natural selection does. But we should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor. The weather is not a designed signal processor, so it does not achieve such decoupling. Let me explain.

A signal processor is designed to maintain some intended relation between particular inputs and outputs. All known signal processors are physical systems with vastly more degrees of freedom than are contained in the relevant inputs they seek to receive, the outputs they seek to send, or the sorts of dependencies between inputs and outputs they seek to maintain. So in order to manage its intended input-output relation, a signal processor simply must be designed to minimize the coupling between its designed input, output, and internal channels, and all of its other "extra" physical degrees of freedom. Really, just ask most any signal-processing hardware engineer.

Now sometimes random inputs can be useful in certain signal processing strategies, and this can be implemented by coupling certain parts of the system to most any random degrees of freedom. So signal processors don’t always want to minimize extra couplings. But this is a rare exception to the general need to decouple.

The bottom line is that to emulate a biological signal processor, one need only identify its key internal signal dimensions and their internal mappings – how input signals are mapped to output signals for each part of the system. These key dimensions are typically a tiny fraction of its physical degrees of freedom. Reproducing such dimensions and mappings with sufficient accuracy will reproduce the function of the system.
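
As a toy illustration of what "reproducing dimensions and mappings" can look like, here is a minimal sketch using a leaky integrate-and-fire neuron, a standard textbook simplification (my choice of model, not one the post names): three parameters stand in for a cell's signal-relevant dimensions, while all of its molecular degrees of freedom are simply ignored.

```python
import numpy as np

def lif_spikes(input_current, dt=1e-4, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: maps an input signal (a
    current trace) to an output signal (spike times). The mapping
    is the whole model; the cell's physical detail never appears."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += dt * (current - v / tau)   # leaky integration of input
        if v >= v_thresh:               # threshold crossing = spike
            spikes.append(i * dt)
            v = v_reset                 # reset after each spike
    return spikes

# A steady drive yields a regular spike train; only the
# input-to-spike mapping matters for downstream cells.
t = np.arange(0.0, 0.5, 1e-4)
print(len(lif_spikes(np.full_like(t, 80.0))), "spikes in 0.5 s")
```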

This is proven daily by the 200,000 people with artificial ears, and will be proven soon when artificial eyes are fielded. Artificial ears and eyes do not require a detailed weather-forecasting-like simulation of the vast complex physical systems that are our ears and eyes. Yes, such artificial organs do not exactly reproduce the input-output relations of their biological counterparts. I expect someone with one artificial ear and one real ear could tell the difference. But the reproduction is close enough to allow the artificial versions to perform most of the same practical functions.
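
For concreteness, the heart of such an artificial ear is a small signal-processing pipeline: a filter bank that collapses sound onto a handful of band envelopes, one per electrode. Below is a minimal sketch of a CIS-style (continuous interleaved sampling) strategy; it is a simplified illustration of the general approach, not any device's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def cis_envelopes(audio, fs, n_channels=8, f_lo=200.0, f_hi=7000.0):
    """Collapse sound onto n_channels band envelopes, CIS-style:
    the 'ear' is reduced to a few signal dimensions driving
    electrodes, with no simulation of cochlear physics."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced bands
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envs.append(np.abs(hilbert(sosfilt(sos, audio))))  # band envelope
    return np.array(envs)  # shape: (n_channels, n_samples)

fs = 16_000
t = np.arange(0.0, 0.2, 1.0 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)   # a 1 kHz test tone
env = cis_envelopes(tone, fs)
print("loudest channel:", env.mean(axis=1).argmax())  # the band holding 1 kHz
```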

We are confident that the number of relevant signal dimensions in a human brain is vastly smaller than its physical degrees of freedom. But we do not know just how many such dimensions there are. The more dimensions, the harder it will be to emulate them. But the fact that human brains continue to function with nearly the same effectiveness when they are whacked on the side of the head, or when flooded with various odd chemicals, shows they have been designed to decouple from most other physical brain dimensions.

The brain still functions reasonably well even when flooded with chemicals specifically designed to interfere with neurotransmitters, the key chemicals by which neurons send signals to each other! Yes, people on "drugs" don't function exactly the same, but with moderate drug levels people can still perform most of the functions required for most jobs.

Remember, my main claim is that whole brain emulation will let machines substitute for humans throughout the vast majority of the world economy. The equivalent of human brains on mild drugs should be plenty sufficient for this purpose – we don't need exact replicas.

Added 7p: Tim Lee responds:

Hanson seems to be making a different claim here than he made in his EconTalk interview. There his claim seemed to be that we didn’t need to understand how the brain works in any detail because we could simply scan a brain’s neurons and “port” them to a silicon substrate. Here, in contrast, he’s suggesting that we determine the brain’s “key internal signal dimensions and their internal mappings” and then build a digital system that replicates these higher-level functions. Which is to say we do need to understand how the brain works in some detail before we can duplicate it computationally. …

Biologists know a ton about proteins. … Yet despite all our knowledge, … general protein folding is believed to be computationally intractable. … My point is that even detailed micro-level knowledge of a system doesn’t necessarily give us the capacity to efficiently predict its macro-level behavior. … By the same token, even if we had a pristine brain scan and a detailed understanding of the micro-level properties of neurons, there’s no good reason to think that simulating the behavior of 100 billion neurons will ever be computationally tractable.

My claim is that, in order to create economically-sufficient substitutes for human workers, we don’t need to understand how the brain works beyond having decent models of each cell type as a signal processor. Like the weather, protein folding is not designed to process signals and so does not have the decoupling feature I describe above. Brain cells are designed to process signals in the brain, and so should have a much simplified description in signal processing terms. We already have pretty good signal-processing models of some cell types; we just need to do the same for all the other cell types.


Academics As Warriors

Why should you be (or buy) a warrior? Wouldn't the world be better off if there were no warriors, or even could be no warriors? Yes, maybe we'd be better off if good property rights would just enforce themselves. But given that there are already other warriors, it can make sense for you to be (or buy) a warrior, to defend yourself against other warriors. Yes, there are some positive side effects, such as increased technical innovation in war-tech related areas. But mostly one wars to block opposing war.

Why should you be (or buy) an academic, such as a philosopher or economist?  It seems to me that often the main reason to hire or be an academic is to defend against other academics.

Consider philosophy.  Yes human thinking is often sloppy, with sloppy categories and circular arguments. But mostly this doesn’t cause that many problems. What does go wrong is that some people specialize in noticing such sloppiness, and then using it to persuade us of particular conclusions.  When philosophers ridicule a particular sloppy argument, they shame the conclusion that argument had supported, which is then taken as supporting whatever is framed as the obvious alternative conclusion.

For example, imagine you thought that the conclusions of scientists were reliable because they followed a "scientific method." This creates an opening for a philosopher to point out that there really is no coherent scientific method. Most scientists don't actually follow most of the supposed scientific methods, and different sciences follow quite different methods. You might then be tempted to conclude that the conclusions of scientists are not reliable at all.

Yes, that conclusion doesn't directly follow from the mere fact that the reliability of science had been supported by sloppy arguments. And yet, all else equal, the fact that the best argument for something isn't as good as you'd expected counts as evidence against it. If one side has stronger-looking arguments than the other side, that seems to support the first side. Which is why all sides need to hire philosophers to find support, and to ridicule sloppy opposing arguments.

Similarly, often the main reason to hire or be an economist is to defend against other economists. It is bad for your side if the economic arguments supporting it seem sloppy, shallow and unsophisticated relative to the arguments from the other side.  Each side needs to hire economists to offer supporting arguments, just to stay in place.

I'm not saying that philosophers' or economists' efforts never make us all better off; I'm just saying there is more of a counteracting war effect than many realize. Much of the waste of academia is status seeking – some patrons fund academics in order to raise their status relative to others. And another big chunk is due to partisans recruiting academics to war on their side of common divides.


Against Voter Foresight

No tech is created unless someone imagines it. But how many imagine it, and for how long? Some techs are heralded decades in advance, with wide public discussion on possible implications. Other techs are only imagined by a few folks just before they are introduced. You might think it obvious that humanity does better when techs are imagined and widely discussed well ahead of time, but I have my doubts.

A good indicator that you think someone is rather irrational on a topic is: you are reluctant to give them more info on it. When someone's thoughts are especially messed up, you may well think they'd be better off not knowing more about it. They "can't handle the truth," you think. For example, if someone were especially irrational regarding an ex-lover, you might prefer they not hear any news about this ex-lover. Out of sight, out of mind, is what you'd be hoping for.

Unfortunately, my best guess is that public opinion is this messed up regarding techs that won't appear for decades. Typically, when a public debate begins decades in advance of a potential new tech, it becomes a far-minded symbolic battleground, where folks express grand positions on family values, materialism, inequality, nationalism, etc. The net effect is usually to inhibit the useful application of such techs. In contrast, when a tech appears mostly out of the blue, people tend to focus on whether they'd actually like to use it now.

For example, the pill and the web were both largely unheralded, and were thus quickly adopted and integrated into our lives. But if folks had seen thirty years in advance how the pill would change sexual practices, or how easily folks would give up privacy for web access, such techs might have been blocked or more heavily regulated, to our detriment.

IVF, genetic engineering, and nanotech, in contrast, were hotly debated well in advance of their feasibility. Such debates were often framed symbolically, in ways quite at odds with typical practical applications.

Yes new techs can introduce market failures, and yes with foresight and warning a rational public could mitigate such failures, to its overall benefit. But the biggest market failure regarding new techs is insufficient incentives to develop them. It can be good to have potential-developers envision techs ahead of time, so that they are inspired to do such developing. But wider awareness and concern tends to be hijacked into far symbolic land, where it mostly just gets in the way.

Alas this suggests that I should try not to make my speculations about the social implications of future tech too accessible to a wider audience. The chance of inspiring potential developers must be weighed against the chance of scaring everyone else. Decision markets about how to deal with potential future techs might allow us to better anticipate and prepare for such techs, because greedy contributors would be in a more realistic near mode. But without such markets, I should watch what I say.


The Future Is Bright

Three observations just came together in my mind.

1. LED light is in IEEE Spectrum’s top 11 techs of the decade:

With every decade since 1970, when the red LEDs hit their stride, they have gotten 20 times as bright and 90 percent cheaper per watt. … Even now, white LEDs are competitive wherever replacing a burned-out lamp is inconvenient, such as in the high ceilings and twisty staircases of Buckingham Palace, because LEDs last 25 times as long as Edison’s bulbs. They have a 150 percent edge in longevity over compact fluorescent lights, and unlike CFLs, LEDs contain no toxic mercury. (more)

2. When something gets cheaper, we use more of it:

The Jevons paradox … is the proposition that technological progress that increases the efficiency with which a resource is used tends to increase the rate of consumption of that resource. … [It] has been used to argue that energy conservation is futile, as increased efficiency may actually increase fuel use. (more; see also)

3. More light makes most nice things look better:

Yesterday, when filming an upcoming TV show in an ordinary home, I noticed how much extra light they added to the room, even in daytime in a room with lots of windows, and how much better that made it all look to me. The host explained how common this was, and that to make actors look good in a scene that viewers are supposed to see as dim, they actually use extra-dark materials for everything else in the scene.

I predict that over the next few decades, as lighting gets lots cheaper, we will make our indoor worlds a lot brighter.  This will start with “studio quality lighting” for high end homes, and then percolate to the rest of our spaces. You probably don’t notice just how much our indoor areas vary in their lighting:

Full, unobstructed sunlight has an intensity of approximately 10,000 fc [footcandles]. An overcast day will produce an intensity of around 1,000 fc. The intensity of light near a window can range from 100 to 5,000 fc, depending on the orientation of the window, time of year and latitude. (more)

The Illuminating Engineering Society … guidelines extend from lighting a public area using 2 fc to 5 fc level, to lighting special visual task areas of extremely low contrast and small size using 1,000 fc to 2,000 fc. The recommendations consider factors like occupant age, room surface reflectance, and background reflectance. (more)

At 60 years old, we need two to three times the light we needed at age 20, and also more shielding and diffusers since older eyes are more sensitive to glare. (more)
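
Putting the quoted numbers together: on my reading of the LED trend above (20 times brighter and a tenth the price per watt, per decade), light per dollar improves about 200-fold per decade, while typical indoor levels sit orders of magnitude below daylight. A bit of illustrative arithmetic (assumptions mine):

```python
# Quoted LED trend, per decade: output x20, price per watt /10.
# On that reading, lumens per dollar improve ~200x per decade.
per_decade = 20 * 10
for decades in (1, 2, 3):
    print(f"after {decades} decade(s): light per dollar x{per_decade**decades:,}")

# Quoted illuminance levels, in footcandles (fc):
levels_fc = {
    "public area guideline (low end)": 2,
    "special visual task guideline": 1_000,
    "overcast day outdoors": 1_000,
    "full, unobstructed sunlight": 10_000,
}
sun = levels_fc["full, unobstructed sunlight"]
for name, fc in levels_fc.items():
    print(f"{name:>32}: {fc:>6} fc ({fc / sun:7.2%} of sunlight)")
```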

Added 11:30a: Eli points us to the August Economist:

Assuming that, by 2030, solid-state lights will be about three times more efficient than fluorescent ones and that the price of electricity stays the same in real terms, the number of megalumen-hours consumed by the average person will, according to their model, rise tenfold. … When gas lights replaced candles and oil lamps in the 19th century, some newspapers reported that they were “glaring” and “dazzling white”. In fact, a gas jet of the time gave off about as much light as a 25 watt incandescent bulb does today. To modern eyes, that is well on the dim side. (more)

Added 1:30p: More energy-efficient windows also lead to more and bigger windows, and so more light.


What Is “Belief”?

Richard Chappell has a couple of recent posts on the rationality of disagreement. As this fave topic of mine appears rarely in the blogosphere, let me not miss this opportunity to discuss it.

In response to the essential question “why exactly should I believe I am right and you are wrong,” Richard at least sometimes endorses the answer “I’m just lucky.” This puzzled me; on what basis could you conclude it is you and not the other person who has made a key mistake? But talking privately with Richard, I now understand that he focuses on what he calls “fundamental” disagreement, where all parties are confident they share the same info and have made no analysis mistakes.

In contrast, my focus is on cases where parties assume they would agree if they shared the same info and analysis steps.  These are just very different issues, I think.  Unfortunately, they appear to be more related than they are, because of a key ambiguity in what we mean by “belief.”  Many common versions of this concept do not “carve nature at the relevant joints.”  Let me explain.

Every decision we make is shaped by a mess of tangled influences that can defy easy classification. But one important distinction, I think, is between (A) influences that come most directly from inside of us, i.e., from who we are, and (B) influences that come most directly from outside of us. (Yes, of course, indirectly each influence can come from everywhere.) Among outside influences, we can also usefully distinguish (B1) influences that we intend to track the particular outside things we are reasoning about from (B2) influences that come from rather unrelated sources.

For example, our attitude toward rain soon might be influenced by (A) our dark personality, which makes us expect dark things, and by (B1) seeing dark clouds, which is closely connected to the processes that make rain. Our attitude toward rain might also be influenced by (B2) broad social pressures to make weather forecasts match the emotional mood of our associates, even when this has little relation to whether there will be rain.

Differing attitudes between people about rain soon are mainly problematic regarding the (B1) aspects of our mental attitudes that we intend to have track that rain. Yes, of course, if we are different inside, and are ok with remaining different in such ways, then it is ok for our decisions to be influenced by such differences. But such divergence is not so ok regarding the aspects of our minds that we intend to track things outside our minds.

Imagine that two minds intend for certain aspects of their mental states to track the same outside object, but then they find consistent or predictable differences between their designated mental aspects. In this case these two minds may suspect that their intentions have failed. That is, their disagreement may be evidence suggesting that for at least one of them other influences have contaminated mental aspects that person had intended would just track that outside object.

This is to me the interesting question in the rationality of disagreement: how do we best help our minds to track the world outside us in the face of apparent disagreements? This is just a very different question from what sort of internal mental differences we are comfortable with having and acknowledging.

Unfortunately, most discussions about "beliefs" and "opinions" are ambiguous regarding whether those who hold such things intend for them to just be mental aspects that track outside objects, or whether such things are also intended to reflect and express key internal differences. Do you want your "belief" in rain to just track the chance it will rain, or do you also want it to reflect your optimism toward life, your social independence, etc.? Until one makes clearer exactly what mental aspects are referred to by the word "belief", it seems very hard to answer such questions.

This ambiguity also clouds our standard formal theories. Let me explain.  In standard expected-utility decision theory, the two big influences on actions are probabilities and utilities, with probabilities coming from a min-info “prior” plus context-dependent info. Most econ models of decision making assume that all decision makers use expected utility and have the same prior. For example, agents might start with the same prior, get differing info about rain, take actions based on their differing info and values, and then change their beliefs about rain after seeing the actions of others. In such models, info and thus probability is (B1) what comes from outside agents to influence their decisions, while utility (A) comes from inside. Each probability is designed to be influenced only by the thing it is “about,” minimizing influence from (A) internal mental features or (B2) unrelated outside sources.
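
A minimal sketch of that standard setup, with toy numbers of my own: two agents share a common prior on rain, receive private signals of known accuracy, and converge once the signals are pooled (as they would after inferring each other's info from actions).

```python
def posterior(prior: float, signals: list[bool], accuracy: float = 0.8) -> float:
    """Bayesian update of P(rain) on binary signals, each correct
    with probability `accuracy` (a toy symmetric-noise model)."""
    odds = prior / (1 - prior)
    for says_rain in signals:
        ratio = accuracy / (1 - accuracy)    # likelihood ratio per signal
        odds *= ratio if says_rain else 1 / ratio
    return odds / (1 + odds)

common_prior = 0.3
a_signal, b_signal = True, False   # A sees dark clouds, B a clear sky
print("A alone:", round(posterior(common_prior, [a_signal]), 3))           # 0.632
print("B alone:", round(posterior(common_prior, [b_signal]), 3))           # 0.097
print("pooled :", round(posterior(common_prior, [a_signal, b_signal]), 3)) # 0.300
```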

In philosophy, however, it is common to talk about the possibility that different people have differing priors. Also, for every set of consistent decisions one could make, there are an infinite number of different pairs of probabilities and utilities that produce those decisions. So one can actually model any situation with several expected-utility folks making decisions as either one with common priors or with uncommon priors.
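
That many-pairs point has a simple concrete form: reweight the probabilities by any positive function of states and divide utilities by the same function, and every action's expected-utility ranking is unchanged, since each expected utility is just divided by the same normalizing constant. A minimal sketch, with toy numbers of my own:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])          # probabilities over three states
u = np.array([[10.0, 0.0, 5.0],        # utilities: one row per action
              [ 4.0, 6.0, 4.0]])

f = np.array([2.0, 0.5, 1.0])          # any positive state reweighting
p2 = p * f / (p * f).sum()             # alternative "beliefs"
u2 = u / f                             # compensating "values"

eu, eu2 = u @ p, u2 @ p2               # eu2 == eu / (p * f).sum()
print(eu, eu2)                         # different numbers...
print(eu.argmax() == eu2.argmax())     # ...but the same chosen action: True
```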

Thus in contrast to the practice of most economists, philosophers’ use of “belief” (and “probability” and “prior”) confuses or mixes (A) internal and (B) external sources of our mental states. Because of this, it seems pointless for me to argue with philosophers about whether rational priors are common, or whether one can reasonably have differing “beliefs” given the same info and no analysis mistakes. We would do better to negotiate clearer language to talk about the parts of our mental states that we intend to track what our decisions are about.

Since I'm an economist, I'm comfortable with the usual econ habit of using "probability" to denote such outside influences intended to track the objects of our reasoning. (Such usage basically defines priors to be common.) But I'm willing to cede words like "probability", "belief", or "opinion" to other purposes, if other important connotations need to be considered.

However, somewhere in our lexicon for discussing mental states we need words to refer to something like what econ models usually mean by “probabilities”, i.e., aspects of our mental states that we intend to track the objects of our reasoning, and to be minimally influenced by other aspects of our mental states.

(Of course all this can also be applied to "beliefs" about our own minds, if we treat the relevant parts of our minds as if they were something outside.)
