16 Comments

Curt: "increasing degree of belief in any possible world is equivalent in effect to intensifying utilities in that same world by the same factor."

If it were the case that the mind had a big look-up table, specifying for each possible world our degree of belief in that world, then this transformation might not seem to add complexity and would preserve behavioural output. But that is not the way the mind works. My concern is that, given the representational framework we actually use, the operation of offsetting biases by modifying preferences may in fact increase complexity.


Gustavo, yes, confidence signaling is one possibility for men.


"Carl is right, it is puzzling why we seem to have evolved to encode some preferences in biased probabilities, rather than more directly in our desires. Many consider this to be a random accident, but I suspect there is some adaptive reason for it."

My explanation: A male who sincerely believes "she wants me" sends signals that he is an attractive mate (since successful males are more likely to believe that: they have rationally come to expect success), and these signals make his success more likely. Faking self-confidence is not easy. Wanting her more, OTOH, does not make him more likely to succeed to the same degree. It could signal that he's willing to give her more in exchange, but if he's a low-status male, it's likely that he's incapable of giving her what she wants.

Of course, this bias creates a self-perpetuating "bubble" effect, which is why some seduction literature suggests a "fake it till you make it" approach. If enough people succeeded at this, the signals produced by these beliefs (like "she wants me") would become unreliable, women might eventually notice (i.e. stop being attracted to such signals), and the biased beliefs causing them might eventually go away. But, until the day when people can exert perfect control over their beliefs, this seems astronomically unlikely.


Nick, the short answer is increasing degree of belief in any possible world is equivalent in effect to intensifying utilities in that same world by the same factor. To formalize, ignoring the renormalization factors, which cancel out:

Alice lives in some world w from the unit line. She has a belief function B(x) for which world she is in. She has a utility of U0(x) for moving left and U1(x) for moving right. If I[f(x)] means the integral of f(x), she’ll move left if

I[B(x)*U0(x)] / I[B(x)*U1(x)] > 1

Now Alice accepts a piece of evidence with a likelihood function E(x). This is learning if the evidence is true, or bias if the evidence is not necessarily true. She will now move left if:

I[B(x)*E(x)*U0(x)] / I[B(x)*E(x)*U1(x)] > 1

Trivially this is the same effect you’d have from multiplying her utility functions by the same likelihood function. Cancellation is usually but not always possible by dividing the utility functions by the likelihood function.
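
Here is a minimal numeric sketch of that equivalence; the particular functions B, E, U0 and U1 below are made-up examples, not anything from the thread. Folding the likelihood function E into Alice's beliefs gives exactly the same decision ratio as leaving her beliefs alone and multiplying both utility functions by E.

```python
import numpy as np

# Discretize the unit line that Alice lives on.
x = np.linspace(0.0, 1.0, 1001)

# Made-up example functions: belief B, likelihood E, utilities U0 (left) and U1 (right).
B = np.exp(-((x - 0.4) ** 2) / 0.02)   # her belief about which world she is in
E = 1.0 + 2.0 * x                      # likelihood function of the new evidence (or bias)
U0 = 2.0 - x                           # utility of moving left in world x
U1 = 0.5 + x                           # utility of moving right in world x

def ratio(belief, u_left, u_right):
    """I[belief * u_left] / I[belief * u_right]; she moves left iff this exceeds 1."""
    return np.trapz(belief * u_left, x) / np.trapz(belief * u_right, x)

biased_beliefs    = ratio(B * E, U0, U1)       # evidence/bias folded into the belief
intensified_utils = ratio(B, E * U0, E * U1)   # the same factor folded into both utilities

print(biased_beliefs, intensified_utils)       # identical, so the left/right decision is identical
assert np.isclose(biased_beliefs, intensified_utils)
```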


>Young men who were captains of the football team graduate thinking they're God's gift to women, and women respond, 'I'm interested in [...] well-cited professors. Who the hell are you?' "

Self-deception isn't just for the young.


This seems worth pursuing a bit further...

Curtis, perhaps you could explain your reasoning by applying it to some simple example, say a hypothetical bias for thinking that objects are green more often than they really are. What is the simple change in preferences that would combine with this bias to produce standard behaviour?

Robin, I'm not sure about the complexity being constant. The complexity of the input-output mapping being computed might be constant, but not necessarily the complexity of the process computing this mapping. It might be like achieving a simple task either in the ordinary way or by means of a Rube Goldberg Machine. (Btw, I'm not suggesting biased belief is in general simpler than accurate belief plus context dependent desire - in fact I think it's usually the other way around, but with some possible exceptions.)

In a very simple example, the complexity might be the same. Consider a simple agent Alice who lives in the unit interval, and who prefers to be near to one of the endpoints of this interval. Normally, one would think, the agent would get some clues about where she is and then decide to move in the direction of whichever pole she estimates (unbiasedly) that she is closest to. Now consider agent Bob. He gets the same clues as Alice but he is biased towards thinking he is near Zero. If we adjust his preferences so that he likes (to the right degree) being close to One even more than he likes being close to Zero, then he will behave just like Alice.

But let's make the example only a tiny bit more complicated. Let us change Bob's bias to the following: he now overestimates the relevance of the latest clue he has obtained relative to clues he obtained earlier. How might we change his preferences to ensure that he will still behave the same way as Alice? It seems that we will have to make his preferences time-dependent and extend them to cover facts about what evidence he has obtained. Something like: the strength of his desire to be close to One depends on exactly how strong his most recent clues suggesting that he is near One are; etc. Already this is beginning to get complex. If one adds some further dimensions to the problem, it would seem that the complexity of the necessary adjustments would quickly spiral out of control.
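
A rough sketch of why the recency bias resists a fixed preference adjustment; the two hypotheses ("near Zero" vs "near One"), the binary clues and their likelihood ratios, and the way of modelling Bob's bias (raising the last clue's likelihood ratio to a power) are all made up for illustration. The point is that the utility correction needed to make Bob act like Alice depends on which clue happened to arrive last, i.e. on the evidence history rather than on a constant factor.

```python
# Two hypotheses: "near Zero" vs "near One". Each clue multiplies the odds in
# favour of "near One" by its likelihood ratio. Bob overweights his latest clue
# by raising its likelihood ratio to the power K > 1.
K = 3.0  # strength of Bob's recency bias (made-up)

def alice_odds(clues, prior_odds=1.0):
    odds = prior_odds
    for lr in clues:
        odds *= lr
    return odds

def bob_odds(clues, prior_odds=1.0):
    odds = prior_odds
    for i, lr in enumerate(clues):
        weight = K if i == len(clues) - 1 else 1.0  # the latest clue counts extra
        odds *= lr ** weight
    return odds

# The same clues in two different arrival orders.
seq_a = [2.0, 0.5, 4.0]   # strong "near One" clue arrives last
seq_b = [4.0, 0.5, 2.0]   # weak clue arrives last

for seq in (seq_a, seq_b):
    correction = alice_odds(seq) / bob_odds(seq)
    print(seq, "offsetting factor needed:", correction)
# Alice's odds are the same for both orders, but the factor that would make Bob
# behave like her is not -- so no fixed rescaling of his preferences works.
```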

If this idea is correct, it would suggest that the default design would be accurate beliefs + relatively simple desires. Sometimes it might be useful to bias beliefs, and perhaps to complicate the desires to partially offset the effects of the belief bias. But this would be the exception to the rule - something that would require a special explanation.


True enough. Although we could keep track of the probability and utility separately as intensities and still skip renormalization. That could explain why humans lacking mathematical instruction approximate Bayesian behavior much better than they can handle probabilities explicitly.


Curtis, in our language at least we do explicitly distinguish between chances and values of outcomes. So clearly we do not only track their product.


Actually my point is that the remapping is not complex but extremely simple: if you apply the probability ratios of a given piece of evidence to the utility values rather than to the Bayesian beliefs, you get exactly the same effect on decision-making. I'm not saying there's some arcane topological transformation, which I agree you can almost always find between two mathematical models and for that reason would be virtually useless.

The whole business seems obvious to me since I've become accustomed to working with betting ratios [Pr(A)/Pr(B)] rather than normalized probabilities directly. They are mathematically equivalent (in the sense that you can normalize out the probabilities from a set of consistent betting ratios at any point, without ever having worked them out from earlier betting ratios) but way easier to work with, since you dodge renormalization. The fact that they are mathematically equivalent is known and published, although it's apparently ignored since it got caught up in an arcane philosophical debate. If my claim is not obvious on a little reflection, I could write up something.
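
A toy illustration of the betting-ratio point, with made-up numbers: updating an unnormalized odds ratio clue by clue gives the same posterior as the fully normalized Bayesian calculation, and the normalization can be recovered at the end if it is ever wanted.

```python
# Two hypotheses A and B; work with the betting ratio Pr(A)/Pr(B) directly.
prior_ratio = 3.0 / 1.0              # prior odds of A over B (made-up)
likelihood_ratios = [0.5, 2.0, 4.0]  # Pr(evidence|A) / Pr(evidence|B) for each clue

ratio = prior_ratio
for lr in likelihood_ratios:
    ratio *= lr                      # no renormalization anywhere

# Normalize only at the end, if a probability is ever needed.
p_A = ratio / (1.0 + ratio)

# The same answer the long way, renormalizing after every clue.
p = 0.75                             # Pr(A) matching the 3:1 prior
for lr in likelihood_ratios:
    p = p * lr / (p * lr + (1.0 - p))

print(p_A, p)                        # both 0.923..., so the two bookkeeping schemes agree
```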

In terms of which system gets altered, I must immediately point out that both desires and inborn priors seem to vary genetically. It would be very hard to set up a metric of which changes more. On thinking about it, I actually like the model that the two systems are lumped together and the brain just keeps track of the expected utility for each outcome [Pr(A)*U(A)], bumps it up and down based on either probability or utility estimate changes, and makes decisions by comparing expected utilities. I figure evolution hates renormalizing probabilities as much as I do.


The fact of complexity isn't enough to explain the choice of biased beliefs over accurate beliefs plus context dependent desires; the total complexity of the decision system is the same in either case, it is just a matter of whether that complexity is in the belief or the desire system. Perhaps complexity is for some reason more costly in the desire system?


Curt, on whether changes in preferences and in biases are equivalent within the expected utility maximization framework: I think there would always be some theoretically possible way of gerrymandering preferences so as to offset any change in beliefs (i.e. without affecting behavioural output). But in many cases, I think these remappings would be extremely complex and unnatural.

Consider the "all men are pigs" heuristic. Maybe this will induce the same dating behaviour as would a more accurate set of beliefs about men's intentions combined with a desire to avoid being taken advantage of. But now consider non-dating behaviour. Her girlfriend asks her about the date. If she uses the "all men are pigs" heuristic, she will presumably have to say that the date did not go so well and that it didn't provide any grounds for hope about the future. Her girlfriend may then observe over time that the user of the bias consistently underestimates how well her dates went, and think she's a fool. Now, we might of course postulate another preference change to avoid this implication. Suppose the user of the "all men are pigs" heuristic also has an obsessively strong desire that her girlfriend will not think that she is consistently misreading her date's signals. This desire may then bias her verbal output so that she will say that the date went quite well even though she believes that it went badly because all men are pigs. So the equivalence in behaviour has been restored, but at the expense of introducing one belief bias and one obsessive desire. If we consider further contexts in which the beliefs+desires might manifest themselves, it seems that we may have to keep adding epicycles to maintain constant behaviour. Perhaps an indefinite number of specific strange desires would have to be postulated to prevent the belief bias from manifesting in behaviour. Most likely, not all these needed desires will be added; so the belief bias will affect behaviour.

One might even argue that if some agent really managed to change her set of desires in such a way as to completely offset her belief bias in every possible context, then the best description of this agent would be one with the normal set of beliefs and desires - this would after all be a much simpler explanation of her behaviour. (But perhaps by looking at her mental processes, one could make sense of the possibility of agents whose belief-desire functions had been remapped in complex ways so as to maintain normal behavioural dispositions.)

This leaves us with the question of why we have belief biases rather than accurate beliefs combined with appropriate preferences. Perhaps this is because we are not expected-utility maximizers. Or perhaps it is because simple biases in belief have different behavioural implications (even in the EU framework) than do simple changes in preference. Either way, some effects that evolution wants might be more easily obtainable through a simple belief bias than through a complicated set of preferences. For example, if we distinguish verbal from non-verbal behaviours, it might often be easiest to achieve a certain kind of verbal behaviour by biasing beliefs, while it might be easier to get a certain kind of non-verbal behaviour by installing appropriate preferences. If evolution wants us to talk as if we had great abilities but act as if we have the abilities we actually have, maybe the simplest design to achieve this would be by making us overconfident (belief bias) and cowardly (a specific desire to avoid danger).


Dagon, it might be that it is easier to commit to belief bias than to desire levels, but I don't yet see why that should be so.

Curt, if in our species we did often die to reproduce, then surely our minds would not be horrified by such things.


Changes in preferences and in biases (in the form of priors) should be equivalent mathematically with a utility-maximizing strategy. If an agent is choosing among actions with different payoffs in each of 2 possible worlds, then multiplying the utility obtained in one of those worlds will have the same effect as changing the Bayesian betting odds in favor of that world by an equivalent factor. I'm not sure this generalizes to multiple or continuous worlds but I expect it does. Evolution would favor changing whichever is easier to change.
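
A quick check of the two-world case, using made-up payoffs and weights: scaling the payoff in one world by a factor changes the expected-utility comparison between two actions in exactly the same way as scaling the betting odds on that world by the same factor.

```python
# Two worlds w1, w2; two actions with different payoffs in each world (made-up numbers).
p = {"w1": 0.3, "w2": 0.7}                  # unnormalized weights are fine for comparing actions
u = {"act1": {"w1": 10.0, "w2": 1.0},
     "act2": {"w1": 2.0,  "w2": 4.0}}
k = 3.0                                     # the factor applied to world w1

def eu(action, weights, utils):
    return sum(weights[w] * utils[action][w] for w in weights)

# Option 1: triple the betting odds on w1.  Option 2: triple every payoff in w1.
odds_biased = {"w1": k * p["w1"], "w2": p["w2"]}
util_biased = {a: {"w1": k * u[a]["w1"], "w2": u[a]["w2"]} for a in u}

for a in u:
    print(a, eu(a, odds_biased, u), eu(a, p, util_biased))
# The two columns match for every action, so the choice between actions matches too.
```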

Carl is making an important distinction between our interests and our genes' interests. For humans, our genes seem *mostly* to want things also obviously good for us as individuals. The main exception is self-sacrifice for relatives, but we're generally OK with that. However, that's not necessarily always going to be true - most people would be horrified if their genes made them die to reproduce, which happens with many other organisms. In the current evolutionary environment there are going to be conflicts of interest between our desires to have happy families and our genes' "desire" to have huge ones. Even if our biases are features, it behoves us to consider cui bono - is it us or these pieces of bioplastic in our nuclei?


One likely answer to the puzzle is that self-deception is a commitment strategy. A bias is more effective than a desire in controlling behavior, and is less susceptible to evasion or corruption by those who'd take advantage.


Carl is right, it is puzzling why we seem to have evolved to encode some preferences in biased probabilities, rather than more directly in our desires. Many consider this to be a random accident, but I suspect there is some adaptive reason for it. Alas, when I talked to Haselton I couldn't get her to understand that there was even a puzzle.


There's a confusion here between error management decision strategies and probability estimates.

Consider a computerized alarm that detects the level of carbon monoxide in the air, and then rings a bell at a certain concentration. There are two ways to increase its sensitivity: you could damage the sensor so that it misreads carbon monoxide levels, or you could keep it accurate and reset the level at which it rings. A good rationalist could have a more accurate picture of the opposite sex and make use of that information to increase fitness more efficiently if he or she was motivated to do so. Of course, people aren't motivated by fitness as such, since evolution often 'used' cognitive bias instead of emotion to produce results. Thus, from the perspective of human beings a romantic bias often IS a bug (outside of the deceptive and signalling/commitment functions), even if it helps increase fitness, in the same way as involuntary pregnancy.
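
A small sketch of the alarm analogy, with all numbers and parameter names made up: biasing the sensor reading and lowering the threshold on an accurate reading produce the same ringing behaviour, but only the second design keeps an accurate estimate around for other uses.

```python
def alarm_rings(true_ppm, reading_bias=1.0, threshold=50.0):
    """Ring if the (possibly biased) sensor reading crosses the threshold."""
    reading = true_ppm * reading_bias
    return reading >= threshold

true_ppm = 30.0

# Two ways to make the alarm twice as sensitive:
damaged_sensor  = alarm_rings(true_ppm, reading_bias=2.0, threshold=50.0)  # misreads CO levels
lowered_trigger = alarm_rings(true_ppm, reading_bias=1.0, threshold=25.0)  # accurate reading, new decision rule

print(damaged_sensor, lowered_trigger)  # both True: identical behaviour at the bell
# But only the second alarm still "knows" the true concentration, which is the
# analogue of keeping accurate beliefs and putting the bias in the decision rule.
```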
