Self-Interest, Intent & Deceit

Anders Sandberg’s post last week prompted a debate on the role of intent in explaining behaviour.  Anders would give significant weight to conscious stated goals, while some commenters preferred the economic methodology of ignoring stated goals and assuming behaviour is ultimately based on self-interest.

Perhaps evolutionary psychology can help reconcile these positions.  The evolutionary methodology, like the economic methodology, takes self-interest to be the ultimate motivation.  But, as Richard Alexander and Robert Trivers have pointed out, being deceived is disadvantageous, which implies that there will be selection to be good at spotting deception, which in turn implies selection in favour of self-deception: a deceiver who believes his own lie betrays fewer of the cues that deception-detectors look for.  In short, the best way to lie convincingly is to believe your own lie.  For that reason, there is often likely to be a mismatch between stated and actual (ultimate) motivations; people are likely to profess noble objectives in the pursuit of their own self-interest.

But this doesn’t imply that intentions are no more than self-deception.  Ignore intent for the moment and suppose that others draw conclusions about my motives simply from observing my behaviour.  Suppose that to achieve my self-interested objectives, I have to fool you into thinking that I’m not self-interested.  Since you draw inferences solely from my actions, that means I have to act in a way that is not self-interested.  And from your perspective, that is no different from my not in fact being entirely self-interested.  Though this argument may appear circular, it isn’t: it implies that my actions will strike a balance between my direct self-interest and my need to deceive you.

Adding intent doesn’t change the basic reasoning, if we acknowledge the role of self-deception and accept that human actions are mediated by conscious and subconscious thought.  To achieve my self-interested objectives, I have to fool you into thinking that I’m not self-interested.  To do that I have to fool you into thinking I don’t intend to be self-interested.  To do that, I have to intend not to be self-interested.  And if I don’t intend to be self-interested, I won’t (always) act in a self-interested manner.  Again, this isn’t entirely circular, since self-deception that is too effective would defeat its own purpose.  The argument implies that my actions will reflect both my conscious intent and my self-interest in a way that balances my need to convince you of my noble motives with a need to achieve my self-interested ends.  This means that even though self-interest is more fundamental, both intent and self-interest must be considered in order to adequately explain/predict behaviour. 

There is a refinement worth mentioning.  Even though on average people will optimally balance deception and self-interest, it is unlikely that each individual will strike the optimal balance.  Instead, behaviour will be distributed in a range centred on the optimal balance.  Some individuals will be far outliers on that distribution.  These will include the saints and fanatics whose motives may be truly inexplicable by self-interest.

  • David J. Balan

    But don’t there have to be some people who really are not self-interested in order to sustain an equilibrium in which anyone is perceived by anyone else as not self-interested?

  • Norman Siebrasse

    Yes, that’s an excellent point. I think the answer is that mixed strategy equilibria (the same individual is sometimes altruistic and sometimes not) or polymorphic equilibria (some individuals are always altruistic and some are always selfish) are common in evolutionary models of games, such as the prisoners’ dilemma, which are commonly taken as models of social interaction.

    To elaborate: Note that in the evolutionary approach all individuals are ultimately self-interested, and deception is over whether an individual is engaging in a nice or a nasty strategy. (I see that I didn’t make this at all clear in the original post.) Suppose the issue is how to divide a windfall that I have stumbled across. I might keep it for myself (the self-interested strategy) or I might offer to share equally with you, proclaiming a principle of ‘fair sharing’ (the ‘altruistic’ strategy). My altruistic strategy might be in my self-interest because I would hope that if you stumbled across a similar windfall in the future, you would share with me (in the hope that if I stumbled across another such windfall…).

    In this kind of situation it would not be unusual to find that, in a population at equilibrium, individuals would be altruistic most of the time, but not always (most obviously when the stakes are high). Individuals with a high commitment to sharing would share even very large windfalls (ultimately in the hope that in the future others would share similarly large windfalls with them), while individuals with a low commitment to sharing would share only small windfalls. (I haven’t actually worked through this particular model, so I’m not saying that this is how things would actually work out; but this kind of result in this kind of model is quite common: see Maynard Smith, Evolution and the Theory of Games (1982).)

    Since large windfalls are rare, there would be considerable uncertainty as to whether a particular individual is very committed to sharing or not. That’s where the deceit would come in. If you come across a large windfall, I want to persuade you that I am one of those people with a high commitment to sharing, so that you will share with me now, even though I don’t intend to reciprocate.
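    The mixed and polymorphic equilibria mentioned above can be illustrated with the simplest model in Maynard Smith’s book, the Hawk–Dove game, whose evolutionarily stable state is a stable population mix of both strategies. The sketch below is my own illustration, not a model from the post: the payoff values V=2 and C=4 are arbitrary choices, and a constant is added to keep payoffs positive for the discrete replicator update (which does not move the equilibrium).

```python
# Replicator dynamics for the Hawk-Dove game (Maynard Smith 1982).
# Resource value V, fight cost C > V; the mixed evolutionarily stable
# state has a fraction V/C of Hawks. A constant B is added to every
# payoff so all fitnesses are positive; this leaves the equilibrium
# unchanged.
V, C, B = 2.0, 4.0, 2.0
payoff = {
    ("H", "H"): (V - C) / 2 + B,  # 1.0: hawks split V but risk the fight cost
    ("H", "D"): V + B,            # 4.0: hawk takes everything from a dove
    ("D", "H"): 0.0 + B,          # 2.0: dove retreats, gets nothing
    ("D", "D"): V / 2 + B,        # 3.0: doves share peacefully
}

def step(p):
    """One generation of discrete replicator dynamics; p = Hawk share."""
    f_h = p * payoff[("H", "H")] + (1 - p) * payoff[("H", "D")]
    f_d = p * payoff[("D", "H")] + (1 - p) * payoff[("D", "D")]
    mean = p * f_h + (1 - p) * f_d
    return p * f_h / mean  # strategies grow in proportion to relative fitness

p = 0.1  # start with 10% Hawks
for _ in range(500):
    p = step(p)

print(p)  # converges to V/C = 0.5: a stable Hawk/Dove mix
```

    The same machinery would apply to the windfall game sketched in the comment above: types with different commitments to sharing can coexist in a polymorphic equilibrium for the same reason this population settles at a mix of Hawks and Doves rather than all of one type.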

  • Hopefully Anonymous

    Norman,
    I think this is a great topic that you’re introducing, but I think there are many holes, and many contestable claims and subsidiary claims. I’ll do my best to raise these specific areas and explain why I think your current explanations/formulations of them are problematic. But I’ll probably do so on an anonymous blog I’ll try to set up in the next couple of days, to keep from overrunning the comment section of overcomingbias.com.

    A few claims worth contesting in my opinion:
    1. “Anders would give significant weight to conscious stated goals, while some commenters preferred the economic methodology of ignoring stated goals and assuming behaviour is ultimately based on self-interest.” There can be other motivating goals besides conscious stated goals and self-interest. And I think these other motivating goals are probably common.
    2. “In short, the best way to lie convincingly is to believe your own lie.” A very contestable claim/deduction in my opinion. Believing one’s own lie may make it more difficult to flexibly adapt the lie to a changing environment. I think this has been called “being a prisoner of one’s own myth”.
    3. “Even though on average people will optimally balance deception and self-interest, it is unlikely that each individual will strike the optimal balance. Instead, behaviour will be distributed in a range centred on the optimal balance. Some individuals will be far outliers on the distribution. These will include the saints and fanatics whose motives may be truly inexplicable by self-interest.” First, though you don’t touch here on consciously stated intent, it’s possible that their motives may be inexplicable by that measure too. Second, there’s a distinction between the true fanatic, who is so non-functioning due to the lack of self-interest informing his activity that you never hear about him (he never advances materially enough to become a celebrity “fanatic”), and the celebrity fanatic. Third, celebrity, organizational strength, and maximizing on a comparative economic advantage can all be considered goods a self-interested party might want to attain. So parties that attain these goods may be doing so through self-interested motivations, even if their consciously stated goals are something different. This could arise either through self-deception or through self-aware deception of others.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.”
    — John Tooby and Leda Cosmides

    It is simply not true that the evolutionary framework takes parties as being self-interested. Genes are strictly self-interested. Mothers may legitimately love their children, husbands may legitimately love their wives, and imperfectly deceptive social organisms may even legitimately love their friends. Genes for all of these attributes may, under the right conditions, promote themselves to fixation within the gene pool because they outreproduce their alternatives at that allele.

  • Tobbic

    A great post!

    A few thoughts: Considering what makes possible an equilibrium where people are not always perceived as self-interested, I think that mixed strategies which balance outright self-interest against the need to deceive others into thinking you are not self-interested (for various reasons) are a good explanation. This is different from an equilibrium in the absence of deception, where everyone simply shares their present windfalls to gain a share of others’ future windfalls. The reason there is (is there?) an equilibrium with deception is that sharing while projecting an impression (deception) of altruism is a better strategy than pure reciprocal sharing. Furthermore, assuming successful deception requires (or is enhanced by) self-deception, the belief that people are not always acting out of self-interest would be useful, as it would allow for self-deception. Thus, the belief that people can be genuinely altruistic would be expected to be the norm in society, as it allows for self-deception.

    It is interesting how purely altruistic behavior can’t be distinguished from self-interested behavior in which you deceive yourself and others. It does make the hypothesis of solely self-interested behavior irrefutable, as the opposite (altruism) is indistinguishable. It also shows how “altruism” can arise among completely self-interested individuals.

  • http://profile.typekey.com/normansiebrasse/ Norman Siebrasse

    Re H.A.: 1. The basic contrast is between self-interest and other motivations.

    2. Yes, I expect that believing your own lie does make it more difficult to respond to change. This is a subset of the broader point that if I lie to make you believe that I am not self-interested, and I believe my own lie, then I will (sometimes) act in a non-self-interested manner, which is contrary to my immediate interests, even in the same environment. So, it is clear that there are disadvantages to believing your own lie. The question is whether there are sufficient compensating advantages for self-deceit to have evolved.

    3. I agree with these points. They seem to me to flesh out, rather than contest, the sentence you quoted.

  • Norman Siebrasse

    Re Eliezer: In general, the evolutionary framework applies to any unit that replicates with variation and is subject to selection. The unit that is most fruitfully studied depends on the phenomenon of interest. In this case I don’t really see how a focus on the gene rather than the individual affects the argument one way or the other.

    I entirely agree that the emotions of love are very often “legitimate” or, as I would prefer to put it, sincere. However, no matter how sincere, they are not always good predictors of future behaviour. For example, I have a friend whose wife left him after he suffered a physically debilitating illness. Prior to that he believed that she sincerely loved him. I expect he was right, in that if anyone had asked her how she would respond in such a case, she would probably have said, and sincerely believed, that she would have stood by him. On the other hand, I have an acquaintance who did stand by his wife after she suffered a very similar physically debilitating illness. In hindsight one would presumably say that the love in the second case was more sincere than in the first case. My argument would suggest that before the fact it would have been very difficult to say whose love was more sincere because of self-deception by the wife in the first case.

  • David J. Balan

    It is important to distinguish between games where “altruistic” behavior arises even though all the players are self-interested, and signalling games where there actually is such a thing as an altruistic type. In the former, the only thing that makes anyone behave altruistically is the promise of future reward and/or the threat of future punishment. In the latter, there are some benefits to being an altruistic type (for example, people are more eager to engage in trust-based trade with you), which means that there are some benefits to being perceived as an altruistic type even if you aren’t one, which leads to signalling.

  • http://profile.typekey.com/bayesian/ Peter McCluskey

    If people have unbiased estimates of others’ altruism, then having some genuine altruism appears necessary for an equilibrium of deception.
    But I have a strong impression that people are biased to overestimate others’ altruism. I see two possible reasons why such a bias could be stable. It could signal one’s own altruism (if availability bias causes me to overweight my own degree of altruism when estimating average altruism, then my beliefs about others’ altruism will say something about how altruistic I think I am).
    Or a high estimate of others’ altruism could signal one’s intention to cooperate. As long as I’m more likely to cooperate with altruists, my belief that someone is selfish will signal that I’m less likely to cooperate with them.

  • http://profile.typekey.com/normansiebrasse/ Norman Siebrasse

    Peter’s suggestion that signaling might stabilize an overestimate of others’ altruism is interesting. By ‘overestimate’ I take it that we mean an actual overestimate that affects my behaviour, and not just a proclaimed overestimate, where I go around announcing that I give everyone the benefit of the doubt. The latter is cheap talk, and I don’t see why it would be a reliable signal.

    It also strikes me that signaling would work only if we are talking about signaling to observers with whom we might interact in the future, and not to the immediate party in a pairwise interaction. If we’re not talking about cheap talk, but actual behaviour, then signaling altruism involves being more altruistic than is warranted by the available information about the other party, and it’s hard to see why that would be stable, since it amounts to just being a sucker. But it does seem that altruistic behaviour towards the person I am currently interacting with would provide third parties with information about my propensity for altruism. It’s not obvious to me that this kind of signaling strategy would be stable either, since it too involves being a sucker on a regular basis: as well as sending the signal that I am inclined to cooperate, it sends the signal that I can be taken advantage of. It’s not clear to me how it would all work out.

    I might add that my own intuition is that people are not biased to overestimate others’ altruism, but this isn’t a strong intuition. It would be interesting to see a formal model that takes signaling into account.

  • Hopefully Anonymous

    Norman,
    You write:
    “By ‘overestimate’ I take it that we mean an actual overestimate that affects my behaviour, and not just a proclaimed overestimate, where I go around announcing that I give everyone the benefit of the doubt. The latter is cheap talk, and I don’t see why it would be a reliable signal.”
    I think the performing vs. actually believing distinction can apply to behavior as well as to talk. Once again I think you’re presenting two contrasting options in a way that distorts the range of options. The range of options is closer to (1) an actual overestimate that affects my behavior, (2) a proclaimed overestimate that I proclaim affects my behavior, (3) a performed overestimate that I perform as affecting my behavior. Just as one would only proclaim that one overestimates, and that it affects one’s behavior, when one has an audience, one would only perform overestimates affecting one’s behavior before an audience. That is still different from actually overestimating, with those overestimates actually affecting one’s behavior regardless of audience. Once again, the performative aspect could occur with or without internal transparency about it. As a concrete example: one person tells everybody that he picks up trash in the public park, but in front of everyone he litters in the public park instead. Another person is always seen picking up trash in the public park, but secretly dumps trash there at night. A third person doesn’t know why, but feels an impulsive need to pick up trash in the public park when someone whose opinion he respects is observing him there. A fourth person picks up trash at the public park even when he’s alone and no one’s watching.

  • samuel

    Such bullshit… why look into such issues as semantics when the truth is that deception is deception? And if the author keeps on with this sort of mental masturbation it will be self-deception…
    Good luck.

  • Missn

    I am deceitful and have been that way for years, and I find it an incredibly hard thing to overcome. It is not what the author above says, and in fact I think it takes a lot of courage for a person to admit they have the problem and are trying to work to overcome it. It has cost me all my friends, my career, everything, and while I work spiritual principles all the time, it is hard to work with because it is still a condition of selfishness. Many times people who have the behavior are looking for help to get through it, and there is not a lot out there; too often others turn away when we are reaching out for help because we want to change. We know that it does not contribute anything good to the world, but we have to believe that we can do it. We have to believe that we can help to make the world a better place; the potential is there if we work on it. But a little help is sure nice.

  • Missn

    P.S. I don’t think it means that we are bad people, it means we have bad behaviors.