Resolving Your Hypocrisy

Self-love is more cunning than the most cunning man in the world. … Hypocrisy is the homage vice pays to virtue. – La Rochefoucauld

Humans are hypocrites.  That is, we present ourselves and our groups as pursuing high motives, when more often low motives better explain our behavior.   We say we invade nations to help them build democracy, rather than for revenge or security.  We say we marry to help our partner, rather than to gain sex or security.  We say we choose our profession to help others, and not for prestige or income.  And so on.

Comedians live by ridiculing such hypocrisy, but "cynics" who complain without such wit and style are despised.  In contrast, we are attracted to the innocent who naively believe our hypocrisies.

Noticing the hypocrisy in others usually makes us feel morally superior.  After all, we know we are not hypocrites; "I can look inside myself and see my sincerity."  But eventually experience and intelligence force some of us to face the likelihood that we are no different.  At this point we can resolve our hypocrisy two ways: we can start really living up to our high ideals, or we can admit we don’t care as much as we thought about those ideals.

Most people try harder to live up to their ideals.  They usually think they succeed, but mostly they just add on a few more layers of self-deception, and find themselves too busy to ponder the issue. "Sure hypocrites give to charities that don’t really help much, but my charity really does help; I read an article that says so.  Sorry; gotta go."   

We want to think well of ourselves, and this gives us a limited ability to make ourselves want the things we think we should want.  And the young are more naturally innocent, with a stronger ability to remake their wants, at least toward ideals others would applaud.  But this effect fades with time, and we overestimate both how much we can change our wants, and how much we want to.

One of our ideals is to be honest with ourselves.  Is this honesty ideal a substitute or a complement for other ideals?  On the one hand, honesty should help us to use resources more effectively to actually achieve other ideals, versus merely appearing to achieve them.  On the other hand, I cannot reasonably expect anyone willing to try to live up to this ideal of honesty to have much will power left over to live up to other ideals.

I expect people who are actually more honest will tend to have lower expectations about achieving ideal ends, though they may (or may not) actually achieve such ideals more.

Added:  Our conscious minds seem like a public relations department (PRD) of our minds.  A corporate PRD tries to find a coherent story to make it look like corporate actions came from high motives.  The PRD tries to have this high minded story recorded in official histories, legal testimony, and accounting records.  Corporate PRDs have a limited ability to influence corporate policy; "Boss, doing that will make us look real bad."   But corporate profits more fundamentally drive behavior. 

Similarly, our conscious minds record and tell high-minded stories about our actions.  When image is important enough, we can make real sacrifices to ensure our actions fit closely with our conscious self-image.  But we usually need only minor sacrifices; my guess is that a cost-minimizing PRD forced to be more honest will rely more on admitting to low motives, and less on switching from low to high motives.

  • Rob Spear

    Bravo, maestro! This is in accord with my experience of life. Although I am not sure about self-honest people achieving their ideals more. I guess that it depends on the ideals in question: many ideal ends require political leverage over other people, which, I think, requires a certain amount of self-deception, even if only the temporary self-deception of the actor.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Is there a useful distinction between hypocrisy and self-deception? Hypocrisy would be consciously violating one’s stated morality, while self-deception would be unconscious. The closeted gay pastor who rails against homosexuality while engaging in affairs with men would be a hypocrite by this definition.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Hal, yes, there is a useful distinction, but it is much more a matter of degree than people like to admit.

    Rob, yes, for some ideals self-deception induces more persuasion than honesty.

  • ChrisA

    “or we can admit we don’t care as much as we thought about those ideals.” … Can we not also continue to hold these ideals, but accept that we can’t live up to them? After all, it is possible to believe that stealing is wrong, but also to steal.

    Sometimes hypocrisy has social value. For instance, in the case where nurture is important in generating socially useful behaviour, it might be valuable to appear, say, a highly altruistic person to my children, as this might encourage them to be altruistic. At the very least it might make them happier (what the eye doesn’t see, the heart doesn’t feel).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Chris, I’m not sure what it means to “hold” an ideal one is not willing to act on. Perhaps it means you uphold it for speech acts, but not for other acts. You are willing to give it lip service, or even thought service, but not physical service.

    Does pretending to be altruistic actually make children altruistic, or does it just help them to pretend to be altruistic, so they can be effective hypocrites too? I find these questions fascinating. We touched on similar issues in the first post of this forum:
    http://www.overcomingbias.com/2006/11/does_sociobiolo.html

  • Carl Shulman

    Hypocrites may still do fairly well in producing altruist-like results through devices such as social choice via a large electorate. Each voter may declare that everyone should allocate 1% of his or her income to funding foreign aid, but refuse to shift funds from hedonism to personal donations. Since one vote is unlikely to shift an election, the self-deceiving voter will vote for a dedicated 1% ‘Bono tax.’

    http://www.reason.com/news/show/27886.html
    “Singer also draws income from a trust fund that his father set up and from the sales of his books. He says he gives away 20 percent of his income to famine relief organizations, but he is certainly living on a sum far beyond $30,000. When asked about this, he forthrightly admitted that he was not living up to his own standards.”

    Presumably Singer would vote for a dedicated famine tax of >20%. Unfortunately, such devices are not easily applied to investment in personal knowledge and rationality:

    “Sure hypocrites give to charities that don’t really help much, but my charity really does help; I read an article that says so. Sorry; gotta go.”

    The time to investigate the charity is a real personal cost, and paying that cost would surely require some expenditure of willpower. However, charitable contributions appear to offer a range of ROIs that stretches across multiple orders of magnitude, and there are relatively clear measures of ROI for some causes.

    Some people give thousands of dollars to subsidize elite opera viewing, where the opera has the opportunity to internalize the value of the product via ticket sales. http://www.nzopera.com/support/benefaction/index.cfm Others may invest in providing HIV anti-retrovirals or anti-malaria nets and spraying, which can offer a well-defined cost per life-year saved in the near future.

    In such circumstances, a modest improvement in honesty that reallocates funds to a more efficient charity could greatly improve ideal-fulfillment even if overall donations fell while efficient donations rose.
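
    To make the arithmetic concrete, here is a toy sketch in Python (all dollar figures and cost-per-life-year numbers are hypothetical, chosen only to illustrate the orders-of-magnitude point):

    # Toy model: reallocating to a more efficient charity can raise
    # ideal-fulfillment even while total giving falls.
    def life_years_saved(allocations):
        """allocations: list of (dollars_given, cost_per_life_year) pairs."""
        return sum(dollars / cost for dollars, cost in allocations)

    # Before: $5,000 to a low-efficiency cause at $1,000 per life-year.
    before = life_years_saved([(5000, 1000)])  # 5.0 life-years

    # After: honesty cuts total giving to $3,000, but redirects it to a
    # cause costing $100 per life-year.
    after = life_years_saved([(3000, 100)])    # 30.0 life-years

    print(before, after)  # 5.0 30.0 -- less money, six times the impact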

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl, yes, in some contexts social pressure can make people act more according to their ideals. But as you point out, this may well just produce a crude appearance of pursuing those ideals, instead of actually achieving them.

  • Dagon

    One problem I have with topics like this is trying to take a series of choices made from a continuum of feelings and judge it as if it were a single pure strategy. “If I believe X, I must always do exactly Y” is just too simple to be of any use.

    Actual humans have a range of beliefs, most of which are not held absolutely, and some of which contradict each other. These change over time, as well. It’s quite possible to usually hold a belief, or to hold it mildly, and to act in ways that contradict it due to other beliefs that outweigh it at the time. It’s even possible that a mixed strategy (sometimes pursue X, sometimes Y, sometimes Z which does neither one particularly well) is actually reasonable for addressing certain combinations of uncertain beliefs.

    Those are DIFFERENT problems than those of hypocrisy (actual motivation different from signaled motivation) or self-deception (actual motivation different from conscious belief).

    For instance, I generally believe that giving a direct handout to a street beggar has more bad effects than good ones. I still sometimes make such donations. I don’t think either hypocrisy or self-deception describes that discrepancy very well.

    To the extent that these are biases to be addressed, they should be separated and treated differently.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Dagon, yes of course some contradictions are just random accidents. But consistent patterns of contradictions are often better explained as hypocrisy.

  • Carl Shulman

    Robin,

    I cited the electoral example as an instance in which ChrisA’s mere speech acts and lip service can be functionally equivalent to a commitment sufficient to motivate tangible sacrifice.

    Second, you expect that honest people will have “lower expectations about achieving ideal ends,” but via what mechanism? Suppose that individuals typically begin their careers with a plan of making 10 consecutive annual donations of $1,000 to a charity, but on average only make two such payments (deciding to spend the money on luxuries in practice). What will the honest do?

    1. Analyze their own motivations and conclude that they really prefer to be purely selfish and will enjoy keeping all the money, thus deciding to give nothing and expecting to give nothing.
    2. Observe the data on the population at large and expect to give $2,000.
    3. Observe the data, conclude that commitment mechanisms are necessary, arrange wage garnishing or similar measures, and expect to give $10,000
    4. Observe the data, recognize the value (in terms of the charitable goal) in identifying errors in reasoning about charity, search out charitable tax deductions and employer matches, apply commitment mechanisms, and expect to give $20,000 (using a $10,000 base of precommitted cash, in addition to matches).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Carl, I had in mind scenarios where we lie to ourselves and tell ourselves we care more than we do about certain ideals. So the change I had in mind is being more honest with ourselves, without much change in what we really care about.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, I see no point in any purported skill of rationality that doesn’t change behavior. Carl is taking the correct approach – having identified the bias, he wants to overcome it, and suggests using commitment mechanisms and cold-blooded logic to increase one’s charitable impact. You seem to want people to acknowledge the bias – and that’s all; you just want them to admit that they’re biased and therefore no better than anyone else, rather than suggesting to them how they could repair their flaws. (This whole theme deserves a separate post which I may or may not have time to write.) But, see “The Proper Use of Humility” (http://www.overcomingbias.com/2006/12/the_proper_use_.html) on the futility of acknowledging flaws without doing anything about them. Mere dutiful acknowledgement doesn’t change our real-world actions, and therefore makes no difference, and therefore has zero impact on the real world, which is a poor showing for a rationalist’s skill.

    Human emotions are complex circuitry. Complex machinery doesn’t poof into existence. It requires large cumulative selection pressures over tens of thousands of generations to produce information in the gene pool. Now we, human beings, have ideals; and therefore, in the ancestral environment, idealists had more children. It really should not be at all surprising to discover that human minds are arranged in such fashion as to maximize the reproductive benefit, and minimize the reproductive cost, of being idealistic.

    But it is not necessary to suppose that we are hiding our motives from ourselves. It is not necessary to suppose that there is explicitly represented, anywhere in my mind, the thought: “I shall now give my friend some roast antelope in order to increase the frequency of charitable genes.” (Despite the term “survival of the fittest”, evolution doesn’t give a damn about individual survival; the vast majority of all biological organisms are already dead. Keeping this in mind may help promote clean mental separation of evolutionary ‘goals’ and human goals.)

    The great silver lining of our evolved self-deceptiveness is that there is no evolutionary reason for us *not* to be sincere about our idealistic motives – evolution maximized the fitness (of genes, never individuals) by decoupling ideals and behavior so that they could be optimized to some degree independently. One suspects ideals would never have evolved in the first place without the evolutionary warp in implementation. So, if you fix the self-deception and impose veridical views of reality, you’re left, hopefully, with an *effective* idealist.

    From my perspective, that’s what this is all about – effective idealism.

    Why should I give up my ideals when I discover an evolutionary warp in their implementation? Why should I stop trying if I find out I was deceived about the effectiveness of my efforts? What do I care that my charitable neural circuitry was constructed by DNA strings that exist in me because some ancestors with them had more surviving grandchildren than ancestors who didn’t? It’s a fascinating scientific fact, and it may make me highly suspicious of certain kinds of behavior, but it doesn’t change my ideals.

    The title of this blog is “Overcoming Bias”, not “Acknowledging Bias”, and the notion that we react to our discovery of evolution’s hypocrisy by relinquishing our personal ideals might appropriately be entitled “Surrendering Completely to Bias”. Though, to really surrender properly to the evolutionary causation of our personal goals, we should forget about food, sex, idealism, and personal happiness, and concentrate solely on allele frequencies.

  • http://pdf23ds.net pdf23ds

    Robin, I’m working on a post about time-inconsistent preferences that relates to this a lot. (And since it isn’t finished, I’ll say some stuff here.) One of the striking things about it is that we usually aren’t honest with ourselves about it. It’s almost as if we evolved specifically not to notice it. I myself have become honest about it, though. I accept that wanting to exercise more regularly, or wanting to gain recognition via writing or programming or whatever outlet, is a transient desire, and doesn’t translate into a willingness to practice to improve my writing skills, to work steadily on large programming projects, or to acquire peripheral knowledge that I don’t find interesting for its own sake. In regards to your post, I’m also honest: I accept that I am largely selfish and less altruistic, and will probably never have any genuine interest in larger social concerns, except insofar as they’re an intellectual exercise. (My emotional disposition happens to make this easier for me, too.)

    But I’ve found that these attitudes decrease my motivation. (And thus, eventually, my happiness.) To be motivated, people need ideals to strive towards, hypocritical or not. They need a sense of purpose. So I think being more honest with ourselves is psychologically unworkable. (Well, incompatible with getting anything done, anyway.)

    For example, without believing that choosing to smoke is somehow normatively wrong, people will find it harder to overcome time inconsistency when trying to quit. Without believing that I have obligations to community or family or nation or employer, I find it hard to motivate myself to learn about subjects I have no interest in, or to learn those subjects in a rigorous way that I find tedious and difficult (e.g. taking a neuroscience class vs. reading science news). Because if I’m only failing myself, then during the times when I become demotivated, I have no reason to care.

    Maybe this is what you meant by your “will power” remark, but I’m not sure.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, I tried to be as clear as I could, but let me rephrase: The bias here is the difference between what we say/think and what we do; this can be resolved by either changing what we do or what we say/think. Both eliminate the bias. My guess is that it is easier to change what we say and think, because what we do is driven more fundamentally by what we really want, which is very hard to change. Of course I could be wrong, or perhaps you are an exception.

    Pdf, yes, it may well be that we are incapable of complete honesty with ourselves. The question then becomes how we allocate our limited ability to be honest; on what topics are the benefits of increased honesty largest, relative to the costs?

  • http://pdf23ds.net pdf23ds

    Robin, well, my first hunch is that the benefits would be more towards time inconsistency and overconfidence than towards high motives. Of course, you’re not very precise in your post about what sort of self-deceptions (out of the many) you’re referring to.

  • Dagon

    I’m with pdf23ds in being interested in time inconsistency as a separate topic, though it’s certainly related. There are a lot of things that can be treated as either time inconsistency or thought/action inconsistency with no way to tell the difference.

    Robin, what do you say about the feedback loop between thinking/saying and acting? Reducing hypocrisy (and self-deception, which I claim is distinct) is a fine goal if it has no other effect, but I don’t think it’s separable from action.

    Believing and saying something, even aside from commitment strategies, has an effect on actual motivations. If I believe I want to do something, I’m more likely to actually do it than if I believe I kind-of want to, but probably won’t by the time the opportunity presents.

    If this is true, then it is NOT effective to reduce expectations. It’s far better to accept the self-deception and fallibility in being human that allows us to believe in ideals that we won’t always (or in some cases ever) achieve.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Dagon, when this forum started we had a lot of discussion of why overcome bias, and we all acknowledged that we gained many personal, and some social, benefits from our biases. I argued that what distinguished us here was that relative to others, we wanted more to reduce biases, and we accepted there would be real costs of this.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, you seem to think that evolution only fooled you about a relatively isolated area of your beliefs about the consequences of actions. If this were true, you could count on knowing your wants and recruit most of your other mental capacities to correct those errors about consequential beliefs. The quote from La Rochefoucauld given above says that the situation is far worse; the part of your mind that tries to get you to achieve non-ideal goals is more clever and better organized than the part of your mind that tries to achieve ideal goals. It is so good that it can win and yet let your ideal mind think it has won.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    “My guess is that it is easier to change what we say and think, because what we do is driven more fundamentally by what we really want, which is very hard to change.”

    You say, “driven more fundamentally” and “what we really want”, I say, “System 1 reinforcement architecture.” Human beings are complex, evolved systems and they have multiple parts that do the same thing and interfere both constructively and destructively. There are not some parts labeled “fundamental” and “true desire”. From one standpoint, deliberate thought and learned skills are central; from another standpoint, emotion and hardwired reinforcers are central. I would call neither “central”. I would call them both “centers of gravity” and say the human mind has more than one.

    What you choose to regard as “fundamental” and “true desire” is, to no small extent, a *choice*.

    “The part of your mind that tries to get you to achieve non-ideal goals is more clever and better organized than the part of your mind that tries to achieve ideal goals.” I would say this is simply false. You are not of two minds: Evolution was hard enough put to construct *one* mind running on your brain. Evolution has a very limited complexity budget. All this business of an evolutionary warp separating ideals and behavior is probably linked to protolanguage and deliberate politics, making it a few million years old at the most; and probably most of the selection pressure is even more recent than that. That’s not much time. It certainly isn’t enough time to construct a wholly new deliberative system that works at deceiving you! We’re talking about simple brain hacks keyed into mostly existing systems.

    For example, you extrapolate various strategies for accomplishing your ideals, but you flinch away from strategies with unpleasant consequences to you, and flinch toward strategies that require little personal pain on your part while ensuring that social fellows look upon you admiringly. Note how *simple* this is from an evolutionary perspective. It requires just a small link from an existing and more ancient reinforcement system, instead of constructing a whole new mind from scratch.

    Evolution’s chief advantage, and the source of its apparent cunning, is a vast amount of pragmatic testing. The above isn’t just a clever-sounding idea evolution had, it’s there because evolution found it to work well in practice.

    You can do a hell of a lot with simple hacks if you get a million trial runs to find out what works and what doesn’t. But it is still a finite complexity budget and there are only so many tricks that evolution has. Your genes did not evolve to control economics professors, only illiterate hunter-gatherers with no cognitive psychology journals and no concept of evolution. You have the opportunity to accumulate knowledge and skills that are more *complex* than evolution’s complexity budget for messing with your head. And you can think vastly faster – from our perspective, evolution reacts so slowly as to be motionless; with respect to any given individual it *is* motionless. You can fight, and it’s not allowed to fight back.

    Defeating evolution and becoming a truly effective idealist is a quite reasonable goal, if you don’t let the convenience of giving up seduce you before you start.

  • Calca

    Robin,

    I don’t understand why what we say and think should be at all related to what we do. Can’t we hold opinions about all kinds of topics? What I do is completely unrelated to international trade or foreign aid, so I cannot but be idealistic about such questions. Bias is axiomatic in this case. You take a side through a very mysterious process, a bit like the way you grow up with certain sport-team allegiances. Why would anyone try to overcome this bias? Do you really think that some kind of progress would ensue? I spent my life debating friends and haven’t changed one person’s mind that I know of. I gave up trying to overcome my own and other people’s biases a long time ago; it would be like trying to convert someone to a different religion.

  • ChrisA

    Calca

    I agree that many people get their priors, ideals or moral positions through non-rational processes, but I agree with the aim of this blog that it would be better to do so through rational analysis, or at least using others’ rational analysis. Doing so must avoid wasting time/resources pursuing mistaken goals or undertaking activities I shouldn’t have. To think otherwise would be nihilistic to me, since otherwise I would have to say anything I do has equal value (wrong or right).

    To me, though, hypocrisy is different from bias or self-deception: it is where I might hold an ideal (which may or may not be rational) but be unable to live up to it. The criminal justice system seems to recognise this: if you can show that you had no sense of right or wrong when you committed a crime, you may be absolved of the crime. Therefore the general assumption of the criminal justice system is that you are aware of the wrongness of the action you are committing. It is also trite to say that all parents are hypocrites when it comes to their kids.

    Thinking about Robin’s question though, can you really hold an ideal which you voluntarily perform actions against? Like Eli, I don’t believe there are warring minds inside me, but there may be a rational centre, a pleasure centre and a pain centre which are separate (you can think, and experience pain and pleasure at the same time).

    Using as an example a person who gambles excessively, he can see the damage his gambling is doing to his family and so genuinely believe that his gambling is wrong, but be unable to stop because the reward he gets from his pleasure centre in his mind is overwhelming the pain centre (which is generating pain from the disconnect between his rational position and his actions, otherwise known as guilt).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, yes of course we are complex, we do not have two completely separate brains, and mind parts don’t come with labels. Nevertheless, we can usefully distinguish the parts of our minds that are recruited for the purpose of presenting ourselves as following high motives, and the parts of our minds that make sure that in practice we achieve low motives. You describe these parts as “deliberate thought and learned skills” versus “emotion and hardwired reinforcers” and claim that the former can have more complex “knowledge and skills” and think faster than the latter.

    It is true that you are conscious mostly of the high motive mind activity. But this does not at all mean that the low motive mind activity cannot deliberate, learn, be complex, and think fast. And both parts are “you.”

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Calca, yes of course there are few actions of ours related to some of the topics where we have opinions. But for other topics, such as the ones I listed, we have both opinions and related actions. And yes, if you don’t want to overcome your biases, this forum may have little of use to offer you.

  • Calca

    Robin,

    I’m just trying to understand, don’t get me wrong. I’m just very skeptical at the moment. What worries me are situations like, say Stalinist Soviet Union. Take for instance this sentence of ChrisA: “the aim of this blog that it would be better to do so through rational analysis or at least using others rational analysis. Doing so must avoid wasted time/resources in pursuing mistaken goals or undertaking activities I shouldn’t have. To think otherwise would be nihilistic to me…”
    Well, ‘nihilistic’ was a common accusation thrown at dissidents in the USSR. Who is to decide who is wasting time/resources? In the end I don’t think we will ever know.

    (But I like ChrisA’s example of the criminal justice system.)

  • Calca

    I have a few more questions. Has anybody ever been able to change someone else’s mind through honest debate? In my experience, even when I succeed in getting partial agreement with my positions, invariably when I meet the person again some time later the person has reverted to his/her original position; in other words, my arguments didn’t stick.

    Secondly, can there be situations where even though bias has been spotted it is still better to keep it instead of overcoming it? Suppose, for instance, that science “proves” that there are indeed differences between the sexes as far as the intellectual process is concerned. Isn’t the idea that “men are created equal” a good bias to have?

    You have probably already considered such questions.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Calca, this web forum is organized into posts and comments on those posts. Please don’t take the comment section of any post as a place to ask any random question; stay on topic.

  • Carl Shulman

    Robin,

    I am skeptical that in your scenario one could suddenly gain an honest self-assessment (by your stipulation, without changing what we really care about) without affecting actions, even if motivations remained unchanged.

    Take our inefficient charitable giver, and specify that he gives $5,000 to the opera he attends. He deceives himself into thinking that this is because he is an altruistic person, concerned with the welfare of everyone, not paying attention to the social status it brings among his opera-loving associates.

    Then, one day on the road to Damascus our opera buff realizes that he ‘really cared’ about the social status, to the tune of 90%, while only 10% of his motivation was altruistic. It seems implausible that behaviour could be unchanged, even if the desires are constant: why not explicitly separate the two activities given this new knowledge? He would reallocate $500 to the optimal charity, almost certainly getting more than 10 times the altruistic ideal-attainment, beating any nebulous prior estimates of opera value (keeping in mind that people who are not honest about the efficiency of their charities are likely not to have any quantitative estimates at all). The remaining $4,500 could then be allocated purely on the basis of status maximization (which might lead to greater net status via non-opera conspicuous consumption, without the altruistic/group-loyalty charade).

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    ‘We can usefully distinguish the parts of our minds that are recruited for the purpose of presenting ourselves as following high motives, and the parts of our minds that make sure that in practice we achieve low motives.’

    At most, we could single out a few pieces of circuitry as having been adapted specifically for those two purposes. But the vast majority of the mind serves neither purpose specifically, and is recruitable by both circuits. Circuitry does not control directly; it has its effect in the thoughts it shapes. By the time you actually think “I want to save the world…” or “…and becoming tribal chief and crushing my opponents seems like a good way to do it”, you’re thinking thoughts that were shaped by thoughts that were shaped by circuitry. And, as the above example suggests, in the ancestral environment, these pieces of circuitry were not opposed at all; they worked in smooth harmony. Human beings are not fought over by God and Satan; we were created by a single optimizer, natural selection.

    ‘You describe these parts as “deliberate thought and learned skills” versus “emotion and hardwired reinforcers” and claim that the former can have more complex “knowledge and skills” and think faster than the latter.’

    *You* exist on a level of organization above the idealism circuitry and the behavior warps. Deliberation, “thinking”, takes place via unique, non-repeating information flows, which we experience as being the little voice in our own heads. Knowledge and skills are individualized, learned, and persistent over time. Neural circuitry is species-universal and encoded into a comparatively small amount of genetic information.

    It is reasonable for “you” to do things that run counter to the pressures of neural circuitry – even subtle warps – if you have more knowledge and skill at the art of rationally idealistic deliberation, than the neural circuitry originally evolved to warp.

    You have a bipolar adaptation of idealistic circuitry and behavior-warping circuitry, which works in harmony in the ancestral environment, but tends to be set at odds once human beings achieve some skill at reflection, knowledge of history, and awareness of evolutionary psychology. Now the pressures diverge – not because of changes in the pressuring circuitry, but because of new knowledge in the rest of your mind. Which pressure wins out can quite reasonably be determined by which side *you* choose to be on. Because you, the deliberative process, are aware that you have a choice, which you can make explicitly, and then deliberate on how to shape your further deliberations so that the “high” pressures win out.

    Human beings sure are complex creatures, aren’t we?

    ‘It is true that you are conscious mostly of the high motive mind activity. But this does not at all mean that the low motive mind activity cannot deliberate, learn, be complex, and think fast. And both parts are “you.”’

    Neither part is you. The “low motive” *circuitry* – not mind, but hardwired pressure on the mind – *cannot* deliberate, it *cannot* think, it probably cannot even learn. The low motive is a pressure on your deliberation. So are the high motives. But the pressure is applied at different points – remember, in the ancestral environment, these were not *opposed* pressures. They only come into opposition within you because you possess reflective knowledge that your ancestors didn’t. If you realize this and deliberately choose a side, then your deliberate mind activity will be on the side of high motives; and while the “low motive” will continue to pressure your deliberation, it cannot actually deliberate apart from your primary stream of consciousness.

  • http://pdf23ds.net pdf23ds

    “The ‘low motive’ *circuitry* – not mind, but hardwired pressure on the mind – *cannot* deliberate, it *cannot* think, it probably cannot even learn.”

    There is evidence that several kinds of trauma, especially during childhood, can shape people’s reactions. There’s evidence for implicit memory for pictures that lasts 17 years.

    http://peripersonalspace.wordpress.com/2006/12/12/long-term-memory-for-pictures/

    There’s evidence that cultural upbringing shapes some basic attitudes that may determine one’s political predisposition. So I would not say that it cannot learn, depending on what “it” we’re talking about.

    On the other hand, I definitely agree that it can’t deliberate.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, yes of course, “the vast majority of the mind serves neither purpose specifically.” You keep saying such things as if you think I said the opposite. You apparently have a particular complex theory of mind which I am so far finding hard to interpret from your writings. So at the moment I can’t think of what more to say than what I’ve already said. “Your” “deliberation” is not some autonomous process; it is in part a battleground for these two tendencies we have been discussing. So the existence of deliberation per se doesn’t say which side has what advantages.

    Carl, the scenario you paint is one of thousands we could paint. I don’t understand its relevance.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, deliberation is a battleground but the battleground itself can *choose sides*. You *are* your deliberation. You are the little voice in your own head. So if you, the battleground, pick the high-minded pressure, it’s not going to be an even fight.

    You, the battleground, are an ancestrally anomalous case; the pressuring circuitry did not evolve to strategically defeat an aware opponent, but only to influence an unaware one.

    For the background theory of mind, it’s a bit old, but see http://www.singinst.org/LOGI/levels/levels.html.

    Pdf, the “low motive” behavioral reinforcement / strategy-pressuring architecture certainly influences how all sorts of skills are learned, but it’s questionable whether the system itself can be said to “learn”, especially in a sense that contributes to defeating “high motives” when the two poles of the adaptation are brought into nonancestral conflict by our reflective knowledge of evolutionary psychology. Of course the answer could be, “Yes, it learns” but I’m not sure what evidence there is for that, or what sort of learning would take place. Obviously the system influences what is learned by us, the battlegrounds. Does the system learn how to influence us better? How? What parameters are being adjusted?

  • http://pdf23ds.net pdf23ds

    Ahh, I see that you’re using “learn” in the context of the battleground, whereas I thought you were using it more generally. I’m not aware of any evidence that the subconscious learns anything to try to defeat conscious attempts to better follow high-minded ideals.

    Geez, every time I use the word “subconscious” I want to put a disclaimer by it. Hopefully you will forgive me.

    “Which pressure wins out can quite reasonably be determined by which side *you* choose to be on.”

    I think that the ability to do this varies very widely amongst people, due to differences in intelligence (because proper introspection is hard) and disposition (ADD, depression, and the like make it very hard to win).

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    I added a section to the post above, offering a useful analogy. Here is another analogy: Your conscious mind is like a boat on the ocean, taking credit for the way it rises and falls as ocean waves pass under it.

  • Carl Shulman

    Robin,

    My example was pointing to the deadweight losses of hypocritical behaviour: the distortion of idealistic motives by selfish unconscious drives is a Rube Goldbergian process that efficiently realizes neither selfish nor altruistic aims, nor a weighted average of the two according to their strength. If the two motives are considered and balanced explicitly by an honest agent, then there are Pareto-improving gains to be had by modifying action.

    You wrote:
    “we can resolve our hypocrisy two ways: we can start really living up to our high ideals, or we can admit we don’t care as much as we thought about those ideals,” but both options will require concrete change.

    Second, as I mentioned above, I specifically doubt that your claim that the “more honest will tend to have lower expectations about achieving ideal ends” is true across domains, because the dishonest will frequently 1) not have made any rigorous estimates about success, or 2) not have thought about efficiency at all, only a ‘fuzzy feeling.’ Honesty reduces self-serving bias in expectations about achievement, but improvements in efficiency make the overall effect ambiguous.

    Third, the example incorporated an actual residual desire (10%) for idealism. You seem to be alternating between two characterizations of self-deception: in one, idealistic desires are truly epiphenomenal (#1 in my earlier comment) as in your corporate PRD model, while in the other the desires actually have motivational force, but less than we imagine. The second seems more accurate: the ‘PRD employees’ are like environmental activists hired by oil companies to provide credible defenders, and fed a misleading line by management. If increased honesty/transparency makes it clear that they are wasting their efforts they may leave for Greenpeace, even if they can’t change the company.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, you’re being way too pessimistic here. Yes, conscious thoughts can be twisted, warped, rearranged, subverted, retold in revisionist histories, but they are not epiphenomenal. The number of successful altruists in human history is not zero.

    I point out that being extremely pessimistic is (a) a reason not to try and (b) a social defense which argues that no one else could have done better. Surely, one ought to be a bit suspicious here…?

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, I think we are having a serious problem finding language to express our points of view. For example, I don’t see how you could think I claimed that there have been zero successful altruists. I am less concerned about whether I am pessimistic or optimistic, and more concerned that we understand what we are saying.

    Carl, unless I say otherwise, all my claims are intended as tendencies, and I expect there to be exceptions.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, could you say what you think we should *do about* this bias, to overcome it? A confusing belief should be cashed out as an anticipated experience, and oftimes it is more revealing yet to speak of actions.

  • http://profile.typekey.com/robinhanson/ Robin Hanson

    Eliezer, I expect many future posts on this topic, that go beyond my initial advice, which was to accept that your motives are less ideal than you wish.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin, that’s what I’ve been objecting to this whole time. I accept that my mind is a big system, that the pressures in it don’t all work the way “I” (that is, the deliberative battleground) wish they would. It may even be that the causes of my actions are sometimes not what the little voice in my head thinks they are. None of that means my ideals are not what I chose them to be. It means that, having made my decision, I now need to enforce it.
