18 Comments

---"I'll say we tend to be mistaken about how much our wants depend on contextual details."

I have noticed people tend to ignore this when asked about counterfactual behavior. They tend to think in terms of their 'right-now' motives, or in the abstract 'far mode' you describe. For example, if asked whether they would kill someone who had never hurt them, many of the non-aggression/pacifist type affirm that they would not, although from what we know of mass conscription this is not often true even of pacifists and objectors.

Similar to your point about the various reasons people might act more in line with your ideals, a pacifist might well refuse to shoot his sister or someone he ethnically identifies with, and think of that refusal in moral terms, because the contrary context - not-his-sister - is out of sight, out of mind.


I agree with Manon. I, me, the conscious mind, want to be in charge of my life, not a backseat driver to a chimp willing to kill and steal without a second thought if it gets desperate. The chimp mind is useful, because the rational mind is sloooowww, but the chimp brain needs to be my chauffeur, not in charge.


A paper from Science this week, along these lines of thinking...

Self-Control in Decision-Making Involves Modulation of the vmPFC Valuation System

"...First, they suggest that self-control problems arise in situations where various factors (e.g., health and taste) must be integrated in vmPFC to compute goal values and that DLPFC activity is required for higher-order factors, such as health, to be incorporated into the vmPFC value signal. We speculate that the vmPFC originally evolved to forecast the short-term value of stimuli and that humans developed the ability to incorporate long-term considerations into values by giving structures such as the DLPFC the ability to modulate the basic value signal...."


Robin, in a comment above you say that you're "not very eager to replace humans with creatures who better live up to current human ideals". I'm intrigued, but I'm not clear why you are so skeptical of ideals. Do you simply think that most people hold ideals which you don't support? Or would you also be reluctant to replace yourself with a creature who's better at living up to your own current ideals?


Re: As another example, I might say that while I don't follow a utilitarian ideal, acting to max a sum of individual utility, I do seek respect for filling the role of an economist who consistently suggests efficient deals, and that the utilitarian cause would be better achieved if we consistently made such deals.

You *might* claim not to be a utilitarian?

I am inclined to attempt to pin you down here: are you a utilitarian, or not?

Optional follow-up questions: if not, why not? and if so, what is your utility function?


Larry, some may be unconsciously represented; we just don't know.

diogenes, I'm not saying we don't have weak wills, I'm saying that summary throws away most of the interesting detail. We don't at all have randomly weak wills.

Belli, what is "the training"?


Wouldn't this kind of behavior be expected from something that's partly a herding animal? We bear a sort of fuzzy benignness toward whatever groups or things are outside ourselves, and another sense of benignness about ourselves as individuals, with the two periodically overlapping one another on the occasions when habit (anything can be trained, of course) gets overridden.

Considering the training for a moment: someone who's strongly dedicated to their ideals can obviously work themselves into an emotional storm where their far ideals come to the fore.


Focusing on individuals, instead of groups -- I really think you are under-estimating the importance of weakness of will. The self-help section is often the largest part of a bookstore. Humans have difficulty creating or maintaining new habits. Exercise is beneficial in almost every manner (for your health, for your attractiveness, for your intelligence, etc.), yet most people can't consistently do it. Hell, $$$ is the most American value of all -- and tons of people have a hard time following through on their own plans for $$$. This isn't a motive problem -- it's a problem with the wiring of our brain's reward system, especially when it comes to delayed gratification.
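
To make the delayed-gratification point concrete, here is a toy sketch of hyperbolic discounting, a standard model of exactly this reward-wiring problem (my illustration; the discount rate k and all amounts are made up):

```python
# Hyperbolic discounting: the present value of a reward of size A
# delayed by D days is V = A / (1 + k*D). With enough delay the
# larger-later reward wins, but once the smaller reward is imminent
# the preference reverses -- the classic weak-will pattern.

def discounted_value(amount: float, delay_days: float, k: float = 1.0) -> float:
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# Viewed from a month out, the larger-later reward looks better:
print(discounted_value(100, 40) > discounted_value(50, 30))  # True
# Once the smaller reward is available right now, preference reverses:
print(discounted_value(100, 10) > discounted_value(50, 0))   # False
```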

If your daily social group is a bunch of OCD over-achievers you might get a distorted picture of humanity as a whole. The majority of people are unable to change their habits to fit their ideals.


It occurs to me that Far can influence Near by committing to costs for Near's violating Far's ideals. This is why betting on weight loss achieves better diet compliance. It's also why Pigouvian taxes are a better way to achieve broad-based environmental goals than promulgating a pro-environmental ideology.

Near's defense, of course, is the cost of implementing a commitment. So if one wants to help people achieve their Far ideals, one should focus on coming up with low-friction mechanisms for making such commitments.
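
A minimal sketch of such a mechanism (hypothetical names and amounts, just to show the incentive structure): Far stakes money that Near forfeits by defecting, pricing violations of the ideal much like a Pigouvian tax.

```python
# Minimal sketch of a low-friction commitment device, in the spirit of
# weight-loss bets: Far puts money at risk up front, and Near forfeits
# it by failing to comply, so defection now carries an explicit price.

class CommitmentContract:
    def __init__(self, stake: float, goal_met) -> None:
        self.stake = stake        # money Far puts at risk up front
        self.goal_met = goal_met  # callable: did Near actually comply?

    def settle(self) -> float:
        """Return the stake if the goal was met; forfeit it otherwise."""
        return self.stake if self.goal_met() else 0.0

# Hypothetical outcome: Far commits $100 to losing 5 pounds.
pounds_lost = 6.0
contract = CommitmentContract(stake=100.0, goal_met=lambda: pounds_lost >= 5)
print(contract.settle())  # 100.0 -> stake returned; defecting would cost $100
```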


Hanson: "Manon, we would do well to resolve our internal conflicts, rather than continuing an internal civil war."

"Well" by whose standard? Of course those damned Near rebels want a ceasefire just when they are about to be crushed! But there will be no truce; we will not rest until the insurrection is destroyed and proper Far authority is restored. Onward march! "John Brown's body lies a-mouldering in the grave--"


I think it's important to point out that the detailed strategy is not generally consciously represented, but built into our emotional architecture.


Josh, I am not very eager to replace humans with creatures who better live up to current human ideals.

Nick, we far overestimate how often it is, or could be, in our interest to live up to our ideals.


Sounds like people are much more likely to live up to their ideals if it is in their interest to do so. That sounds about right to me, and it sounds like a pretty good way of saying that many people fail to live up to their ideals because they are weak-willed. But you seem not to like that way of putting things. Why?


As Ron Arkin says about building an AI with better morals than a human being, "It's a low bar."


id, Christian beliefs have a framework for thinking about internal conflicts between ideals and baser habits, but I'm not sure they really resolve this conflict well.

ajb, in extreme circumstances, far more people may be watching, and are willing to offer larger rewards. Usually we prefer to place young, attractive, and inexperienced people in such circumstances, as they are most likely to follow their ideals.

Manon, we would do well to resolve our internal conflicts, rather than continuing an internal civil war. Your war-embracing plan to "kill it dead" bodes ill for your future. In my best internal resolution, I dispute that "good" means far ideals, and your verbal proposition does not seem true to me.


Just stop modelling a person as one agent. You've just shown Far!Robin and Near!Robin want different things; why the hell are you asking which one is what Robin really wants, and which is a lie? There is no such fact. We, the conscious minds, seem to be the Far ones; the verbal proposition "I want to save the world even if it kills me" feels true; our explicit reasoning is Far, whereas our gut feelings are Near. But we are only one of the many programs running on our brains, and we are not the OS; we only get to act when the unconscious Near-mode mind needs public relations.

And we are right. This is obvious - we evolved (to talk about) morality using Far mode; if the word "good" means anything, it means Far ideals. We do want to follow them, but we are not just "sometimes thwarted", we are constantly trapped by a selfish ape who controls our bodies much more than we do. We just need to let go of that illusion of control. It's stupid to claim we "really want" our Near goals, akin to claiming we "really want" to maximize genetic fitness. The proper course of action with that selfish mind is to find ways around it (such as giving it fuzzies so that it will let us purchase utilons), and, ASAP, kill it dead. (That is, kill the optimization process, but retain the prediction power of Near mode, and whatever else we can use. Just don't let it control us.) Rewrite my brain, please.
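
To put the many-agents framing in toy form (my own sketch, with invented actions and utilities):

```python
# Toy framing: model one person as two sub-agents with conflicting
# preferences over the same actions. There is no further fact about
# which choice the person "really wants".

ACTIONS = ["donate to charity", "buy dessert"]

far_utility = {"donate to charity": 10, "buy dessert": -2}   # explicit ideals
near_utility = {"donate to charity": -5, "buy dessert": 8}   # gut impulses

far_choice = max(ACTIONS, key=far_utility.get)
near_choice = max(ACTIONS, key=near_utility.get)
print(far_choice, near_choice)  # different answers, same "person"
```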
