Beware Commitment

We choose “shoulds” over “wants” more often in far mode:

[Of] various programs, some were public policies (e.g., gas price) and some were personal plans (e.g., exercising). These programs presented a conflict between serving the “should” self and the “want” self. Participants were first asked to evaluate how much they thought they should support the program and how much they wanted to support the program. Then, they were asked to indicate how strongly they would oppose or support the program. Half of the participants were told that the program would be implemented in the distant future (e.g., in two years) and the other half were told the program would be implemented in the near future (as soon as possible). The results indicate that support for these “should” programs was greater among participants in the distant future implementation condition than among participants in the near future implementation condition. Further examination of the “gas price” policy revealed that the construal level of the policy mediated the relationship between the implementation time and the support for the policy. Participants were more likely to choose what they should do in the distant future as opposed to the near future. … [This] has an important implication: … policy-makers could increase support for “should” policies by emphasizing that the policies would go into effect in the distant future. (more)

All animals need different ways to reason about things up close vs. far away. And because humans are especially social, our forager ancestors evolved especially divergent near and far minds. Far minds could emphasize presenting an idealized image to others, while near minds could focus on managing our less visible actions. Homo hypocritus could see himself from afar, and sincerely tell himself and others that when it mattered he would do the honorable thing, even if in fact he’d probably act less honorably.

One reason this was possible was that foragers had pretty weak commitment mechanisms. Yes, they could promise future actions, but they rarely coordinated to track others’ promises and violations, or to organize consistent responses.  So forager far minds could usually wax idealistic without much concern for expensive consequences.

In contrast, farmer norms and social institutions could better enforce commitments. But instead of generically enforcing all contracts, which would have given far minds more control over farmer lives, farmers were careful to enforce only a limited range of commitments. Cultural selection evolved a set of approved standard commitments that better supported a farmer way of life.

Even today, our legal systems forbid many sorts of contracts, and we generally distrust handling social relations via explicit flexible contracts, rather than via more intuitive social interactions and standard traditional commitments. We are even reluctant to use contracts to give ourselves incentives to lose weight, etc.

The usual near-far question is: what decisions do we make when in near vs. far mode? But there is also a key meta decision: which mode do we prefer to be in when making particular decisions?

Speechifiers through the ages, including policy makers today, usually talk as if they want decisions to be made in far mode. We should try to live up to our ideals, they preach, at least regarding far-away decisions. But our reluctance to use contracts to enable more far mode control over our actions suggests that while we tend to talk as if we want more far mode control, we usually act to achieve more near mode control. (Ordinary positive interest rates, where we trade more tomorrow for less today, also suggest we prefer to move resources from far into near.)
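
To make the interest-rate parenthetical concrete, here is the standard discounting arithmetic (a minimal illustrative example; the specific numbers are chosen only for illustration):

PV = F / (1 + r)^t

At a positive annual rate r, an amount F delivered t years from now is worth PV < F today; with r = 5% and t = 1, for instance, a dollar promised next year trades for only about 95 cents now. Positive rates thus price near resources above far ones.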

We thus seem to be roughly meta-consistent on our near and far minds. Not only are we designed to talk a good idealistic talk from afar while taking selfish practical actions up close, we also seem to be designed to direct our less visible actions into contexts where our near minds rule, and direct grand idealistic talk to contexts where our far minds do the talking.  We talk an idealistic talk, but walk a practical walk, and try to avoid walking our talk or talking our walk.

So yes, encouraging folks to commit more to decisions ahead of time should result in actions being driven more by our more idealistic far minds. In your far mind, you might think you’d like this consequence. But when you take concrete actions, your near mind will be in more control, making you more wary of this grand idealistic plan to get more grand idealism. Our hypocritical minds are a delicate balance, an intricate compromise between conflicting near and far tendencies. Beware upsetting that balance via crude attempts to get one side to win big over the other.

Longtime readers may recall that my ex-co-blogger Eliezer Yudkowsky focuses on a scenario where a single future machine intelligence suddenly becomes super powerful and takes over the world. Considering this scenario near inevitable, he seeks ways to first endow such a machine with an immutable summary of our best ideals, so it will forevermore make what we consider good decisions. This seems to me an extreme example of hoping for a strong way to commit to gain a far-mind-ideal world.  And I am wary.

Added 8a: Michael Vassar objects to my saying Eliezer Yudkowsky wants to “endow such a machine with an immutable summary of our best ideals”, since Yudkowsky is well aware of the danger of using “Ten Commandments or Three Laws.” Actually, one could argue that Yudkowsky has an air-tight argument that his proposal won’t overemphasize far over near mode, because his CEV proposal is by definition to not make any mistakes:

Coherent extrapolated volition is our choices and the actions we would collectively take if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

Now I hear a far mode mood in the second “wished we were” clause, but the first clause taken alone suggests a “no mistakes” definition. However, it seems to me one must add lots of quite consequential qualifying detail to a “no mistakes” vision statement to get an actual implementation. It is only in a quite far mode that one could even imagine there wouldn’t be lots of such detail.  And it is such detail that I fear would be infused with excessively far mode attitudes.

  • http://disputedissues.blogspot.com Stephen R. Diamond

    At risk of sounding patronizing, this is your best entry in a long time. Excellent, because of this profound insight: “Our hypocritical minds are a delicate balance, an intricate compromise between conflicting near and far tendencies.” Of course, we’re much more likely to praise the expression of an opinion we agree with [a bias so obvious it probably isn't worth mentioning], and this balance is something I’ve been pondering recently.

    A stray thought: A political constitution will take a far perspective because it’s expected to be semi-permanent. The United States is really the only country where a long political tradition includes a written constitution. This may account for why, within homo hypocritus, Americans are the most hypocritical variety.

    • Doug S.

      This may account for why, within homo hypocritus, Americans are the most hypocritical variety.

      [citation needed]

  • michael vassar

    “Eliezer Yudkowsky… seeks ways to first endow such a machine with an immutable summary of our best ideals, so it will forevermore make what we consider good decisions. ”

    WTF!?!
    How many times does he have to emphasize that this is precisely the opposite of the idea behind CEV? The original document says

    “There are fundamental reasons why Four Great Moral Principles or Ten Commandments or Three Laws of Robotics are wrong as a design principle. It is anthropomorphism. One cannot build a mind from scratch with the same lofty moral statements used to argue philosophy with pre-existing human minds. The same people who aren’t frightened by the prospect of making moral decisions for the whole human species lack the interdisciplinary background to know how much complexity there is in human psychology, and why our shared emotional psychology is an invisible background assumption in human interactions, and why their Ten Commandments only make sense if you’re already a human. They imagine the effect their Ten Commandments would produce upon an attentive human student, and then suppose that telling their Ten Commandments to an AI would produce the same effect. Even if this worked, it would still be a bad idea; you’d lose everything that wasn’t in the Ten Commandments… ”

    and his later writings are no more ambiguous.

    • http://hanson.gmu.edu Robin Hanson

      I added a response to the post.

      • Alexander Kruel

        Yes, the idea is infallible by definition. Which wouldn’t be a problem if it was well defined. But since it lacks any detail its superficial appeal is mostly a result of its vagueness.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    Michael,

    I think you miss the point, which doesn’t necessarily concern the specific substance of the moral code. The point is there is *some* predetermined commitment, or some process producing one (in however fluid a fashion), that is set in motion from a *human* far perspective. It may not have our ideals, but it is brought into existence to produce results that comport with the human far perspective; it sets in motion a morality or process of moral development that is congenial from a far perspective, which produces bad results when untempered by near mentality. In other words, you’re creating—permanently—an entity that serves human far purposes, regardless of what process the entity uses to get there.

    • http://twitter.com/afoolswisdom sark

      A sequence of iterations of Near (and Far) mode moral evolution does not in the long run necessarily give rise to a Far result, simply because when it is not simulated in CEV but actually practiced by humanity in real time, it would take a long time to arrive at that result. Rather, Far mode is a mental process, and this process can be summarized/simulated/sped-up while still retaining its essential character.

      This is not about replacing our Near mode decisions with Far mode decisions, but about replacing decisions we would have regretted with those we would not ultimately regret. (Note: the value of avoiding ‘regret’ itself can be taken into account with CEV.)

      • http://juridicalcoherence.blogspot.com Stephen R. Diamond

        Your argument against the view that a sequence of near and far is not far merely because it culminates at a long time in the future is a straw man. Matt Simpson, below, makes my actual point more concisely and probably more effectively.

  • http://weblog.hotales.org/portal/python Jarno Virtanen

    Or to put it another way: forming explicit commitments and contracts for far mode decisions will only show how bad we are at keeping those commitments, and thus will ultimately undermine our confidence in commitments. If we don’t keep track of far mode commitments, at least we can better pretend that we are able to make any commitments at all.

  • Matt Simpson

    Michael,

    I think Robin’s point is that EY is focusing on our far-mode ideals, when it seems our near mode ideals are what we actually desire. Though one may argue that if what we actually desire is near mode ideals, CEV will converge on those.

    • Jess Riedel

      it seems our near mode ideals are what we actually desire

      If our stated desires are conflicting ideals in near vs. far modes, what could it mean to say that we actually desire one ideal or the other? People are inconsistent, and there’s no obvious way to resolve it.

      From what I understand of Robin’s writing, he would suggest that what we actually want should simply be identified with what we choose when push comes to shove. That is, we really want our near preferences, and our far thinking is just complicated machinery for signaling to others.

      But this just seems to be an artifact of the fact that we live in a world where strong long-term commitments are difficult. If we suddenly found ourselves in a world where we could make strong long-term commitments, our far-mode selves would so commit, and we would thereby realize our far-mode ideals much more often.

      Yes, we evolved for this world, and it is the evolutionary goal of our genes that our near-mode ideals are realized. So yes, our near-mode thinking will tend to defend the status quo by keeping it difficult to make strong long-term commitments. The actual decision to implement the strengthening of commitments will always have to be made in near mode. But we shouldn’t confuse our genes’ goals with what our “actual” goals are, insofar as we are trying to define the latter coherently. The hypothetical strong-commitment world seems to me to be as appropriate as the real world for the purposes of defining what we really want, so we shouldn’t point to the outcomes we actually get in this world as evidence.

  • Sam

    CEV gets its morality from extrapolation of what human minds would want given greater intelligence etc. Presumably, if we were smart enough, we would take the near-mode/far-mode distinction into account.

    Also, I think Eliezer’s fun theory posts, where he talks about creating a Utopia where people would actually want to live, address basically the same concerns you raise here. See e.g. http://lesswrong.com/lw/xm/building_weirdtopia/

  • scott

    “Speechifiers through they ages” – the?
    “an extreme example of hoping to a strong way to commit to gain a far-mind-ideal world.” – hoping for?

    Far mode developed in humans because it lets you project a good image, and it is divorced from near mode because humans need to make good decisions as well. Are good images and good decisions necessarily opposed? Not when the decision matters, you say. I agree. But why then do our criteria for goodness in images and decisions differ when we deal with low-consequence or low-likelihood decisions?

    Possibly our (cultural, social) concept of ‘good image’ is subject to non-consequence-related warping. When dealing with big, obvious, common decisions, the warping is constantly repaired – the image is kept in line – but when dealing with rare, subtle, or small decisions the warping is not repaired often, and so the good image drifts away from the good decision.

    Could this hypocrisy be resolved in some way, eg having a far-mode ideal of preferring decisions over images, and having a near-mode method of evaluating decisions and discounting images?

  • Pingback: Near Mindedness Vs. Far Mindedness – Camels With Hammers

  • Captain Oblivious

    At least in the case of things like gas prices (assuming we’re talking about a high tax designed to discourage use), there’s some logic behind favoring it more in the future than in the present: if you spring such a thing on people suddenly, some of them will be seriously negatively impacted; by declaring society’s intentions for some future point, people have more of a chance to arrange things suitably (e.g. buy a more efficient car, or move closer to work, or get a job closer to home, or whatever).

    Similar logic applies to things like eliminating the tax deduction for mortgage interest: there’s no good reason for it, but (especially in this economy) many people are somewhat dependent upon it, and the upheaval that would occur if it were suddenly eliminated (namely even more foreclosures, etc) would probably result in a net negative for society. But by picking some future point (or perhaps phasing it out by 10% per year over 10 years, starting 2 years from now), people would have time to adapt.

  • Ray

    I plan on being at the gym three mornings a week, but if held to some kind of bulletproof contract to do so, I would be very, very tired some mornings, and not really get much out of my workout. But I just couldn’t foresee that I was going to get home as late as I did Sunday night, necessitating an extra hour of sleep Monday morning.

    On the other hand, had I such a contract, I would have been watching the clock much closer Sunday night.

    Of course this is all fine as long as the state is not “encouraging” me to act in a better fashion.

  • jeff

    You seem to presume, because individuals don’t take far-mode action when presented with the choice on a near timescale, that individuals would be happier in a society that balances far and near mode incentives, or that encourages near-mode actions (you only urged against a society, such as the one that EY’s Friendly AI might create, where far mode ideals are enforced upon us, so I must assume one of the others as your desire).

    I’m skeptical of jumping from individual to group/society. It seems, as an individual, that I want the right to take near-mode actions. But as a society, I’d like some enforcing of far-mode ideals upon the group, otherwise everything breaks down. You want to work on risk mitigation — an incredibly far mode project. And I want society (not the individual) to help you. I think you should more carefully distinguish between such things as allowing contracts between individuals (which means more individuals stuck in far mode) and coordinated enforcement of far-mode ideals upon us (such as is most often suggested by the speechifying).

    • anon

      There is no such thing as “society”. Yes, individuals might be better off if they coordinated more. But coordination is hard and costly, even for small organizations with clearly defined goals such as for-profit businesses. A fortiori, coordinating a vast social group to achieve vague, far-mode goals is likely to be infeasible.

  • http://www.weidai.com Wei Dai

    Why don’t we see more efforts by businesses or other organizations to take advantage of construal level theory? For example, why don’t charities ask people to commit to making donations in the “distant future (e.g., in two years)”?

    • http://whyiamnot.wordpess.com Salem

      They do. It is common for charities to ask their supporters to commit to making a donation in their wills.

      • http://www.weidai.com Wei Dai

        Asking people to make donations in their wills can be explained easily without reference to near/far: people have selfish and altruistic components in their utility functions; their opportunities for increasing utility through selfish means take a big hit when they die, so making charitable donations often becomes the best use of their money.

        Asking people to commit to making donations in 2 years can’t be similarly explained, and would be a nice confirmation for construal level theory, if charities actually did that.

    • http://twitter.com/Rongorg Grognor

      I really think a sufficient answer is that construal level theory is still very new and just not well-known enough for that to happen, and where it is known it’s confounded by how difficult it is to get people to follow up on commitments. Cf. the loan market and its many problems.

      But in any case doesn’t the credit card industry do exactly this? “Buy now, spend later!”?

  • Matt Young

    Many of the short term decisions we are stuck with today result from far decisions a while ago. The farther away the decision, the more accurate we must be to meet the goal, and accuracy is painful. Accuracy is most painful when the far decision implies more accuracy than we currently use.

  • A dude

    The way I interpret the excellent NEAR-FAR framework is that NEAR mode describes how we want to behave, while FAR mode describes how we want others to behave and how we want others to think we’ll behave.

    It’s a form of the constant trading that humans engage in to try to take advantage of other humans. This trading presumably leads to a more efficient resource allocation, or so capitalism theory goes.

    So bridging the two via contracts is the same as saying that you will trade with others without trying to take advantage of them.

  • mjgeddes

    The ideal split is 70% near, 30% far. Whilst the majority of our time/resources should be devoted to doing productive things in the present (near, revealed preferences), a sizable proportion should still be allocated to our ideals (trying to realize our future potential, far, stated preferences).

    But the above paragraph I wrote is itself based on far mode analysis (note my use of the word ideal). This shows that concern for the future (if done correctly) also encompasses the present. So far analysis (correctly done) actually encapsulates near analysis. That is to say, our far mind can simply express a wish to delegate 70% of computational resources to our near mind. The reverse is not true, however, since near minds can’t talk about the abstractions of far. So far mind ultimately beats near mind.

    Bayesian analysis (math/analytic) is near. However this is merely a special case of categorization/creative analogy (far). Why? Because only categorization can provide coherent (consistent, integrated) logical representations of goals having multiple levels of abstraction (incomplete and inconsistent goals). Goal systems are never totally consistent. And the only way to integrate inconsistent goals is to smear out some of the details by forming categories, thus enabling different goal systems to coordinate (interface/talk to each other). Categorization can approximate Bayes to any desired level of accuracy, simply by adjusting the resolution (broadness) of the categories –>> limit of 100% resolution –>> categories disappear –>> ideal Bayesian. The reverse isn’t true. Bayes can’t fully encompass categorization, because its representational power is limited by the fact it only operates on a single level of logical abstraction. So categorization (far mode) ultimately beats Bayes (near mode).

  • Pingback: Overcoming Bias : Middle Is Near

  • Pingback: Alexander Kruel · Interesting Quotes Part 1

  • Pingback: Alexander Kruel · Asteroids, AI and what’s wrong with rationality

  • http://twitter.com/Rongorg Grognor

    This is the most brilliant post of the archives I’ve somehow missed. But it lends weight to the idea I had one day that Robin Hanson sees Eliezer Yudkowsky as sort of an embarrassing younger brother who should be kept at a distance so as to avoid unfortunate association. I think that is unfair if I’m right.