26 Comments

This is the most brilliant post in the archives that I'd somehow missed. But it lends weight to an idea I had one day: that Robin Hanson sees Eliezer Yudkowsky as a sort of embarrassing younger brother who should be kept at a distance to avoid unfortunate association. If I'm right, I think that is unfair.


I really think a sufficient answer is that construal level theory is still very new and just not well known enough for that to happen, and where it is known, it's confounded by how difficult it is to get people to follow through on commitments. Cf. the loan market and its many problems.

But in any case, doesn't the credit card industry do exactly this? "Buy now, pay later!"


Asking people to make donations in their wills can be explained easily without reference to near/far: people have selfish and altruistic components in their utility functions; their opportunities for increasing utility through selfish means take a big hit when they die, so making charitable donations often becomes the best use of their money.
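
That argument can be made concrete with a small optimization sketch; the notation and functional form below are mine (an assumption for illustration), not the commenter's.

```latex
% Toy formalization (my notation): wealth W is split between own consumption c
% and charitable giving g, with u and v increasing and concave.
\[
  \max_{c + g \le W} \; u(c) + v(g)
\]
% While alive, the interior optimum satisfies u'(c^*) = v'(g^*), so wealth goes
% to both uses. At death, the marginal value of further own consumption is
% effectively zero, so any unspent wealth does the most good on the altruistic
% margin -- the charitable bequest -- with no appeal to near/far needed.
```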

Asking people to commit to making donations in 2 years can't be similarly explained, and would be a nice confirmation for construal level theory, if charities actually did that.


The ideal split is 70% near, 30% far. Whilst the majority of our time/resources should be devoted to doing productive things in the present (near, revealed preferences), a sizable proportion should still be allocated to our ideals (trying to realize our future potential, far, stated preferences).

But the above paragraph I wrote is itself based on far mode analysis (note my use of the word ideal). This shows that concern for the future (if done correctly) also encompasses the present. So far analysis (correctly done) actually encapsulates near analysis. That is to say, our far mind can simply express a wish to delegate 70% of computational resources to our near mind. The reverse is not true, however, since near minds can't talk about the abstractions of far. So far mind ultimately beats near mind.

Bayesian analysis (math/analytic) is near. However, this is merely a special case of categorization/creative analogy (far). Why? Because only categorization can provide coherent (consistent, integrated) logical representations of goals spanning multiple levels of abstraction (incomplete and inconsistent goals). Goal systems are never totally consistent, and the only way to integrate inconsistent goals is to smear out some of the details by forming categories, thus enabling different goal systems to coordinate (interface/talk to each other). Categorization can approximate Bayes to any desired level of accuracy simply by adjusting the resolution (broadness) of the categories: in the limit of 100% resolution the categories disappear and you recover the ideal Bayesian. The reverse isn't true. Bayes can't fully encompass categorization, because its representational power is limited by the fact that it only operates on a single level of logical abstraction. So categorization (far mode) ultimately beats Bayes (near mode).
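
The "adjust the resolution of the categories" claim can be illustrated with a toy example (my construction, with made-up numbers, not the commenter's): estimate a coin's bias from 7 heads and 3 tails using only k equal-width categories for the bias, and watch the coarse answer approach the exact Bayesian posterior mean as k grows.

```python
# Toy illustration (hypothetical numbers): a "categorical" posterior over a
# coin's bias, using k equal-width bins, converges to the exact Bayesian
# answer as the bins get finer. With a uniform prior, h heads and t tails,
# the exact posterior mean is (h + 1) / (h + t + 2).

def categorical_posterior_mean(h, t, k):
    """Posterior mean of the bias when it can only take k coarse 'category' values."""
    mids = [(i + 0.5) / k for i in range(k)]          # one representative bias per category
    weights = [p**h * (1 - p)**t for p in mids]       # likelihood of the data per category
    total = sum(weights)
    return sum(p * w for p, w in zip(mids, weights)) / total

h, t = 7, 3
exact = (h + 1) / (h + t + 2)                         # ideal Bayesian answer: 0.666...
for k in (2, 4, 16, 256):
    print(f"k={k:3d}  categorical={categorical_posterior_mean(h, t, k):.4f}  exact={exact:.4f}")
# The coarse estimate converges to the exact one as k grows; going the other
# way (recovering useful categories from the ideal posterior alone) needs
# extra structure, which is the asymmetry the comment is pointing at.
```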


This may account for why, within homo hypocritus, Americans are the most hypocritical variety.

[citation needed]


The way I interpret the excellent NEAR-FAR framework is that NEAR mode describes how we want to behave, while FAR mode describes how we want others to behave and how we want others to think we'll behave.

It's a form of the constant trading that humans engage in to try to take advantage of other humans. This trading presumably leads to a more efficient resource allocation, or so capitalism theory goes.

So bridging the two via contracts is the same as saying that you will trade with others without trying to take advantage of them.


They do. It is common for charities to ask their supporters to commit to making a donation in their wills.


Many of the short-term decisions we are stuck with today result from far decisions made a while ago. The farther away the decision, the more accurate we must be to meet the goal, and accuracy is painful. Accuracy is most painful when the far decision demands more accuracy than we currently use.


Why don't we see more efforts by businesses or other organizations to take advantage of construal level theory? For example, why don't charities ask people to commit to making donations in the "distant future (e.g., in two years)"?


There is no such thing as "society". Yes, individuals might be better off if they coordinated more. But coordination is hard and costly, even for small organizations with clearly defined goals, such as for-profit businesses. A fortiori, coordinating a vast social group to achieve vague, far-mode goals is likely to be infeasible.


You seem to presume, because individuals don't take far-mode action when presented with the choice on a near timescale, that individuals would be happier in a society that balances far- and near-mode incentives, or in one that encourages near-mode actions (you only urged against a society, such as the one that EY's Friendly AI might create, where far-mode ideals are enforced upon us, so I must assume one of the others is your desire).

I'm skeptical of jumping from the individual to the group/society. It seems, as an individual, that I want the right to take near-mode actions. But as a society, I'd like some enforcement of far-mode ideals upon the group; otherwise everything breaks down. You want to work on risk mitigation -- an incredibly far-mode project. And I want society (not the individual) to help you. I think you should more carefully distinguish between such things as allowing contracts between individuals (which means more individuals stuck in far mode) and the coordinated enforcement of far-mode ideals upon us (as most often suggested by the speechifying).


"it seems our near mode ideals are what we actually desire"

If our stated desires are conflicting ideals in near vs. far modes, what could it mean to say that we actually desire one ideal or the other? People are inconsistent, and there's no obvious way to resolve it.

From what I understand of Robin's writing, he would suggest that what we actually want should simply be identified with what we choose when push comes to shove. That is, we really want our near preferences, and our far thinking is just complicated machinery for signaling to others.

But this just seems to be an artifact of the fact that we live in a world where strong long-term commitments are difficult. If we suddenly found ourselves in a world where we could make strong long-term commitments, our far-mode selves would so commit, and we would thereby realize our far-mode ideals much more often.

Yes, we evolved for this world, and it is the evolutionary goal of our genes that our near-mode ideals are realized. So yes, our near-mode thinking will tend to defend the status quo by keeping it difficult to make strong long-term commitments. The actual decision to implement the strengthening of commitments will always have to be made in near mode. But we shouldn't confuse our genes' goals with what our "actual" goals are, insofar as we are trying to define the latter coherently. The hypothetical strong-commitment world seems to me to be as appropriate as the real world for the purposes of defining what we really want, so we shouldn't point to the outcomes we actually get in this world as evidence.


I plan on being at the gym three mornings a week, but if held to some kind of bulletproof contract to do so, I would be very, very tired some mornings, and not really get much out of my workout. But I just couldn't foresee that I was going to get home as late as I did Sunday night, necessitating an extra hour of sleep Monday morning.

On the other hand, had I such a contract, I would have been watching the clock much closer Sunday night.

Of course this is all fine as long as the state is not "encouraging" me to act in a better fashion.


Your argument against the view that a sequence of near and far choices is not far merely because it culminates at a point long in the future -- that argument is a straw man. Matt Simpson, below, makes my actual point more concisely and probably more effectively.


At least in the case of things like gas prices (assuming we're talking about a high tax designed to discourage use), there's some logic behind favoring it more in the future than in the present: if you spring such a thing on people suddenly, some of them will be seriously negatively impacted; if society declares its intentions for some future point instead, people have more of a chance to arrange things suitably (e.g. buy a more efficient car, or move closer to work, or get a job closer to home, or whatever).

Similar logic applies to things like eliminating the tax deduction for mortgage interest: there's no good reason for it, but (especially in this economy) many people are somewhat dependent upon it, and the upheaval that would occur if it were suddenly eliminated (namely, even more foreclosures, etc.) would probably result in a net negative for society. But by picking some future point (or perhaps phasing it out by 10% per year over 10 years, starting 2 years from now), people would have time to adapt.


Yes, the idea is infallible by definition, which wouldn't be a problem if it were well defined. But since it lacks any detail, its superficial appeal is mostly a result of its vagueness.
