I’ve devoted a lot of attention on this blog over the last year to near-far effects, officially “Construal Level Theory.” My summary: all near aspects tend to bring other near aspects to mind, and all far aspects tend to bring other far aspects to mind. The aspects:

Thanks

I don't understand what made it go away, but I've added it back in.

I think there used to be a nice picture with near-mode and far-mode words, but I can't see it now.

I think a better division would be that "far" is logic, geometry, algebra, discrete math, exactly-solvable systems, and integers, while "near" is numerical analysis, statistics, differential equations, function optimization, and real numbers. That's how math divides up historically: empirical science wants the "near" math, and the "far" math is all older and more abstract.

Heidegger's Being and Time is itself in very, very far mode. His entire philosophy tries to reconceptualize all of human life into a Platonically perfect farness.

Though one could say the same of many German philosophers. In philosophy, endorsing far mode is an applause light.

The Go programs now produce a strong game on the 19x19 board as well, but are still a long way from the grandmaster level. And even if what you say is true (Go gets mastered by an expert system), it would only show that Go was not AGI complete. My original comments stand.

In fact, just recently, a post on 'Less Wrong' pointed to a paper strongly hinting that probability theory is merely a special case of algorithmic information theory.

http://lesswrong.com/r/disc...

The paper seems to say that probabilities can be converted to complexities.

"Conditional probabilities between variables become conditional complexities between strings"

Since complexity and similarity are the fundamental metrics of algorithmic information theory, and categorization is the correct method to operate on these metrics (complexity and similarity), this appears to confirm what I've long been claiming on this blog.

Bayes/probability is not the true foundation of rationality; categorization/algorithmic information theory is.
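The correspondence the quoted sentence gestures at can be made concrete. The following is my own toy illustration, not code from the paper: it uses Shannon code lengths, -log2 p, as the computable stand-in that the coding theorem connects to Kolmogorov complexity, so "probabilities become complexities" turns into "probabilities become bit lengths," and conditioning turns into subtraction of lengths.

```python
import math

def code_length(p):
    """Ideal code length in bits for an event of probability p.

    The coding theorem of algorithmic information theory says the
    Kolmogorov complexity K(x) is within an additive constant of
    -log2 m(x), where m is the universal distribution.  This toy
    uses the computable Shannon analogue, -log2 p.
    """
    return -math.log2(p)

# A toy distribution over two-bit strings and its code lengths.
probs = {"00": 0.5, "01": 0.25, "10": 0.125, "11": 0.125}
lengths = {s: code_length(p) for s, p in probs.items()}
# lengths == {"00": 1.0, "01": 2.0, "10": 3.0, "11": 3.0}

# A conditional probability becomes a *difference* of code lengths,
# mirroring the chain rule K(y|x) ~= K(x,y) - K(x):
p_joint = 0.125   # P(x, y) for some particular x, y
p_x = 0.5         # P(x)
cond_bits = code_length(p_joint) - code_length(p_x)   # -log2 P(y|x)
```

The subtraction in the last line is the sense in which "conditional probabilities between variables become conditional complexities between strings."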

> This is clearly seen in the game of Go, where the Monte Carlo methods now produce a strong game for the scaled down boards, but don't scale to the full sized boards (19x19).

Monte Carlo methods are working just fine on 19x19, if you bother to check. Won't be more than another decade or two before Go goes the way of chess.
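For readers wondering what "Monte Carlo methods" means here: the core idea, stripped of the tree search (UCT/MCTS) that actual Go programs layer on top, is just scoring each legal move by the win rate of random playouts. Here is a minimal sketch for single-heap Nim rather than Go; it is a hypothetical toy, assuming take-1-to-3, last-object-wins rules, and is not how any real Go engine is written.

```python
import random

def playout(heap, player):
    """Finish a game of single-heap Nim (take 1-3 objects per turn;
    taking the last object wins) with uniformly random moves, and
    return the winning player (0 or 1)."""
    while True:
        heap -= random.randint(1, min(3, heap))
        if heap == 0:
            return player
        player = 1 - player

def best_move(heap, n_playouts=2000):
    """Flat Monte Carlo: estimate each legal move's win rate for
    player 0 by random playouts and pick the highest-scoring move."""
    scores = {}
    for take in range(1, min(3, heap) + 1):
        if take == heap:        # taking the last object wins outright
            return take
        # After player 0 takes, it is player 1's turn in the playout.
        wins = sum(playout(heap - take, 1) == 0 for _ in range(n_playouts))
        scores[take] = wins / n_playouts
    return max(scores, key=scores.get)
```

From a heap of 5 this reliably picks taking 1, the game-theoretically correct move (leave the opponent a multiple of 4), purely from playout statistics; the Go programs under discussion scale the same playout-counting idea up by adding tree search to focus the playouts.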

I have only been vaguely paying attention to the near-far posts but reading the list I was shocked at how "Near" I am.

Heidegger's distinction in Being and Time between present-at-hand (e.g. thinking abstractly about a doorknob) and ready-to-hand (actually using the doorknob) fits nicely into this distinction too, particularly as he says that the abstract doorknob retreats when we use it, i.e. the two modes of thinking are to some extent incommensurable. Incommensurable in the same way that Hamlet lamented economy in a moral context ("Thrift, thrift, Horatio! The funeral baked meats / Did coldly furnish forth the marriage tables"), and in the same way that no one talks about prices at art galleries.

There are different types of math. I would place math such as probability theory and decision theory as being near, and math such as categorization and information theory as far. See the pattern? Algebra is near. Set theory is far. Program code is near. Ontology/domain models are far.

The cognitive blindness of you Less Wrong folks has completely bamboozled you all. Bayes is not the foundation of rationality. You only believe it is because your minds are obviously tuned to near mode and you don't understand far mode.

I've told you all once, I've told you all a thousand times: categorization (far mode) is the real foundation of rationality, and Bayes is just a special case. The Occam prior is uncomputable, and approximations such as Monte Carlo methods don't scale. This is clearly seen in the game of Go, where the Monte Carlo methods now produce a strong game for the scaled down boards, but don't scale to the full sized boards (19x19). Why? Because no non-sentient mechanical (Bayesian) method can ever approximate the Occam prior - only categorization/far mode/sentience/analogy can do it. You guys just don't get it.
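Whatever one makes of the sentience rhetoric, the uncomputability claim itself is standard: the exact Occam (Solomonoff) prior requires knowing which programs halt, and even a crude brute-force approximation must weigh every short program, a set that doubles with each added bit. A toy sketch of that bookkeeping follows; it is illustrative only, and the "programs" here are just bitstrings.

```python
def num_programs(max_len):
    """Count the distinct binary 'programs' of length 1..max_len that
    a brute-force approximation of the universal prior would have to
    enumerate: 2 + 4 + ... + 2**max_len = 2**(max_len + 1) - 2."""
    return sum(2 ** l for l in range(1, max_len + 1))

def prior_weight(program):
    """One program's contribution under the 2**-length universal
    prior.  The weight itself is trivial to compute; the uncomputable
    part is deciding whether the program halts and what it outputs."""
    return 2.0 ** -len(program)
```

Ten bits already means over two thousand candidate programs, and fifty bits means about 10**15, which is the exponential blowup behind any "doesn't scale" complaint about direct approximation.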

I'm thinking of starting a Near Party. Any suggestions for insults to lob at Far-heads?

This strikes me as not dissimilar to the work of Carol Gilligan and other feminist ethicists, who question the value of ethical systems based on abstract reasoning.

That said, I'm always suspicious of the value of setting up dichotomies.

From what I can tell, it's that people are more likely to prefer/think about/decide using/whatever the things in the "Far" category when thinking about situations distant in time/place/etc, and vice-versa for the others. At least some of them seem to be backed by experimental evidence.

billswift - My question was "What determined which things went where in your boxes of coloured words?", not the thing about math. (I can't argue about a math classification until I know what the heck RobinHanson means by "near" and "far" ... I still ignore every discussion that's grounded in this categorization.)

Sorry if I didn't make that clear.

What I want is a journal subscription, it seems. Still, I found this, which outlines experiments that cover a lot of the cases in your lists (although I can't seem to find anything for several of the puzzling ones on your list).

The second link (on the word "summarized") is broken because it is listed twice.