Use the Native Architecture

Imagine writing two versions of the same computer program. The first represents its integers as 32-bit binary numbers. The second stores its numbers in base 10, as ASCII strings with each byte holding one digit.

The second version has its upsides. Thirty-two-bit numbers max out around four billion, but you can keep tacking digits onto the string until you’re out of memory.

That said, the program that uses 32-bit integers runs faster because it uses the native architecture of the CPU. The CPU was designed with this more compact format for numbers in mind, with special-purpose circuits like 32-bit adders.
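To make the contrast concrete, here is a minimal Python sketch (mine, not from the original post) of what addition might look like in the digit-string version. The native-integer version is a single hardware add; this one has to loop over every digit.

```python
def add_digit_strings(a: str, b: str) -> str:
    """Schoolbook addition on base-10 ASCII digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):  # least significant digit first
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digit_strings("4294967295", "1"))  # "4294967296" -- past the 32-bit ceiling
```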

The same principle applies to using one’s brain:  Some things the brain can do quickly and intuitively, and some things the brain has to emulate using many more of the brain’s native operations.  Sometimes thinking in metaphors is a good idea, if you’re human.

In particular, visualizing things is part of the brain’s native architecture, but abstract symbolic manipulation has to be learned.  Thus, visualizing mathematics is usually a good idea.

When was the last time you made a sign error?

When was the last time you visualized something upside-down by mistake?

I thought so.

Sometimes it’s not just a question of making mistakes, but of not seeing the answers at all, or seeing inelegant answers when there are more elegant ones to be found.

While I was working on AI with Eliezer last year, visualizing mathematics was responsible for some of my larger successes, and failing to do so was responsible for some of my larger mistakes.

One example of this is the incident that Eliezer recounted as follows:

I once had an exchange which sticks in my mind, and illustrates this point fairly well.  I was pondering utility functions, and said:  "Utility functions are unique up to a positive affine transformation; what kind of information does that preserve? It preserves ordering, but it’s more than just that. It doesn’t preserve proportions…"  And the one who was listening, acting as my person-to-bounce-ideas-off-of, said, "It preserves relative intervals."  And lo, I immediately knew exactly what it meant that the information in a utility function consisted of proportions between intervals between outcomes.

But the flip side of this is that any time I spent studying things like evolutionary biology, evolutionary psychology, neuroscience, cognitive psychology, heuristics and biases, etcetera etcetera, I did not spend studying math, and so I did not know off the top of my head that an affine transformation preserves relative intervals.

Actually, this wasn’t something I knew off the top of my head.  Eliezer had needed to define the word "affine" for me right before that.  I had not studied much linear algebra before working for SIAI. Instead, I instinctively tried to visualize a positive affine transformation.

I visualized positive affine transformations as ways to move and uniformly stretch a rubber band with some ink-blots on it.  If you visualize that, you will *see* that positive affine transformations preserve relative intervals. It didn’t so much take prior knowledge of mathematics, as prior experience coming up with good mathematical visualizations.
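To see the same thing algebraically: if f(x) = ax + b with a > 0, then f(y) − f(x) = a(y − x), so every interval gets stretched by the same factor a and any ratio of intervals comes out unchanged. A small numeric check in Python (the particular values are just illustrative):

```python
# A positive affine map f(x) = a*x + b should leave ratios of intervals alone.
def relative_interval(x, y, z):
    return (y - x) / (z - y)

a, b = 2.5, -7.0            # any a > 0 and any b will do (illustrative values)
f = lambda x: a * x + b

x, y, z = 1.0, 4.0, 6.0     # three arbitrary outcomes
print(relative_interval(x, y, z))           # 1.5
print(relative_interval(f(x), f(y), f(z)))  # 1.5 again: the ratio survives
```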

My instinct doesn’t always get triggered when it should.  I recall another situation in which Eliezer and I were trying to prove that a certain infinite series would converge.  So I did the usual thing I do in math contests and turned into an algebra ninja.  (You’re an algebra ninja when you solve a problem so fast you don’t know what hit it.)

It took me two whole sheets of paper but finally:  "…and then we upper bound log(x) with x-1…and that gives us a sum of squares, so it’s less than C times pi squared over six.  OK, it converges!"  Eliezer was still sitting there thinking.  Finally, he drew a simple picture which explained everything about why the infinite series converged, told us what it actually converged to, and gave us a much clearer understanding of the whole problem.
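The post doesn’t say which series this was, so take the following as a purely hypothetical illustration of the bounding step rather than a reconstruction: a sum like Σ log(1 + C/n²) yields to exactly that move, since log(x) ≤ x − 1 gives log(1 + C/n²) ≤ C/n², and Σ 1/n² = π²/6.

```python
import math

# Hypothetical example of the bounding step described above (the actual
# series from the anecdote isn't given).  Each term log(1 + C/n**2) is at
# most C/n**2 because log(x) <= x - 1, so the partial sums stay below
# C * pi**2 / 6.
C = 3.0
partial = sum(math.log(1 + C / n**2) for n in range(1, 100_000))
bound = C * math.pi**2 / 6
print(partial, "<", bound)  # roughly 3.05 < 4.93
```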

The principle of "use the native architecture" extends beyond visualizing mathematics.  Back in my senior year of high school, Eliezer once mentioned to me that Chinese speakers were able to memorize longer strings of digits because each digit is a single syllable in Chinese.  As a computer programmer, it occurred to me that there was nothing stopping me from picking another encoding – and I have perfect pitch, so I picked musical notes.  Middle C is 1, the D above that is 2, and so on up the scale; 0 is the B below Middle C.
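A quick sketch of that encoding in Python. The specific octave labels below are my assumption; the post only pins down Middle C as 1, the notes walking up the scale from there, and the B below Middle C as 0.

```python
# Digit-to-pitch table: 1 = Middle C (C4), 2 = D4, ... up the C-major scale,
# 0 = the B just below Middle C.  Octave numbers are an assumption for
# illustration, not something the post specifies.
NOTE_FOR_DIGIT = {
    "0": "B3", "1": "C4", "2": "D4", "3": "E4", "4": "F4",
    "5": "G4", "6": "A4", "7": "B4", "8": "C5", "9": "D5",
}

def digits_to_melody(digits: str) -> list[str]:
    """Turn a digit string into the sequence of pitches to remember."""
    return [NOTE_FOR_DIGIT[d] for d in digits]

print(digits_to_melody("31415926"))
# ['E4', 'C4', 'F4', 'C4', 'G4', 'D5', 'D4', 'A4']
```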

Thus, when my psychology teacher put up a string of twenty digits on the board and asked us to memorize them, I was able to do it.  In fact, I still know that string of digits, as well as several phone numbers I used this trick on (though I stopped bothering once I got a programmable cellphone).

The Löb’s Theorem cartoon was drawn on the theory that the brain has native architecture for tracking people’s opinions, and would find it easier to visualize the difference between someone’s opinion and someone’s opinion about someone’s opinion, than to make the corresponding distinction in formal systems.  Hence representing Peano Arithmetic as a smiley face.

When thinking becomes difficult and unintuitive, it may be a good idea to look for a metaphor which maps the subject onto a more native representation.  Though metaphors can break down – my examples are from domains where the mapping was exact.  When metaphors break down, you have to pause; sometimes the mismatch is fixable, and sometimes it’s not.

  • http://causalityrelay.wordpress.com/ Vladimir Nesov

    Marcello: The Löb’s Theorem cartoon was drawn on the theory that the brain has native architecture for tracking people’s opinions, and would find it easier to visualize the difference between someone’s opinion and someone’s opinion about someone’s opinion, than to make the corresponding distinction in formal systems.

    It sure complicated the math by dispersing it all over the place. Having compact symbolic notation lets you imagine graph-like models or type-inference processes right in the context of the single sheet of paper where the math is written, while navigating cartoons takes time and bigger shifts of context, making it harder to follow. It helps to have a single binding context for more difficult proofs, or to modularize the proof so that everything doesn’t need to be in mind at once. In my experience, most of the time spent solving a moderately hard problem goes into building up a library of transformations between alternative visualizations and across the freedoms of the problem, so that at some point steps toward the solution just reveal themselves. Build a map, and then walk it.

  • steven

    I’ve always thought that most math texts do an awful job at this. Way too much symbol manipulation, way too little visual intuition-building. Either that or it’s a learning-style thing.

  • anon

    Of course, we should recognize that visualizing a problem does not necessarily make things easier for everyone. I, for one, have a harder time understanding a problem or its solution if I have to think through a specified metaphor, much less a graphic visualization (it usually makes everything even more blurry for me). I do better if it cuts straight to the point (in universally accepted math symbols and equations, which are unambiguous). Sometimes I do try to pick an example that uniquely helps me understand better (almost never a visual one, though). I doubt there is a universal ‘native architecture’.

  • http://www.strangedoctrines.com Michael Drake

    “when my psychology teacher put up a string of twenty digits on the board and asked us to memorize them, I was able to do it. ”

    Did you use rhythms or add “lyrics” to the tunes?

  • anon

    It sure complicated the math by dispersing it all over the place.

    On the contrary, two-dimensional “boxed” representations of complicated expressions may be more intuitive than conventional math layout. Here is one paper on Citeseer which uses such a notation to analyze the computer proof of the Robbins-Boolean problem.

  • Floccina

    Math-U-See seems interesting.
    http://www.mathusee.com/

  • Jadagul

    Steven, math texts aren’t supposed to be a good way of visualizing the argument. They’re supposed to be a skeleton; every prof I’ve talked to agrees that you’re basically supposed to be rewriting any argument you care about as you read it. I know that if I find an argument confusing I can generally figure it out by starting at the top and writing it out as I would notes for a proof I’m working on myself.

    The math text isn’t supposed to be the understandable version; it’s supposed to be the cliff notes so you can make your own understandable version.

  • http://elder-gods.org/~larry/ Larry D’Anna

    Jadagul: I agree. You aren’t really reading a math text if you don’t have a pencil in your hand and plenty of scratch paper.

  • Douglas Knight

    I don’t see how Jadagul is disagreeing with steven. Math books give too much detail in proofs; rather, they give the wrong details. They should tell you how to reconstruct the proof, not give you touchstones. Visualization is a part of that.

  • Jadagul

    Douglas, different people visualize in different ways. The textbook tries to give you enough details that you can reconstruct the argument, while giving you few enough that you have to actually reconstruct it, which forces you to frame it in a way that makes sense to you. (I avoid the word ‘visualize’ because there never seems to be much that’s actually visual about the way I understand math proofs; this is sort of the point).

  • anon

    Jadagul, do you have any references detailing the principles of good mathematical visualization, or providing case studies/examples? It seems that mathematicians have had no interest in documenting this, since the closest thing that comes to mind is cognitive-science research, including Lakoff and Nunez’s controversial work, Where Mathematics Comes From.

  • mjgeddes

    The underlying power of metaphors comes from analogy, which, I claim, is all of intelligence, the whole basis for intelligence – that is to say, all other forms of reasoning (such as deductive/predicate, Bayesian/probabilistic, etc.) are merely special cases of analogy: I claim that analogy formation is beyond the scope of Bayes.

    That’s a big claim (and remember, readers, you heard it explicitly and publicly stated here by me first). Although you and Eliezer no doubt think that Bayesian reasoning is more general than analogy formation, there is little basis for your beliefs – in fact, he and you have both got it the wrong way around. (It’s Bayes that’s the special case of analogy formation, not vice versa.)

    Further, analogy formation is closely related to ontology merging, the ability to communicate (or ‘map’) a valid concept from one knowledge domain to another novel knowledge domain. I repeat my big claim: this is all of intelligence; ontology/knowledge representation is all of the AI problem.

    Refer to the detailed discussion by Steven Pinker in his new book ‘The Stuff of Thought’ on the power of metaphor/analogy.

    Incidentally, here’s a pro tip for you AI wannabes: ontology is the concrete version of pure math (i.e., ontologies are ‘mathematical artifacts’, or, to use an analogy/metaphor, ontologies are to pure math as physical objects are to physical laws). I realized this myself months ago, but I was very pleased to get confirmation: first from Pinker, then from references to other philosophers who had independently realized this:

    Mathematical Beauty!

    “Twentieth-century French philosopher Alain Badiou claims that ontology is mathematics. Badiou also believes in deep connections between math, poetry and philosophy.”

  • Dreamer

    Yes, absolutely. This is something I think about a lot, and as a mathematician, my capacity to visualise mathematical ideas is exactly where I pin the physical location of my maths talent (metaphorically speaking, of course). Particular agreement with Jadagul on how to read a textbook.

    I’m going to throw it into the air that maybe a computer-science way of understanding this is that difficult mathematical ideas are somehow “large” in a memory sense, and talented mathematicians are mentally equipped with very large “mathematics buffers”. In my experience of teaching mathematics to those who are struggling with high-school level stuff, the problem seems intuitively that they can’t fit an entire mathematical idea into their head at once.

  • bob

    Marcello,

    I recommend the mural “Math” by Liz Mitchell, an artist and math teacher in Oakland, California.

  • Pingback: Reading, writing, and thinking, with your brain | Compass Rose