A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like “happiness is a state of mind” using a semantic network:

    And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name.

    So, McDermott says, “A good test for the disciplined programmer is to try using gensyms in key places and see if he still admires his system. For example, if STATE-OF-MIND is renamed G1073. . .” then we would have IS-A(HAPPINESS, G1073) “which looks much more dubious.”

    Or as I would slightly rephrase the idea: If you substituted randomized symbols for all the suggestive English names, you would be completely unable to figure out what G1071(G1072, G1073) meant. Was the AI program meant to represent hamburgers? Apples? Happiness? Who knows? If you delete the suggestive English names, they don’t grow back.
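    A minimal sketch of this renaming test in Python (the facts, token names, and helper function are invented for illustration; this is not McDermott's code):

        import itertools
        import re

        facts = ["IS-A(HAPPINESS, STATE-OF-MIND)"]

        counter = itertools.count(1071)
        gensyms = {}

        def gensym(name):
            # Assign each distinct token a meaningless identifier like G1071.
            if name not in gensyms:
                gensyms[name] = "G" + str(next(counter))
            return gensyms[name]

        # Scramble every suggestive English token and see what is left.
        scrambled = [re.sub(r"[A-Z][A-Z-]+", lambda m: gensym(m.group()), f)
                     for f in facts]
        print(scrambled)  # ['G1071(G1072, G1073)']; the meaning does not grow back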

    Suppose a physicist tells you that “Light is waves,” and you believe the physicist. You now have a little network in your head that says:


    IS-A(LIGHT, WAVES)


    As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’ ” Suppose that instead the physicist told you, “Light is made of little curvy things.”1 Would you notice any difference of anticipated experience?

    How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate this knowledge if it were somehow deleted from my mind?”

    This is similar in spirit to scrambling the names of suggestively named LISP tokens in your AI program, and seeing if someone else can figure out what they allegedly “refer” to. It’s also similar in spirit to observing that an Artificial Arithmetician programmed to record and play back


    Plus-Of(Seven, Six) = Thirteen


    can’t regenerate the knowledge if you delete it from memory, until another human re-enters it in the database. Just as if you forgot that “light is waves,” you couldn’t get back the knowledge except the same way you got the knowledge to begin with—by asking a physicist. You couldn’t generate the knowledge for yourself, the way that physicists originally generated it.
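    To make the contrast concrete, here is a minimal sketch in Python (the names and data are invented for illustration) of a record-and-play-back arithmetician next to one that contains the procedure, and so can regenerate any deleted entry:

        # Rote storage: delete the entry and the "knowledge" is gone for good.
        memorized = {("Seven", "Six"): "Thirteen"}

        def plus_of_rote(a, b):
            return memorized[(a, b)]        # KeyError once the entry is deleted

        # Generative version: the procedure itself is stored, so any entry
        # can be regenerated at will.
        NUMERALS = ["Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven",
                    "Eight", "Nine", "Ten", "Eleven", "Twelve", "Thirteen"]

        def plus_of_generative(a, b):
            return NUMERALS[NUMERALS.index(a) + NUMERALS.index(b)]

        print(plus_of_generative("Seven", "Six"))  # Thirteen, even with no table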

    The same experiences that lead us to formulate a belief, connect that belief to other knowledge and sensory input and motor output. If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like, and you will be able to recognize it on future occasions whether it is called a “beaver” or not. But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one.

    This is the terrible danger of trying to tell an artificial intelligence facts that it could not learn for itself. It is also the terrible danger of trying to tell someone about physics that they cannot verify for themselves. For what physicists mean by “wave” is not “little squiggly thing” but a purely mathematical concept.

    As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong.2 If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”

    Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”

    The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.
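    For the concrete case mentioned, the arithmetic is short: the hypotenuse is sqrt(3^2 + 4^2) = sqrt(9 + 16) = sqrt(25) = 5.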

    What about the notion of mathematical proof? If no one had ever told it to me, would I be able to reinvent that on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?

    How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.

    A shepherd builds a counting system that works by throwing a pebble into a bucket whenever a sheep leaves the fold, and taking a pebble out whenever a sheep returns. If you, the apprentice, do not understand this system—if it is magic that works for no apparent reason—then you will not know what to do if you accidentally drop an extra pebble into the bucket. That which you cannot make yourself, you cannot remake when the situation calls for it. You cannot go back to the source, tweak one of the parameter settings, and regenerate the output, without the source. If “two plus four equals six” is a brute fact unto you, and then one of the elements changes to “five,” how are you to know that “two plus five equals seven” when you were simply told that “two plus four equals six”?
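    A minimal sketch of the pebble system in Python (the names are invented for illustration): the bucket is just a way of maintaining an invariant, and it is knowing the invariant, not the ritual, that tells you how to recover from a stray pebble.

        # Invariant: len(bucket) == number of sheep currently outside the fold.
        bucket = []

        def sheep_leaves():
            bucket.append("pebble")

        def sheep_returns():
            bucket.pop()

        def repair(sheep_actually_outside):
            # An accidentally dropped-in pebble breaks rote use of the bucket;
            # knowing the invariant, you can recount the sheep and rebuild it.
            bucket.clear()
            bucket.extend("pebble" for _ in range(sheep_actually_outside))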

    If you see a small plant that drops a seed whenever a bird passes it, it will not occur to you that you can use this plant to partially automate the sheep-counter. Though you learned something that the original maker would use to improve on their invention, you can’t go back to the source and re-create it.

    When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.

    Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.


    1 Not true, by the way.

    2 Richard Rorty, “Out of the Matrix: How the Late Philosopher Donald Davidson Showed That Reality Can’t Be an Illusion,” The Boston Globe, 2003, http://archive.boston.com/news/globe/ideas/articles/2003/10/05/out_of_the_matrix/.


    61 comments

    I make it a habit to learn as little as possible by rote, and just derive what I need when I need it. This means my knowledge is already heavily compressed, so if you start plucking out pieces of it at random, it becomes unrecoverable fairly quickly. As near as I can tell, my knowledge rarely vanishes for no good reason, though, so I have not really found this to be a handicap.

    As near as I can tell, my knowledge rarely vanishes for no good reason

    ...age will eventually remedy that. ;)

    I don't think you've understood the article. The idea of the article is that if you're able to derive it, then yes, you can regenerate it. That's what 'regenerate' means.

    I think nominull does understand it, and at one level higher than you do. He understands the principle so well that he goes and makes a tradeoff in terms of memory used vs. execution time.

    Take a symmetric matrix with a conveniently zeroed-out diagonal. You could memorize every element of the matrix (no understanding, pure rote memorization). You could memorize every element AND notice that it happens to be symmetric (understanding, which is what you seem to be thinking of). Or you could notice that it happens to be symmetric and then only memorize half the entries in the first place (nominull's approach).

    I go with nominull's approach myself...I'm just a lot sloppier about selecting what info to rote memorize.

    My interpretation: if your brain can regenerate lost information from its neighbors, but you don't actually need that, then you have an inefficient information packing system. You can improve the situation by compressing more until you can't regenerate lost information.

    However, I have some doubts about this. Deep knowledge seems to be about the connections between ideas, and I don't think you can significantly decrease information regeneration without removing the interconnections.

    My experience is that some people have an easier time memorizing by rote than others. Not all brains are wired the same. Personally I learn relations and concepts much easier and quicker than facts. But that may not be the case for everybody. It might not even be advantageous for everybody - at least not in the ancestral environment which had much less easily detectable structure than our well-structured world.

    Reminds me of the time that my daughter asked me how to solve a polynomial equation. Many moons removed from basic algebra, I had to start from scratch, and quickly ended up deriving the quadratic formula without realizing where I was going until the end. It was a satisfying experience, although there's no way to tell how much the work was guided by faint memories.

    Having recently reverse-engineered the quadratic formula myself, I can say it involves quite a few steps that would be pretty tricky to capture without a lot of time and patience, a very good intuition for algebra, or a decent guiding hand from past memories. Given how much of the structure I can recall from memory, the latter seems most likely, but it's provably doable without knowing it in the first place, so I won't dismiss that possibility :)

    It's not too hard if you remember that you can get it from completing the square. Or of course you could use calculus.
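    For reference, the completing-the-square derivation runs roughly like this (standard algebra, sketched here rather than quoted from anyone above):

        ax^2 + bx + c = 0
        x^2 + (b/a)x = -c/a
        x^2 + (b/a)x + (b/(2a))^2 = (b/(2a))^2 - c/a
        (x + b/(2a))^2 = (b^2 - 4ac) / (4a^2)
        x + b/(2a) = ±sqrt(b^2 - 4ac) / (2a)
        x = (-b ± sqrt(b^2 - 4ac)) / (2a)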

    A valuable method of learning math is to start at the beginning of recorded history and read the math-related texts that were produced by the people who made important contributions to the progression of mathematical understanding.

    By the time you get to Newton, you understand the basic concepts of everything and where it all comes from much better than if you had just seen them in a textbook or heard a lecture.

    Of course, speaking from experience, reading page after page of Euclid's proofs can be exhausting; it's hard to keep paying enough mental attention to actually understand each one before moving on to the next. :)

    Still, it does help tremendously to be able to place the knowledge in the mental context of people who actually needed and made the advances.

    I believe that this is how St. John's College teaches math (and everything else). They only use primary texts. If anyone is interested in this approach, give them a look.

    [This comment is no longer endorsed by its author]

    Sorry. I didn't see the comment immediately below this one.

    @Sharper: There's actually a school that teaches math (and other things) that way, St John's College in the US (http://en.wikipedia.org/wiki/St._John's_College%2C_U.S). Fascinating place.

    I make it a habit to learn as little as possible by rote, and just derive what I need when I need it. This means my knowledge is already heavily compressed, so if you start plucking out pieces of it at random, it becomes unrecoverable fairly quickly.

    This is why I find learning a foreign language to be extremely difficult. There's no way to derive the word for "desk" in another language from anything other than the word itself. There's no algorithm for an English-Spanish dictionary that's significantly simpler than a huge lookup table. (There's a reason it takes babies years to learn to talk!)

    I had a similar complaint, and the need to memorise a great quantity of seemingly arbitrary facts put me off learning languages and to a lesser extent history. Interestingly, it seems easier to learn words from context and use for that reason: you can regenerate the knowledge from a memory of how and when a word is used. I am also told that once you know multiple languages it becomes possible to infer from the relations between them, which is perhaps why Latin is still considered useful.

    I find that it helps to think of learning a foreign language as conducting a massive chosen-plaintext attack on encrypted communications, in which you can use differential analysis and observed regularities to make educated guesses about unknown ciphertexts.

    and to a lesser extent history

    My ability to learn history improved greatly when I stopped perceiving it as "A random collection of facts I have to memorize" and started noticing the regularities that link things together. Knowing that World War II was fought amongst major world powers around 1942 lets you infer that it was fought using automobiles and aeroplanes, and knowing that the American Revolution was fought in the late 1700s lets you infer the opposite, even if you don't know anything else specific about the wars.

    True, you can derive new information from previously learned information. But patterns like 'there were no cars in the American Revolution' aren't going to score you anything or get you radically new information. And there's no way to derive a lot of the information.

    I make it a habit to learn as little as possible by rote, and just derive what I need when I need it.

    Do realize that you're trading efficiency (as in speed of access in normal use) for that space saving in your brain. Memorizing stuff allows you to move on and save your mental deducing cycles for really new stuff.

    Back when I was memorizing the multiplication tables, I noticed that

    9 x N = 10 x (N-1) + (9 - (N-1))

    That is, 9 x 8 = 70 + 2

    So, I never memorized the 9's the same way I did all the other single digit multiplications. To this day I'm slightly slower doing math with the digit 9. The space/effort saving was worth it when I was 8 years old, but definitely not today.
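    For what it's worth, the identity checks out algebraically: 10 x (N - 1) + (9 - (N - 1)) = 10N - 10 + 9 - N + 1 = 9N.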

    I always do my 9x multiplications like this! We were taught this, though. I can't say I figured it out on my own.

    I learned my nines like that too, except I think the teacher showed us that trick. Of the things I learned personally... My tricks were more about avoiding the numbers I didn't like than being efficient.

    I could only ever remember how to add 8 to a number by adding ten and then subtracting two. I learned my 8 times tables by doubling the 4th multiple, and 7 by subtracting the base number from that. I suppose I only ever really memorized 2-6 and 12.

    Knowing how to regenerate knowledge does not mean that you only store the information in its seed/compressed form. However, if you need the room for new information, you can do away with the flat storage and keep the seed form, knowing that you can regenerate it at will.

    I sure wish I could choose what gets deleted from memory that easily.

    In my experience it is just a matter of not using the memory/skill/knowledge. I was not trying to imply it was a quick process.

    There were actually a few times (in my elementary school education) when I didn't understand why certain techniques that the teacher taught were supposed to be helpful (for reasons which I only recently figured out). The problem of subtracting 8 from 35 would be simplified like this:

    35 - 8 = 20 + (15 - 8)

    I never quite got why this made the problem "easier" to solve, until, looking back recently, I realized that I was supposed to have MEMORIZED "15 - 8 = 7!"

    At the time, I simplified it to this, instead. 35 - 8 = 30 + (5 - 8) = 20 + 10 + (-3) = 27, or, after some improvement, 35 - 8 = 30 - (8 - 5) = 30 - 3 = 20 + 10 - 3 = 27.

    Evidently, I was happier using negative numbers than I was memorizing the part of the subtraction table where I need to subtract one digit numbers from two digit numbers.

    I hated memorization.

    35 - 8 = 20 + (15 - 8)

    Wow. I've never even conceived of this (on its own, or as a simplification).

    My entire life has been the latter simplification method.

    I have a similar way, which I find simpler:
    9 x N = 10 x N - N
    That is, 9 x 8 = 10 x 8 - 8

    So, what about the notion of mathematical proof? Anyone want to give a shot at explaining how that can be regenerated?

    If you still have the corresponding axioms, it should be pretty trivial to rebuild the idea of "combine these rules together to create significantly more complex rules", and then perhaps to relabel things in to "axioms" and "proofs". Leave a kid with a box of Legos and ey'll tend to build something, so the basic combination of "build by combination" seems pretty innate :)

    If you've lost the explicit idea of axioms, but still have algebra, then you can get basic algebraic proofs, like 10X = 9X + 1X. If you play around from there, you should be able to come up with, and eventually prove, a few generalizations, and eventually you'll have a decent set of axioms. I'd expect you'd probably take a while to develop all of them.

    I doubt this is feasible to regenerate from scratch, because I don't think anyone ever generated it from scratch. Euclid's Elements was probably the first collection of rigorous proofs, but Euclid built on earlier, less rigorous ideas, which we would now recognize as invalid as proofs but better than a broad heuristic argument.

    And of course, Euclid's notion of proof wasn't as rigorous as Russell and Whitehead's.

    Dynamically_Linked: On that one, I'm having a hard time understanding what exactly is being regenerated. If it's just a matter of "systematizing the process of deducing from assumptions", then it doesn't sound hard. The question is just: what knowledge do I have before that, on which I'm supposed to come up with the concept? What's the analogue of "the sides of this triangle are 3 and 4, this angle is right, and the hypotenuse is calculable"?

    Very good post -- I think it'd be helpful to have a series of examples of knowledge being regenerated. Then people could really get your idea and use it.

    Those "meaningless" tokens aren't only used in one place, however. If you had a bunch of other facts including the tokens involved, like "waves produce interference patterns when they interact" and "light produces interference patterns when it interacts", then you can regenerate "light is waves" if it is lost.

    Similarly, while "happiness is a state of mind" is not enough to define happiness, a lot of other facts about it certainly would. The fact that it is a state of mind would also let us apply facts we know about states of mind, giving us even more information about happiness.
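    A minimal sketch of that redundancy in Python (the facts and the inference rule are invented for illustration): if the other facts survive, the deleted link can be rebuilt from them.

        # Each token is known only by the properties observed of it.
        properties = {
            "WAVES": {"produces-interference", "diffracts", "has-wavelength"},
            "LIGHT": {"produces-interference", "diffracts", "has-wavelength"},
        }

        def regenerate_is_a(a, b):
            # Re-derive IS-A(a, b) if everything we know about b is also observed of a.
            return properties[b] <= properties[a]

        # Even after the explicit fact IS-A(LIGHT, WAVES) is deleted, the link
        # can be regenerated from the surviving facts:
        print(regenerate_is_a("LIGHT", "WAVES"))  # True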

    Part of the fun of the Contact Project is trying to interpret a message that has been fully gensymmed.

    I've always been intimidated by this. I'm quite positive I couldn't regenerate the Pythagorean Theorem, but I know that I should be able to. I certainly wouldn't be able to figure out basic calculus on my own. I wish that I could, but I know that I wouldn't be able to. Are there any things we've learned from mathematicians in the past that make figuring out such things easier? Anything I can learn to make learning easier?

    Well, if you like reading things, I know of one extremely good book about the different methods and heuristics that are useful in problem-solving: George Polya's How to Solve It. I strongly recommend it. Hell, I'll mail it to you if you like.

    However, it feels to me personally that every single drop of the problem-solving and figuring-things-out ability I have comes purely from active experience solving problems and figuring things out, and not from reading books.

    Well, here's my background:
    I taught myself math from Algebra to Calculus (by "taught myself" I mean went through the Saxon Math books and learned everything without a teacher, except for the few times when I really didn't understand something, when I would go to a math teacher and ask).
    I made sure I tried to understand every single proof I read. I found that when I understood the proofs of why things worked, I would always know how to solve the problems. However, I remember thinking, every time I came across a new proof, that I wouldn't have been able to come up with it on my own, without someone teaching it to me. Or, at least, I might have been able to come up with one or two by accident, as a byproduct of something I was working on, but I really don't think I'd be able to sit down and figure out differentiation, for example, on purpose, if someone asked me to come up with a method for finding the slope of a function.
    That's what I meant when I said that I'm intimidated by this. It's not impossible that I'd figure out one of the theorems by accident, while working on something else; I just can't see myself sitting down to figure out the basic theorems of mathematics. If you think it'll help, I'll have to pick up "How to Solve It" from a library. Thanks for the advice!

    One true thing that might be applicable: usually math textbooks have 'neat' proofs. That is, proofs that, after being discovered (often quite some time ago), were cleaned up repeatedly, removing the earlier (intuitive) abstractions and adding abstractions that allow for simpler proofs (sometimes easier to understand, sometimes just shorter).

    Rather than trying to prove a theorem straight, a good intermediate step is to try to find some particular case that makes sense. Say, instead of proving the formula for the infinite sum of geometric progressions, try the infinite sum of the progression 1, 1/2, 1/4, ... Instead of proving a theorem for all integers, is it easier for powers of two?

    Also, you can try the "dual problem". Try to violate the theorem. What is holding you back?
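    For instance, the special case suggested above (the progression 1, 1/2, 1/4, ...) can be worked out directly: if S = 1 + 1/2 + 1/4 + ..., then 2S = 2 + 1 + 1/2 + ... = 2 + S, so S = 2; the same multiply-and-subtract trick points toward the general formula S = a / (1 - r) for |r| < 1.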

    The Pythagorean Theorem is just a special case of the magnitude of a vector, a.k.a. the Euclidean norm. Though, I wouldn't be able to derive that if that were deleted from my brain.
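    Spelled out: in n dimensions the Euclidean norm is |x| = sqrt(x_1^2 + x_2^2 + ... + x_n^2), and the Pythagorean Theorem is the n = 2 case, with the legs as the components and the hypotenuse as the magnitude.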

    always ask myself: "How would I regenerate this knowledge if it were deleted from my mind?"

    Gold for this.

    I feel really stupid after reading this, so thanks a lot for shedding light onto the vast canvas of my ignorance.

    I have almost no idea which of the spinning gears in my head I could regrow on my own. I'm close to being mathematically illiterate, due to bad teaching and what appears to be a personal aversion or slight inability - so I may have come up with the bucket plus pebble method and perhaps with addition, subtraction, division and possibly multiplication - but other than that I'd be lost. I'd probably never conceive of the idea of a tidy decimal system, or that it may be helpful to keep track of the number zero.

    Non-mathematical concepts, on the other hand, may be easier to regrow in some instances. Atheism, for example, seems easy to regrow if you merely have decent people-intuition, a certain willingness to go against the grain (or at least think against the grain), plus a deeply rooted aversion to hypocrisy. Once you notice how full of s*it people are (and notice that you yourself seem to share their tendencies), it's a fairly small leap of (non)faith, which would explain why so many people seem to arrive at atheism all due to their own observations and reasoning.

    I think I could also regrow the concept of evolution if I spent enough time around different animals to notice their similarities and if I was familiar with animal breeding - but it may realistically take at least a decade of being genuinely puzzled about their origin and relation to one another (without giving in to the temptation of employing a curiosity stopper needless to say). Also, having a rough concept of how incredibly old the earth is and that even landscapes and mountains shift their shape over time would have helped immensely.

    It feels so hard to understand why it took almost 10,000 years for two human brains to make a spark and come up with the concept of evolution. How did smart and curious people who tended to animals for a living, and who knew about the intricacies of artificial breeding, not see the slightly unintuitive but nonetheless simple implications of what they were doing?

    Was it seriously just the fault of the all-purpose curiosity stopper, superstition, or was it some other deeply ingrained human bias? It's unbelievable how long no one realized what life actually is all about. And then, all of a sudden, two people caught the right spark at the same point in history, independently of each other. So apparently biologists needed to be impacted by many vital ideas (geological time, economics) to come up with something that a really sharp and observant person could realistically have figured out 10,000 years earlier.

    And who knows, maybe some people thought of it much earlier and left no trace, due to illiteracy or fear of losing their social status or even their lives. Come to think of it, most people in most places during most of the past would have gotten their brilliant head put on a stick if they had actually voiced the unthinkable truth and dared to deflate the ever-needy, morbidly obese ego of Homo sapiens sapiens.

    It feels so hard to understand why it took almost 10,000 years for two human brains to make a spark and come up with the concept of evolution. How did smart and curious people who tended to animals for a living, and who knew about the intricacies of artificial breeding, not see the slightly unintuitive but nonetheless simple implications of what they were doing?

    Just because you aren't aware of it, doesn't mean it didn't happen : )

    Back when I was a teenager, I distinctly remember wondering about how one would go about calculating the distance traveled by a constantly accelerating object during a given period of time. Of course, life -- as it is wont to do -- quickly distracted me, and I didn't think about the problem again until years later, when I learnt about integration and thought to myself, "Oh, so that's how you'd do it!" Now, I don't think I would be able to regenerate integral calculus all on my own, but I know I'm at least observant enough to notice that something was missing -- or at least I was when I was 15 -- and I think that that's an important first step; the answers that we find are strictly limited by the questions that we ask. As a side note, my cousin was, at the age of 6, able to derive multiplication from addition all on his own. He is made of win.
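    For the record, the calculation the commenter presumably had in mind is the standard one: with constant acceleration a and initial speed v0, the distance covered in time t is the integral of the speed, d = integral from 0 to t of (v0 + a*s) ds = v0*t + (1/2)*a*t^2.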

    To Mazur’s consternation, the simple test of conceptual understanding showed that his students had not grasped the basic ideas of his physics course: two-thirds of them were modern Aristotelians. “The students did well on textbook-style problems,” he explains. “They had a bag of tricks, formulas to apply. But that was solving problems by rote. They floundered on the simple word problems, which demanded a real understanding of the concepts behind the formulas.”...Serendipity provided the breakthrough he needed. Reviewing the test of conceptual understanding, Mazur twice tried to explain one of its questions to the class, but the students remained obstinately confused. “Then I did something I had never done in my teaching career,” he recalls. “I said, ‘Why don’t you discuss it with each other?’” Immediately, the lecture hall was abuzz as 150 students started talking to each other in one-on-one conversations about the puzzling question. “It was complete chaos,” says Mazur. “But within three minutes, they had figured it out. That was very surprising to me—I had just spent 10 minutes trying to explain this. But the class said, ‘OK, We’ve got it, let’s move on.’ “Here’s what happened,” he continues. “First, when one student has the right answer and the other doesn’t, the first one is more likely to convince the second—it’s hard to talk someone into the wrong answer when they have the right one. More important, a fellow student is more likely to reach them than Professor Mazur—and this is the crux of the method.

    ...There’s also better retention of knowledge. “In a traditional physics course, two months after taking the final exam, people are back to where they were before taking the course,” Mazur notes. “It’s shocking.” (Concentrators are an exception to this, as subsequent courses reinforce their knowledge base.) Peer-instructed students who’ve actively argued for and explained their understanding of scientific concepts hold onto their knowledge longer. Another benefit is cultivating more scientists. A comparison of intended and actual concentrators in STEM (science, technology, engineering, mathematics) fields indicates that those taught interactively are only half as likely to change to a non-STEM discipline as students in traditional courses.

    http://harvardmagazine.com/2012/03/twilight-of-the-lecture

    Sometimes it's good to learn things by rote, too, as long as you understand it later. For example, while I was reading the Intuitive Guide to Bayesian Reasoning, I sometimes wished that there was something that I could memorize, instead of having to understand the concept, and then figure out how to apply it, and then understand what the answer meant.

    I agree, although I sense there's some disagreement on the meaning of "learning by rote".

    Learning by rote can be a tactical move in a larger strategy. In introductory rhetoric, I wasn't retaining much from the lectures until I sat down to memorize the lists of tropes and figures of speech. After that, every time the lectures mentioned a trope or other, even just in passing, the whole lesson stuck better.

    Rote memorization prepares an array of "hooks" for lessons to attach to.

    This is not too different from what I did as a teenager in school. I separated out the facts as "axioms" and "theorems", noting that the theorems can be deduced from the axioms, should I forget them. I would try to figure out how to deduce the "redundant" theorems from my axioms, which would help me remember them. As a simple example, the Law of Conservation of Momentum is redundant and easily derived from "every force has an equal and opposite force" -- simply multiply by time. Naturally, I also immediately deduced a conservation law for center of mass -- multiply by time again. I also noted places where two facts are redundant, but I couldn't decide which was the more fundamental. Mostly I did this because I know that my memory for boring disconnected facts is rather poor -- it "shouldn't" be easier for me to remember how to derive a fact than the fact itself, but often it is anyway.
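    Spelling out the momentum derivation hinted at above: if the two forces are equal and opposite at every instant, F12 = -F21, then multiplying by a small time interval dt gives F12*dt = -F21*dt, i.e. dp2 = -dp1, so d(p1 + p2) = 0 and the total momentum stays constant.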

    The idea of a concept having or being a "source" seems odd to me. There are many ways of looking at the same concept or idea; oftentimes, the key to finding a new path is viewing an idea in a different way and seeing how it "pours", as you put it. The problem as I see it is that there are often many ways of deriving any particular idea, and no discernible reason to call any particular derivation the source. I find that my mind seems to work like a highly interconnected network, and deriving something is kind of like solving a system of equations, so that many missing pieces can be regenerated using the remaining pieces. My mind seems less like an ordered hierarchy and more like a graph in which ideas/concepts are often not individual nodes but instead highly connected subgraphs within the larger graph, such that there is the potential for vast overlap between concepts, no obvious ordering, and no obvious way to know when you truly "contain" all of some concept. I do understand that, at least for math, ability to derive something is a good measure for some level of understanding, but even within math there are many deep theorems or concepts that I hardly believe that I truly understand until I have analyzed (even if only briefly in my head) examples in which the theorem applies and (often more importantly, imo) examples in which the theorem does not apply. Even then, a new theorem or novel way of looking at it may enhance my understanding of the concept even further. The more I learn about the math, the more connections I make between different and even seemingly disparate topics. I don't see how to differentiate between 1) "containing" a thought and new connections "changing" it and 2) gaining new connections such that you contain more of the "source" for the thought.

    Just my two cents.

    I'm inclined to agree.

    This comes up again in ways that I care more about in the Metaethics Sequence, where much is made of the distinction between normal ("instrumental") values and the so-called "terminal" values that are presumed to be their source. In both cases it seems to me that a directional tree is being superimposed on what's actually a nondirectional network, and the sense of directionality is an illusion born of limited perspective.

    That said, I'm not sure it makes much difference in practical terms.

    I haven't read any of that yet, but it sounds interesting. I'm commenting on articles as I read them, going through the sequences as they are listed on the sequences page.

    I think it makes a practical difference in actually understanding when you understand something. The practical advice given is to "contain" the "source" for each thought. The trouble is that I don't see how to understand when such a thing occurs, so the practical advice doesn't mean much to me. I don't see how to apply the advice given, but if I could I most definitely would, because I wish to understand everything I know. In part, writing my post was an attempt to make clear to myself why I didn't understand what was being said. I'm still kind of hoping I'm missing something important, because it would be awesome to have a better process for understanding what I understand.

    I expect that in practice, the advice to "contain the source for each thought" can be generalized into the advice to understand various paths to derive that thought and understand what those paths depend on, even if we discard the idea that there's some uniquely specifiable "source".

    Which is why I'm not sure it makes much difference.

    That said, I may not be the best guy to talk about this, as I'm not especially sympathetic to this whole "Truly Part of You" line of reasoning in the first place (as I think I mentioned in a comment somewhere in this sequence of posts a few years ago, back when I was reading through the sequences and commenting on articles as I went along, so you may come across it in your readings).

    Hmm, perhaps I was reading too much into it, then. I already do that part, largely because I hate memorization and can fairly easily retain facts when they are within a conceptual framework.

    It's intuitive that better understanding some concept or idea leads to better updating, as well as a better ability to see alternative routes involving the idea, but it seemed like there was something more being implied; it seemed like he was making a special point of some plateau or milestone for "containment" of an idea, and I didn't understand what that meant. But, as I said, I was probably reading too much into it. Thanks, this was a pleasant discussion :)

    "Could I regenerate this knowledge if it were somehow deleted from my mind?"

    Epistemologically, that's my biggest problem with religion-as-morality, along with using anything else that qualifies as "fiction" as a primary source of philosophy. One of my early heuristic tests to determine if a given religious individual is within reach of reason is to ask them how they think they'd be able to recreate their religion if they'd never received education/indoctrination in that religion (makes a nice lead-in to "do people who've never heard of your religion go to hell?" as well). The possibles will at least TRY to imply that gods are directly inferable from reality (though Intelligent Design is not a positive step, at least it shows they think reality is real); the lost causes give a supernatural solution ("Insert-God-Here wouldn't allow that to happen! Or if He did, He'd just make more holy books!").

    If such a person's justification for morality is subjective and they just don't care that no part of it is even conceivably objective... what does that say for the relationship of any of their moral conclusions to reality?

    I think that is why biology students like to dissect animals. Our relatives think it gross, but when you see with your own eyes that a body consists of organs and you trace the links between them, it feels so great...

    A bit off-topic:

    "A wheel that can be turned though nothing turns with it, is not part of the mechanism" - what about a gyroscope wheel that is a part of a stabilizing mechanism?

    This goes to show, IMHO, two things: one should be extremely careful with one's intuitions, and examples must not be taken too far.

    If you really wanted to nitpick, you could also point out non-driven wheels on cars. Such a wheel doesn't turn anything useful itself (it is turned but does not turn anything), but it still successfully prevents one end of a vehicle from dragging on the ground, which is its actual purpose. But we're merely amusing ourselves with literalistic counter-examples at this point, as I see you are well aware.

    “What I cannot create, I do not understand.” - Richard Feynman.

    This feels very important.

    Suppose that something *was* deleted. What was it? What am I failing to notice? 

    Maybe learning to 'regenerate' the knowledge that I currently possess is going to help me 'regenerate' the knowledge that 'was deleted'.

    Once I had a dispute: I said that in a world with the internet you don't need to know and remember facts or principles, because you can just google them. My opponent said that with this method you don't have a general picture and understanding in your head. Now I understand that I was wrong.

    @Eliezer, some interesting points in the article; I will criticize what frustrated me:

    > If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like,
    > and you will be able to recognize it on future occasions whether it is called a “beaver” or not.
    > But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,”
    > you may not be able to recognize a beaver when you see one.

    Things do not have intrinsic meaning; rather, meaning is an emergent property of things in relation to each other: for a brain, an image of a beaver and the sound "beaver" are just meaningless patterns of electrical signals.

    Through experiencing reality, the brain learns to associate patterns based on similarity, co-occurrence and so on, and labels these clusters with handles in order to communicate. "Meaning" is the entire cluster itself, which in turn bears meaning in relation to other clusters.

    If you try to single a node out of the cluster, you soon find that it loses all meaning and reverts to meaningless noise.

    > G1071(G1072, G1073)

    Maybe the above does not seem dumb now? Experiencing reality is basically entering and updating relationships that eventually make sense as a whole in a system.

    I feel there is a huge difference in our models of reality:

    In my model everything is self-referential, just one big graph where nodes barely exist (only aliases for the whole graph itself). There is no ground to knowledge, nothing ultimate. The only thing we have
    is this self-referential map, from which we infer a non-phenomenological territory.

    You seem to think the territory contains beavers; I claim beavers exist only in the map, as a block arbitrarily carved out of our phenomenological experience by our brain, as if it were the only way to carve a concept out of experience and not one of infinitely many valid ways (e.g. considering the beaver and the air around it, rather than having a concept of just a beaver with no air), and as if only part of experience could be considered without being impacted by the whole of experience (i.e. there is no living beaver without air).

    This view is very much influenced by emptiness, by the way.