Beware Concept Intuitions

My dearest colleague Bryan Caplan has broad, solid training, penetrating insight, and a laser-like focus on the important questions.  But Bryan shares an all-too-common intellectual flaw with other very smart folks: he trusts his concept intuitions way too much.

Our minds come built with concepts that let us categorize and organize the world we see.  Those concepts evolved to be useful in the world of our ancestors, and we expect them to reflect real, important, and consistent patterns of experience in that ancestral world.  Such concepts are surely far from random.

Nevertheless, we have little reason to think that our evolved concepts map directly and simply onto the fundamental categories of the universe, whatever those may be.  In particular, we have little reason to believe that categories that seem to us disjoint cannot in reality overlap.   For example:

  • Bryan Caplan’s intuition tells him it is obvious that “mind” and “matter” are disjoint categories, and cannot overlap; nothing could be both mind and matter.  Thus he thinks he knows, based only on this conceptual consideration, that conscious intelligent machines or emulations are impossible.
  • Bryan’s intuition tells him it is obvious that “is” and “ought” claims are distinct categories, and no ought claim could ever be justified by any set of is claims.   Since Bryan is sure he knows some ought claims that are true, he concludes he has a way to know things that doesn’t come via info about the world.
  • The brilliant David Chalmers (and others) thinks it obvious that the category of things that “feel” is distinct from the category of things that can “cause” other things, which to him implies a deep puzzle: why can we humans feel, in addition to participating in cause-and-effect interactions?  Folks like Chalmers are sure we know we can feel, but hold that the conceptual distinctness of feeling implies this info does not come to us via our causal relations.  They conclude we have ways of knowing independent of our causal interactions.
  • The very smart Eliezer Yudkowsky, my once co-blogger, and others in his research group, think it obvious that “intelligence” tech is so conceptually distinct from other tech that devices that embody it can quickly explode to take over the world; our very different history with other tech thus seems largely irrelevant to them.
  • Once upon a time many now-quaint conclusions were thought to follow from the conceptual distinctness of “living” vs. “dead”, or “spiritual” vs. “material”.

Yes, categories such as “mind”, “matter”, “is”, “ought”, “cause”, and “feel” are powerful concepts that helped our ancestors to better organize their experiences.  But this usefulness is just not a strong enough basis on which to make sweeping conclusions about what must or cannot be true of all of reality, even of parts, depths, and possibilities with which our ancestors never came into contact.  The categories in your head contain useful hints about what you might expect to see, but they simply cannot tell you what you must or can’t see; for that you have to actually look at the world out there.

On reflection, it seems to me quite possible that some real things are both mind and matter, that some claims are both is and ought, and that real things naturally both cause and feel.  And it seems to me that our theory of info, even if tentative, is the most well established theory we have.  It suggests an info fundamentalism: all that we know that could have been otherwise, even about ourselves, comes via our causal contact with what is; we have no good reason to think we have some other special ways of knowing.

  • to

    Between reading your recent posts (signalling, not simulations) and watching one of my kids play a sims game, I’m now thinking that our interactions may be incredibly complicated, but we aren’t. I’m getting much more comfortable with the idea that not only can we create beings comparable to or better than us, but that we are beings in a created place. Does thinking about our cognitive limitations and crude programming have the same effect on other Hanson readers?

  • Jason Brennan

    Hmmm. Did Chalmers say that? Can you provide a citation for where he said that? That sounds like a cartoon version of the things I’ve read of his.

    • Tyrrell McAllister

      Chalmers (last I checked) doesn’t claim that the set of feeling things and the set of things that cause in the proper way are distinct. However, he does claim that, if you look at the elements of these sets in other possible worlds, the sets are distinct. In our universe, there happen to be bridging laws that keep these sets equal. But, in some possible universes, these laws are absent.

      The implication of this for our own universe (on his view) is that no amount of information about the causal properties of a thing can explain why it feels. To complete the explanation, you will always need recourse to the bridging laws, and there will be no causal explanation for why those bridging laws obtain rather than other ones (or none).

      • Yes, Tyrrell is right; Chalmers says since it is “possible” to cause in all the ways we do, without feeling, no cause info can tell us we feel. Yet since we “know” we feel, we know this fact in some non-cause way.

    • Jesper Östman

      Furthermore, Chalmers doesn’t say it is “obvious”, but he bases his position on a bunch of arguments (such as the zombie argument, the knowledge argument and the explanatory gap argument). Of course, his arguments may be incorrect, but that’s another issue.

  • I, for one, eagerly await Hanson’s refutation of Hume. What exactly can we learn (with reason, or senses, or however) about ought? Surely without that primitive, you accept the arguments that I can’t move logically from a strict is to a strict ought. Indeed, I think the claim above is a much, much stronger one than some sort of Kantian “I know an ought exists because it’s intuitively obvious” argument.

    • Buck Farmer

      Hear hear!

      One route he could take would be to make ‘ought’ statements sort of moot by showing that the concepts of decisions, agents, et cetera are not necessary/real.

      Of course, without ‘ought’ I have a hard time determining relevance, which I think of as being tied up with “If statement A were true, would my decision be different than if statement A were false?”

    • Tyrrell McAllister

      If the mind is matter, then relying on your concepts is using info about the world, namely that part of the world that is contained in your skull. Of course, that might not be the part of the world that you should be looking at. But I take Hume’s argument to establish that info about your concepts is all you need to answer pure “ought” questions.

      That is, in this case, your questions really are just about certain material structures within your skull. Intuition and reflection on your own concepts happens to be the best way we have to discern info about these structures. But we can certainly hope that technology will eventually give us far superior means.

      • Right again; when you rely on your intuitions to decide between ought claims, you are relying on is facts about your mind. Even claims about what intuitions you would have given lots of time and evidence are is claims about real minds.

      • Matthew C.

        The proposition that matter is a form of mind is just as consonant with the everyday accepted facts as the proposition that mind is a form of matter.

        However, the first proposition does away with the “hard problem” of consciousness — explaining consciousness / awareness / experiencing in terms of the laws of physics. Instead, physics is simply a description of certain patterns of regularity we observe and can measure within the mental / conceptual / experiential world that we all live in.

        This first proposition is also very much compatible with ideas like the simulation hypothesis, so long as one is willing to drop the requirement that we are being simulated on some kind of particular hardware.

        I think the reluctance of materialists to explore this possibility is very much tied into their allergy to any idea that might bear any relationship to the “G” word. Because universal mind/consciousness that is the foundation of everything else, including “matter” (whatever that is in the world of quantum mechanics) sounds far too much like religion to be acceptable to their intellectual guardian memes.

        The derivation of matter from mind also allows the possibility of these sorts of phenomena, which are all pretty much anathema to the religion of reductionistic materialism. . .

        Part of a willingness to question prevalent dogmas about the nature of mind / consciousness is often a sincere investigation into our own personal ontology — how do we know what WE know, rather than what do some high-status scientists write in high-status journals or popular books or essays. Until one has investigated one’s own personal ontology, instead of relying on the “cached thoughts” of the modern memeplex, one is not able to apply to one’s own beliefs the same kind of sociological analysis that is so devastatingly effective at seeing through other previously culturally dominant mythologies (aka religions).

        What we see in people like Caplan and Chalmers is the kind of real humility that appears when that kind of actual ontological investigation is begun. At that point the edifice of the modern memeplex begins to crumble, and such people begin a search for something more sound and solid to replace it with. . .

    • The answer lies in how you disassemble the word “ought”. Arguments about whether you can or can’t move from is to ought, without defining “ought”, are useless.

  • symme7ry

    Robin, can you give an example of a statement that is both is and ought? Does this mean you are a moral realist?

  • Kevin

    So if Bryan isn’t entitled to rely on his intuitions but you’re entitled to rely on your “reflection” as related in your last paragraph, how do you distinguish between which intuitions are in and which are out?

    • It is a particular sort of intuition I’m critiquing, not all intuitions.

      • Robert Johnson

        Right, so which are in and which are out, like the man asked?

  • Norman

    “our theory of info”

    I wonder, upon reading this, exactly who is being included in the pronoun “our.”

    “It suggests an info fundamentalism: all that we know that could have been otherwise, even about ourselves, comes via our causal contact with what is”

    I was also not under the impression that theory of information was the same thing as epistemology (as this assertion implies), or that either is generally monolithic or even necessarily presented as well defined problems.

    “we have no good reason to think we have some other special ways of knowing.”

    I am unsure how this statement can be derived from “causal contact with what is,” unless of course “good reason” is defined to mean “supported by significant causal contact with what is,” which is a sort of question begging anyway.

    My impression is that the strict empiricism implied, which seems to ignore matters of axiom and tautology, would not sit well with most serious logicians or philosophers of mind.

    In general, demonstrating that concepts may overlap is a long way from showing that the distinction is misleading; and in general, based on claims I’ve seen Bryan make, it seems attributing to him the claim that mind and matter or is and ought are not only separate concepts, but mutually exclusive ones, mischaracterizes his argument. You are no doubt privy to discussions we on the blogosphere are not, but there’s no evidence presented here that Bryan has ever actually made the claim you are rejecting.

    • We do have a standard and pretty integrated info/epistemology theory, used in physics, econ, communication, computer science, statistics, and even philosophy.

      • Aaron

        Whose epistemology would you say that is? (I don’t mean that as a rhetorical question. It seems like there are many different ones out there; who do you think provides the best summation?)

  • One of these people is not like the other, one of these people doesn’t belong. One of these people is a reductionist who takes apart minds into nonmental parts, doesn’t claim unsharability/mysteriousness of his info, and makes his argument for the real-world distance between AI and cars on the basis of their visibly and nonmysteriously different decompositions into nonmental parts. It’s kinda a large difference. Homo sapienses are different from chimps, and AIs are different from screwdrivers – it’s just that the difference is a difference in how the atoms are put together, not a difference between atoms and nonatoms.

    Sure, you can be a fake reductionist who says “It’s all made of atoms!” while still using essentially dualist mental models. But come on – I of all people don’t quite deserve that accusation, do I? Or at least a bit more detail on the accusation.

    Yes, I see a discontinuity between AI and screwdrivers. It’s not a discontinuity of different parts, but it’s a discontinuity of different patterns and different real-world results. I’ll defend that.

    • I agree your example is the most different from the others listed, but I included it because it still seems based on a strong concept intuition: that the way in which AIs differ from other tech is far more fundamental than most other differences we see. I keep hearing that it is “obvious” that AI is so vastly different from other tech as to make comparisons irrelevant.

      • It’s not obvious, it just takes a lot of explaining to flesh it out.

      • Comparisons being “irrelevant” seems like an incredibly strong thesis. Surely one should not defend it, but rather ask, “who said that”?

      • Cameron Taylor

        One concept with which we could categorize and organise the world we see around us is that of ‘human made technology’. Our ancestors have observed a real, important and consistent pattern: such technology develops gradually, as a servant of its creators. It violates a powerful intuition to consider a technology a peer of humanity, joining us in our concept of ‘intelligent, self aware, creative agent’. It violates a further categorical intuition to consider that an ‘intelligent, self aware, creative agent’ need not be constrained by limits we take for granted in humanity.

  • mitchell porter

    I had not seen Bryan’s essay on the mind-body problem before. It seems reasonable enough as far as it goes. I might take this occasion to advertise my own approach to the problem, currently under attack at LessWrong, because it evades one of the problems identified by Bryan and one of the problems identified by Robin.

    The problem identified by Bryan is: even if we suppose that atoms individually have some extra property of proto-consciousness, how does that explain the existence of a complex single consciousness in a pile of atoms? My answer is to say that the world is not made of atoms, but of disjoint sets of entangled fundamental degrees of freedom, and that the conscious mind is one such entity, not a collection of them. Consciousness is thereby ascribed to a single entity rather than an amorphously bounded collection of them, and we avoid the Conscious Sorites Paradox.

    The problem identified by Robin is the disjunction between feelings and causes. Chalmers calls this the paradox of phenomenal judgment: if conscious states are not causally efficacious – something implicit in the usual philosophy-of-mind concept of a zombie as a conscious being with the consciousness subtracted but physical causal relations preserved – then what is the cause of beliefs about consciousness, talk about consciousness, and so forth? The monadic approach is monistic rather than dualistic (follow the link to LessWrong), so it’s not possible to subtract the consciousness from the physics and leave the physics intact.

    • TGGP

      Bryan didn’t write that, Mike Huemer did.

      I agree with Robin’s diagnosis of Bryan, but I also think Hume correctly diagnosed the gap between is and ought. Personally, I would lump questions of “ought” with Robin’s mention of “spiritual” questions. There’s no such thing as a spirit and there are no objective oughts, just subjective attitudes on the part of individuals.

  • Robin, I really don’t see how your criticism of Eliezer fits the pattern of the others. Which exact concepts is he wrongly treating as disparate?

  • Hi Robin,

    With respect to cognition, there is a philosophical concept called Eliminative Materialism that I think maps closely to what you’re describing.


  • jb

    Sounds like concept intuitions are like leaky abstractions in software – seductively easy to accept because they appear to add a lot of certainty.

  • Most of these conceptual distinctions do seem pretty ridiculous and unfounded.

    I think that, many years ago, I would have understood the mind vs matter distinction; matter would have been ‘real’ and ‘solid’, and mind some strange overlaid quality.

    Now I just see them both as abstract patterns that share some common ontology, and can be used to generate each other. The world of matter certainly implies worlds of mind, and the world of mind implies worlds of matter.

  • Steve Rayhawk

    Under your theory of info fundamentalism, how could it even be physically possible for people to be able to predict things about the behavior of one computer program using only mathematical reasoning, logical reasoning, and the behavior of other computer programs?

    If you decide logic is “causal contact”, but you decide qualitative models are a “special way of knowing” that we have no good reason to think we have, then what mental process do you use to decide whether something is someone else’s “special way of knowing” or your “causal contact”? If it is a matter of degree, then how is a model based on the analogy “AI is continuous with technology” different enough in degree from a model based on the analogy “AI is continuous with game-changers such as life and cultural reflective thinking” to justify anything like a 100:1 probability ratio between their predictions? How is the basis for your choice of analogy sufficiently like causal contact and unlike a special way of knowing, and how is the basis for Eliezer’s choice of analogy sufficiently unlike causal contact and like a special way of knowing?

    We have causal contact with the natures and effects of ecospheres and civilizations, just as we have causal contact with the natures and effects of technologies. And what we have come to know, from this causal contact, is that game-changing optimization platforms do make their “very different histories” irrelevant; in fact, they exist precisely because their optimizers find ways that history can be made irrelevant.

    • If one considers impossible possible worlds, one can see even learning math to be about causal contact with reliable calculators. Else one can set logic learning aside. The claim that we are about to invent a tech that is nothing like anything seen since life vs. death or humans vs. animals seems to me a very strong concept intuition, relative to comparing that new tech to other more recent tech.

      • michael vassar

        “We do have a standard and pretty integrated info/epistemology theory, used in physics, econ, communication, computer science, statistics, and even philosophy.”
        and consideration of impossible possible worlds, rather than being a part of this theory, is a shared non-standard intuition of Eliezer’s and of Robin’s, though some others have predated them in its use.

        Also, Robin, you didn’t address Steve’s question.

        Finally, are you *really* claiming that the natural comparison class for thinking machines is things like cars *not* things like thinking brains?!?

      • Steve Rayhawk

        I don’t understand what weight the phrase “concept intution” is carrying in this argument. Why can’t Eliezer argue:

        The very smart Robin Hanson, my once co-blogger, and others in his research group, think it obvious that a “technological” general optimization platform is so conceptually distinct from other general optimization platforms that devices that embody it will behave in a manner entirely historically precedented for other technologies; our very different history with other optimization platforms seems largely irrelevant to them.

        What is the asymmetry?

        It is of course correct to put nonzero prior probability on the appropriateness of any analogy, but why are your posterior odds so lopsided? — of the two analogies, why do you assign such a low relative posterior probability to the predictions from Eliezer’s analogy? Why do you mistrust his concept intuition so much compared to yours?

        Does the asymmetry really come from looking for “strong concept intuition”-related badness, and seeing that the optimization platform analogy has more of the badness than the technology analogy does? Which of these groups is more internally similar:

        “Mind is all something special, different from matter”, “Oughts are all something special, different from what can be known from is’es”, “Feelings are all something special, different from physically causable things”, “Some AI is analogous to known past game-changing optimization platforms and so is something special, different from other technologies”;
        “Mind is all something special, different from matter”, “Oughts are all something special, different from what can be known from is’es”, “Feelings are all something special, different from physically causable things”, “All AI is just a technology and so is nothing special, unlike known past game-changing optimization platforms some AI is analogous to”?

        A more charitable reading of your argument is that the theory of “too strongly trusted concept intuition” is just part of the prior for how people like Eliezer and us could be wrong, just like Bryan Caplan and David Chalmers, while the actual likelihood ratio in favor of our wrongness comes from a different part of the argument. (E.g., as you said, base rates for new optimization platforms vs. new technologies.) If our claim of “not like anything seen since life vs. death” was wrong, then the “very strong concept intuition” explanation for our wrongness would get a lot of posterior probability mass, as you say.
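The dispute above over lopsided posterior odds has a simple structure that a toy calculation can make concrete. Below is a minimal sketch of Bayes’ rule in odds form; the numbers are entirely hypothetical, chosen only to illustrate the shape of the argument, not drawn from anyone’s actual position:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Toy comparison of two analogies: "AI is like past technologies" vs.
# "AI is like past game-changing optimization platforms".

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update the odds between two hypotheses given a likelihood ratio."""
    return prior_odds * likelihood_ratio

# Start with even prior odds between the two analogies.
prior = 1.0

# A bare concept intuition supplies no evidential weight, so its
# likelihood ratio is 1 and the odds do not move:
print(posterior_odds(prior, 1.0))    # 1.0

# To honestly reach ~100:1 posterior odds, the evidence itself must
# supply a ~100:1 likelihood ratio:
print(posterior_odds(prior, 100.0))  # 100.0
```

The only point of the sketch is that a 100:1 posterior ratio between the two analogies must be purchased either with lopsided priors or with strong evidence; labeling the other side’s premise a “concept intuition” does neither by itself.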

  • I’m pretty sure that at least some of the specific intuitions you’re citing aren’t hard-wired into the brain, though having intuitions and feeling that they’re the absolute truth may well be.

    “Is does not imply ought” had to be invented. So did the idea of mindless matter.

    • Agreed.

    • Concept intuition. Are we talking about those attributes of truth, such as good, truth, beauty, etc., that arise directly out of cause and effect?
      These I would consider to be hardwired.

  • I don’t come away from reading this post with any solid sense of how to tell the difference between reliable and unreliable intuitions.

    Is the idea something like (a) many of our concepts are theory laden, (b) many of our implicit theories are unsupported by evidence, and (c) confidence based in “concept intuition” does not count as evidence?

    • Will, the kind of intuitions targeted here are about the properties associated with very basic categories of thought, and especially about their disjointness.

  • This all reminds me of Arnold Kling’s command to “drop the we.” We’re all materialists now.

    I’ve always distrusted the is-ought distinction because the claim “you can’t get an ought from an is” is an ought statement (telling me what to do), so the entire claim must be contradictory. Therefore, we can have ought claims from a materialistic worldview.

    • TGGP

      You are assuming the implicit normative claim that one should not believe or espouse incorrect, or at least unfounded, things. It is possible to reject that: Straussians with their “noble lie” are one example; Scott Aaronson’s irrationalist short story is another.

    • Tyrrell McAllister

      I’ve always distrusted the is-ought distinction because the claim “you can’t get an ought from an is” is an ought statement (telling me what to do), so the entire claim must be contradictory.

      The is/ought distinction isn’t a claim about what you ought to do. It isn’t normative. It’s a claim about what is possible as a matter of logic. The assertion is that a certain class of statements (“is”-claims) cannot by themselves imply members of another class of statements (“ought”-claims).

      Those who hold to the is/ought distinction argue that it is analogous to the fact that you can’t use Peano arithmetic alone to deduce that there are eight planets in the Solar System. The language of Peano arithmetic just isn’t powerful enough to make such assertions about the Solar System. Similarly, it is claimed, the language of “is”-claims just isn’t powerful enough to make assertions involving “ought”.

    • Interesting. That seems like an important distinction.

      How do I deal with the fact that most come into conversations committed to that implicit assumption? (At least in far mode.) It seems like the contradiction has just been moved to a different place.

      • Tyrrell McAllister

        Committed to what implicit assumption?

      • From your cohort: “One should not believe or espouse incorrect, or at least unfounded things”

  • The very smart Eliezer Yudkowsky, my once co-blogger, and others in his research group, think it obvious that “intelligence” tech is so conceptually distinct from other tech that devices that embody it can quickly explode to take over the world; our very different history with other tech so seems largely irrelevant to them

    This seems like a straw-man of Eliezer’s position. He doesn’t think that intelligence is important because it has some magical aura of conceptual distinctness, he thinks it is important because of the massive empirical effect it has had upon the world. Given that he thinks that intelligence is especially important, it is rational to think that technology that produces intelligence will probably be more important than tech that produces lipstick or cars.

    The prediction that intelligence tech will probably cause one particular intelligent mind to “take over” seems to be backed by the fact that humans managed to “take over” the global ecosystem.

    Our “very different history with other technology” is less relevant than it would otherwise be because we can see why a lipstick factory doesn’t take over the world: its production function requires inputs that it cannot produce, and there is no way that this can change if it remains a lipstick factory; the lipstick cannot be turned into labor and electricity and chemical supplies.

    • If minds are what brains make, and brains are local concentrations of computing power, it seems likely that we will have multiple minds for some time to come – since a single brain would be vulnerable to meteorite strikes and other localised disasters. Excessive centralisation would be an extremely obvious mistake.

      We could have multiple minds – with them all being related. Much as we have multiple computers today, with most of them running highly similar operating systems.

  • A dude

    Is it me, or is this an “ought” blog entry?

    If AI is possible it will happen regardless of whether we think mind is separate from matter or not. That’s an “is” statement.

    If you say that animals are not intelligent, and we are, because we can operate with ideas, you know that the question is likely to be resolved when we can put together 10^11 artificial neurons and 10^14 connections between them and run the optimization corresponding to ~10^5 generations times 10^X events. Then we’ll see if the magic happens.

    We may not recognize it as such because it may not even follow the pattern of distinct individuals sized to the same scale as our intelligence; it may leapfrog straight to an amorphous distributed being. Do microbes “know” that humans exist?

  • Steve Rayhawk

    But this usefulness is just not a strong enough basis on which to make sweeping conclusions about what must or cannot be true of all reality [. . .] they simply cannot tell you what you must or can’t see; for that you have to actually look at the world out there.

    But you’re the one saying that, for almost all practical purposes, AI cannot be a game-changer; it must act just like other technologies and intelligences. Eliezer is the one saying AI can be a game-changer, and he’s not saying it must.

    (He is effectively saying that there “must” be research on AI safety, but that’s a decision based on a present state of uncertainty, not a belief distribution. A decision whose expected value is +1 “must” be higher in expected value than a decision whose expected value is -1; certainty like that is completely normal.)

    • I don’t like people who try to suddenly draw back and make their conclusions look weaker and humbler (when they previously came on very strongly) in order to try to avoid an attack. I certainly don’t want to be guilty of that behavior myself.

      Insufficiently powerful AIs wouldn’t be game-changers, but “almost all” sufficiently powerful AIs would be. I do put forth that assertion, at that strength, and I am happy to be criticized on that basis.

      • michael vassar

        Did you just say that almost all AIs sufficiently powerful to be game changers are game changers, or that almost all AIs sufficiently powerful for some other purpose are game changers? If the latter, for what purpose?

  • If someone made a forecast based on astrology, and you criticized it saying astrological beliefs are unreliable, someone might respond that not relying on astrology also relies on an astrological belief, just a belief that such things are unreliable. Similarly if I say that people rely too much on strong conceptual priors that some kinds of things are very very different from other kinds of things, it is not that I am saying not to rely on any beliefs whatsoever about the reliability of conceptual priors. I am saying to rely less on such things, in favor of other kinds of arguments.

    • It seems as though we would need some stats on the reliability of such reasoning (compared to other approaches) to see if this is useful advice or not.

      Intuition relies on deep, unconscious brain mechanisms, which do have their strengths, and which can’t be easily accessed via things like abstract models. That compute power is worth something. I would rather see guidelines on when best to access such mechanisms – rather than a damning of the whole approach – but for that we would need some empirical results that bear on the issue.

      • Stats offer zero reliability. If six out of ten represent a stat, which six of the ten are we talking about? If stats worked then we would live in a perfect world. We are dealing here with probability.

        As to intuition, this is done through the binding of experiences and the building of abstraction. It is done concurrently from all experiences, including the experience of reflection. These experiences are shaped by our likes and dislikes. However, growth occurs through the binding of our likes to our dislikes, which allows us to experience the inexpressed. These inexpressions are experiences that have remained unbound in our indifference. Once bound, we may have an aha moment. I would suggest that there is no such thing as the unconscious.

        If we are ever to have AI we will need to provide a machine with experiences in order to give it ambivalence. Ambivalence is the lever which shapes the flow.

    • I am saying to rely less on such things [as strong conceptual priors], in favor of other kinds of arguments.

      In human intuitive decision-making, two opposing arguments can’t usually be balanced by tweaking weights on them; one wins over. One can’t start consciously relying less on some consideration – it’s not practically possible. One can only keep in mind the warning, try to change the amount of attention different ideas receive, and see what conclusion falls out. In all the cases you’ve listed in the post, shifting attention won’t help: one needs a crisis of faith that constructs a strong argument that blazes its way through the old beliefs. It can be catalyzed, but can’t be ignited by a mere charge of unreliability.

    • Y’know, from my perspective, I’m not saying that AI is different from the other things in its natural class. From my perspective, AI “obviously” doesn’t belong to the class you put it in – it is outrageously different (as a matter of pattern, not ontology, thank you very much!). Are 747s like birds (“flying things”), or like cars (“travel things”), or like factories (“large capital investment things”)? It seems to me that if you can reasonably get into that sort of argument, then you really do have to drop out of categorizations and say, “There’s a big metal thing out there with wings, which is neither a bird nor a car nor a factory, and it stays the same thing no matter what you call it; now what do we think we know about it, and how do we think we know it?” rather than “I’m taking my reference class and going home!”

  • must echo jb. intuitions are abstract models. there isn’t much of a problem with leaky models as long as you stay flexible enough to change the level of abstraction when it is required. allowing your ego to extend and invest itself in a particular model will make you resistant to doing this.

  • “Bryan Caplan’s intuition tells him it is obvious that “mind” and “matter” are disjoint categories, and cannot overlap; nothing could be both mind and matter. Thus he thinks he knows, based only on this conceptual consideration, that conscious intelligent machines or emulations are impossible.”

    Think concurrence here boy, I say.
    What does he think we are? Mechanism: any given aggregate of levers that implement a lever. Machine: any given aggregate of mechanisms that implement a mechanism. Complex mechanism: any given machine that implements its acquisition through its expression and implements its expression through its acquisition – metabolism.

    Of course we are machines. We build abstractions by changing the context. If that is not a machine, then I am…….?

  • mjgeddes

    Ignoring theory and considering practice, no one acts as if categories don’t exist and reductionism is true: there is never enough information, knowledge, or computational power available to compute everything from the first principles of physics. So in practice, any computation needs a multi-level map of reality and some intuitive categories to begin with.

    Readers can be very sure that in practice any AGI will require at least 27 base classes (corresponding to basic categories or prototypes required for general reality modeling) and corresponding bridging laws for a 27-level map of reality.

    Battle of intuitions seems to be equivalent to battle of the priors – and prior-setting seems to depend on categorization (analogical reasoning). To test whose intuition is best, we need some initial agreed-upon base categories, and then we need to compute concept distances to those known base categories; the best intuitive concepts are the ones with the shortest distance to the agreed base categories in feature space.
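    The distance-scoring idea above can be sketched in a few lines. Everything here – the feature dimensions, the vectors, and the choice of Euclidean distance – is a toy illustration of my own, not anything specified in the comment.

    ```python
    # Toy sketch: score candidate concepts by distance to agreed base
    # categories in a hand-made feature space (alive, moves, man-made).
    from math import dist

    base_categories = {
        "animal": (1.0, 0.9, 0.0),
        "machine": (0.0, 0.7, 1.0),
    }

    candidates = {
        "bird": (1.0, 1.0, 0.0),
        "747": (0.0, 1.0, 1.0),
    }

    for name, vec in candidates.items():
        nearest = min(base_categories,
                      key=lambda base: dist(vec, base_categories[base]))
        print(name, "->", nearest)  # bird -> animal, 747 -> machine
    ```

    Whether such agreed-upon base categories exist in the first place is, of course, the contested part of the proposal.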

  • Robin, you claim to know some propositions that meet both ‘is’ and ‘ought’ claims. I wonder if you’ll expand on that, since this is an ongoing puzzle and it seems like a strange claim to make without evidence or even an example.

    One candidate for bridging the divide is promising (the speech act), for instance, but I’m not sure that promises have much in common with the kinds of ‘ought’ statements that Caplan claims to know are true.

    • Tyrrell McAllister

      Robin, you claim to know some propositions that meet both ‘is’ and ‘ought’ claims.

      Where did he make that claim?

      • “On reflection, it seems to me quite possible… that some claims are both is and ought….”

        Though this is not the same as a claim of knowledge, I should think that, on reflection, Robin would want to explain how such hybrid claims are possible or point to someone who does. The reason this is not ‘quite possible’ has to do with the structure of the claims themselves: for instance, “I detest non-self-defensive killing” is a statement about my preferences, while “You ought not to murder” is a statement about a moral reality.

        As I pointed out earlier, one possible bridge across this gap is promising, since “I promise to return the five dollars you lent me” is a statement of fact and a statement of obligation in one fell swoop. Perhaps this is what Robin is gesturing towards with his “upon reflection,” because, like minds or qualia, the obligation is supervenient on some state of affairs (a particular configuration of neurons, a particular configuration of phonemes).

        But I suspect that Robin is actually a naturalist who takes utility- or preference-maximization to be a meta-ethical obligation in itself, which doesn’t really succeed in bridging the is-ought gap but rather ignores it. That’s why I inquired.

  • Allen

    On the idea that a computer simulation of a brain would be conscious, Hans Moravec had some interesting comments on this.

    How do we know when a given physical system implements a given computation? What is our criterion for establishing a valid mapping from the physical system running the simulation to the original physical system that is being simulated?

    With the right mapping, any given physical system could be said to implement any given computation – similar to the way that, with the right “one-time pad,” any random collection of bytes can be “decoded” into any data you want. Hilary Putnam discussed this in his 1988 book “Representation and Reality”.
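    The one-time-pad point is easy to make concrete. In the sketch below (my own illustration, not Putnam’s), the pad is chosen after the fact by XORing the random bytes with whatever message you want them to “decode” to – so the pad, not the ciphertext, carries all the content, just as an arbitrary mapping, not the physical system, would carry the computation.

    ```python
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        # Byte-wise XOR of two equal-length byte strings.
        return bytes(x ^ y for x, y in zip(a, b))

    random_bytes = os.urandom(17)     # any random "ciphertext"
    desired = b"any data you want"    # any target "plaintext" (17 bytes)

    pad = xor(random_bytes, desired)  # choose the "key" after the fact
    assert xor(random_bytes, pad) == desired
    ```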

    Where does meaning come from? The Symbol Grounding Problem would also seem to need addressing.

    Computationalism and functionalism are not without their problems…

    • …but this particular problem is overhyped.

      A physical system P implements a given computation C for an observer O iff there is mutual information between C and P given O.

      In other words, if learning the results of the physical process tells you something about the answer to the computation, then it is an implementation of the computation to the extent of how much it tells you.

      Yes, that means the existence of a computation is observer-dependent, and, to an observer who cannot harness the computational aspect of the phenomenon, there is no computation.

      More here
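      A minimal sketch of this criterion (a toy construction of my own, not from the comment): treat P’s state and C’s answer as random variables under an observer’s decoding, and check whether I(C; P) > 0. For an observer whose decoding connects the two, the mutual information is positive; for one whose “decoding” ignores the physical state, it is zero.

      ```python
      from collections import Counter
      from math import log2

      def mutual_information(pairs):
          """I(X;Y) in bits, estimated from a list of (x, y) samples."""
          n = len(pairs)
          pxy = Counter(pairs)
          px = Counter(x for x, _ in pairs)
          py = Counter(y for _, y in pairs)
          return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                     for (x, y), c in pxy.items())

      # Physical states: four equally likely configurations of two bits.
      physical = [(0, 0), (0, 1), (1, 0), (1, 1)]

      # Observer A decodes the state as the inputs of an AND gate, so the
      # state determines the computation's answer: positive MI.
      informed = [(p, p[0] & p[1]) for p in physical]

      # Observer B has no usable decoding (the "answer" is constant), so
      # the physical state tells them nothing: MI = 0.
      uninformed = [(p, 0) for p in physical]

      print(mutual_information(informed))    # > 0
      print(mutual_information(uninformed))  # 0.0
      ```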

  • Allen

    Yes, that means the existence of a computation is observer-dependent, and, to an observer who cannot harness the computational aspect of the phenomenon, there is no computation.

    So if we have a physical system such as a computer that implements a causal structure that is isomorphic (via some mapping) to that of my brain (at some substitution level) over a given period of time (maybe just a couple of seconds), then computationalism says that the activities of this physical system should have resulted in a conscious experience that would be equivalent to my own subjective experience over the period that is simulated. Same computations performed, same conscious experience.

    Whether there actually is an external observer who knows the mapping between the two physical systems (e.g. the computer and my brain) would seem to be irrelevant to the question of whether there was a conscious experience associated with the computer’s activities, right?

    Hans Moravec discusses this in my previous link. Here’s another good example. And here’s an interesting paper by Tim Maudlin highlighting another related problem with computationalism (takes a while to download). And of course Stephen Wolfram’s Principle of Computational Equivalence. And you may have seen the debate between David Chalmers and Mark Bishop on this.

    It really kind of looks like a Kantian-style antinomy to me. Assuming physicalism/materialism, computationalism seems like the best explanation for conscious experience… BUT, computation is ubiquitous, so how do you avoid having to make arbitrary distinctions about which physical systems implement which computations?