Feels Data Is In

Bryan Caplan and I recently discussed whether brain emulations “feel.”  In such discussions, many prefer to wait-and-see, saying folks with strong views are prematurely confident. Surely future researchers will have far more evidence, right?  Actually, no; we already know pretty much everything relevant we are ever going to know about what really “feels”.

We know we each believe we feel (or are “conscious”) at the moment; we say so when asked, and remember so later.  When we trace out the causal/info processes that produce such sayings and memories, they seem adequately explained as a complex computation in signals passed between brain cells, and in state stored in cell types and connections.  We can see that this info process, including how it has us believe we feel, is basically preserved even when our brains are physically perturbed in many substantial ways, such as by changing location, chemical densities, atomic isotopes, the actual atoms, etc.

In the future our better understanding of brain details will let us make much bigger changes that preserve the basic computational process, and so still result in very different brains that say and remember that they feel similar things in similar situations.  Yes, some changes might modify their experiences somewhat; these new brains might be much faster, for example, and so talk about feeling that events pass by more slowly. But they’d still say they feel things much like us.

Of course we don’t think that video game characters today really feel when they say they feel or remember feeling – their talk about feelings seems canned, and not remotely as flexibly responsive to circumstances as ours. So we believe this canned behavior isn’t connected to feelings processes inside that are anything like ours. Thus we do think that some things with surface similarities to us can only apparently, but not really, feel (what they claim to feel).

Yes, it is an open question just how flexibly responsive apparent feeling behavior must be for us to believe genuine feelings are behind it.  In our world we see a huge empty gap between real people and video game fakes, making it easy to distinguish cases, but we may learn in the future of more awkward intermediate cases. We will also learn just how wide a range of physical systems can support flexible processes with a causal/info structure similar to those in our brains, that say and believe that they feel.

Nevertheless, while we expect to learn much about which sorts of things can flexibly say and remember that they feel, we have no reason to expect we will ever learn any more than we know now about which of these processes really feel. Not only don’t we have any info or signals whatsoever telling us what other creatures in our world feel, we don’t actually know if we ever felt in the past, nor does our left brain know if our right brain feels.

We humans are built to assume that we feel now, that we felt in the past, that both our brain halves feel, and that most other humans feel too.  But we have absolutely no evidence for any of this in the sense of signals/info that are correlated with such a state due to our interactions with that state.  If those things didn’t actually feel, but still had the same signal/info process structure making them say and think they feel, we’d still get exactly the same signals/info from them.

Future flexible creatures very different from us may also be built to flexibly assume that they feel now, have felt, all their parts feel, and so on.  You might decide that you don’t believe they really feel because they are made of silicon, because their temperature is below freezing, because they are located far from Earth, or because of any of a thousand other similarly simple criteria one might choose.  But it is not clear what basis one might have for any of these criteria; after all, we’ll never have any signal/info evidence for or against any of these claims.

It seems to me simplest to just presume that none of these systems feel, if I could figure out a way to make sense of that, or that all of them feel, if I can make sense of that.  If I feel, a presumption of simplicity leans me toward a pan-feeling position: pretty much everything feels something, but complex flexible self-aware things are aware of their own complex flexible feelings.  Other things might not even know they feel, and what they feel might not be very interesting.

You could consistently hold many other positions on what feels, using many other criteria; but I can’t see what reasonable basis you could have for such positions.  I can at least see some sort of basis for presuming simplicity, though I can also see reasonable critiques of that.  But what I can’t see is much hope that any new good reasons will ever be found to favor any of these positions over others.

The data on feeling is in; we are quite ignorant, and apparently must forever remain so.  If it isn’t fair to presume simplicity about which processes that flexibly say and remember they feel really do feel, then I can’t see anything else but to just remain radically and permanently uncertain.

Added 5Dec: Apparently my position above is close to “panexperientialism”.

  • secretivek

    Do we really have a “huge empty gap” between those we accept as feeling and those we don’t? Seems to me we can observe our various responses to animals, from insects up through fish and reptiles, to birds, mammals, ponies, dogs, and primates.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    Prof. Hanson,
    I think this post is a huge step forward in serious thought on this topic. I disagree with some of your framings (the stance that our knowledge on feelings won’t change, so let’s assume x is less reasonable than your stance on a wide range of other topics, which acknowledges that new technology will likely change our current models of reality in deep ways).
    But overall, great and serious post.

  • Dave

    My brain does tonnes of stuff that I’m not conscious of. Perhaps comparing areas of the brain that do stuff I’m conscious of to those areas that do stuff I’m not conscious of would yield further data, although I’d be surprised if this hasn’t already been done. I should look into that.

    • http://silasx.blogspot.com Silas Barta

      That reminds me of a Douglas Hofstadter quote: “I think, therefore I have no access to the level at which I sum.”

  • mitchell porter

    There exists a comparably comprehensive uncertainty with respect to physical reality. You do not know what happens in times or places you cannot observe, you do not know what hidden complexities there may be in basic physics. You might even be a brain in a vat. But you do have some methods for representing in an exact way some of the physical possibilities, and for reasoning with precision about which of them are likely (in conjunction with experiment).

    There does not exist a similarly precise way of talking about “feeling”. Thus, in this post you talk about whether or not things might feel, but hardly at all about what they feel. However, there is no reason to think that this is an inevitable or permanent condition. I recommend Husserl’s phenomenology for a glimpse of what a first-person “science of feeling” would look like. There are intimidating methodological and other problems, but the human race has brought order to intellectual chaos before, though sometimes it required centuries.

    I think that given time, a better phenomenology combined with progress in the material and causal analysis of the brain, and in the high-level abstract understanding of its dynamics, should in fact produce a convergence of evidence that will tightly constrain plausible theories about consciousness. The capacity to emulate a brain by bottom-up reconstruction does not guarantee any such development of understanding with respect to consciousness, but we should expect that people will go on trying to actually understand the brain, and not just to copy it and hack with it. Therefore, I disagree. The data on feeling is not in, but it is coming, and given time it will eventually make deep sense.

  • Stuart Armstrong

    This is a good post. Hopefully there will be some more development of these ideas here in future.

  • http://considerables.tumblr.com consider

    Maybe we want to wait-and-see what our intuitions tell us when we are face to face with a simulation, or in fact considering uploading ourselves once the technology actually becomes available. Not to wait-and-see if further evidence changes our minds. In the former case, I think we will find our intuitions do change. Not so in the latter, as I agree with your post.

  • Mike Howard

    Please could you state more precisely what you mean by “feel” and “conscious”? These words carry a lot of baggage, and are open to a great deal of interpretation.

  • http://hanson.gmu.edu Robin Hanson

    secretivek, I meant the gap between people and game characters.

    Hopefully, we may change how we view quantum gravity, but not the basic causal structure of interactions of atoms in brain cells.

    Dave, we will of course learn more about what in our brains influences the conscious feelings expressed in our words and memories, but not about whether those are *real* and not zombie feelings.

    mitchell, yes we do not know if the physical universe extends beyond our causal boundaries. But we similarly invoke simplicity to assume that it does.

    Stuart, not sure what more there is to develop here.

    consider, perhaps we know that near and far mode intuitions differ?

    Mike, I haven’t seen conversations of this sort helped by attempts at such precision.

    • http://www.rationalmechanisms.com Richard Silliker

      ” but not the basic causal structure of interactions of atoms in brain cells. ”

      Don’t bet the farm on it.

      • http://hanson.gmu.edu Robin Hanson

        I’d happily bet the farm at 1-20 odds.

      • http://www.hopeanon.typepad.com Hopefully Anonymous

        “Dave, we will of course learn more about what in our brains influences the conscious feelings expressed in our words and memories, but not about whether those are *real* and not zombie feelings.”

        Prof. Hanson, I feel like you’re still reaching for certitudes, prematurely.

        “Zombies” sounds like a move in a strawman direction. Here’s something more basic. A screen with speakers. When the screen is black and the speakers off, it indicates that in our theatre of consciousness we’re not actually seeing or hearing anything, even if we’re physically reacting (at whatever level of neuroanatomic organization our conscious awareness exists). Call it sleepwalking, not “zombie”. When we’re actually experiencing the qualia, it’s represented by images on the screen and sound coming out of the speakers.

        If and when we can develop a device like this, I think we’ll have come a lot further in understanding the observer’s subjective conscious experience.

      • http://www.rationalmechanisms.com Richard Silliker

        ‘okely-dokely’.

  • Alex

    I’m surprised that you didn’t mention David Chalmers, who also says panpsychism is plausible, using pretty much the exact same argument.

    He brings it up when he argues for his “double-aspect theory of information,” which is a specific proposal for how matter might give rise to experience (including some slightly notorious speculations on the inner lives of thermostats). But along the way he argues that ubiquitous experience is not such a ridiculous idea in general, and we even have reasons to suspect it.

    Here’s the best quote I could find easily, from his famous paper Facing Up to the Problem of Consciousness:

    An obvious question is whether all information has a phenomenal aspect. One possibility is that we need a further constraint on the fundamental theory, indicating just what sort of information has a phenomenal aspect. The other possibility is that there is no such constraint. If not, then experience is much more widespread than we might have believed, as information is everywhere. This is counterintuitive at first, but on reflection I think the position gains a certain plausibility and elegance. Where there is simple information processing, there is simple experience, and where there is complex information processing, there is complex experience. A mouse has a simpler information-processing structure than a human, and has correspondingly simpler experience; perhaps a thermostat, a maximally simple information processing structure, might have maximally simple experience? Indeed, if experience is truly a fundamental property, it would be surprising for it to arise only every now and then; most fundamental properties are more evenly spread. In any case, this is very much an open question, but I believe that the position is not as implausible as it is often thought to be.

    That quote makes it sound like he might think it bottoms out at simple devices or organisms that do information processing in the usual sense of the word, but he’s clear that this would also mean that rocks, electrons, and everything else has experience.

    • http://hanson.gmu.edu Robin Hanson

      Yes I had Chalmers in mind. But if I started to mention who said what when, this post would have been far longer.

    • iconoclast

      I liked the double-aspect theory of information better when it was called animism.

  • vincible

    Without having read Chalmers, I lean toward panpsychism as well. That said, I think it’s clear that explanations of the subjective experience of consciousness aren’t something that are ever going to arise naturally out of our current physics paradigms. The natural implication is that our description of the universe is incomplete in a really important way (assuming you think consciousness is important–and arguably it’s the most important thing of all).

    I certainly don’t see how this gap in our description of the universe can be closed. Your post says the same. However, I’m not sure why you’re so comfortable taking the leap from “I don’t see how to solve the problem and the ideas that have been proposed don’t work” (which I agree with) all the way to “no one will ever solve the problem.”

    I think someday this will actually be seen as an important problem that will be attacked by far greater brainpower and resources than we can bring to bear, and I would not then bet against seeing some progress.

  • Psychohistorian

    Cogito ergo sum covers most of the question of “Do I feel?” It is not possible that I don’t actually feel, but am being deluded into thinking I do, because that delusion itself is a feeling of some kind. Perhaps my left brain and right brain don’t feel, perhaps I’m some kind of computer program, but one thing that I can say with absolute certainty is that I feel.

    That said, the word “feeling” is a poor choice. “First-person experience” seems to capture what you’re talking about with greater precision and without the complex baggage that “feeling” carries. I think that a thing experiencing itself in the first-person is the same thing as a thing feeling. If it’s not, I’m quite curious as to what the difference is.

    Given that the universe appears to be reductionist, our first-person experience should be reducible. It appears to relate to our brain structure on some level. Thus, we can say with a high degree of confidence that video game characters do not have first-person experiences, because there’s simply no apparatus by which they could.

    Making your evaluation of first-person experience based solely on completely unevaluated testimony seems, well, deliberately naive. We know exactly what causes a video game character to say “I’m angry,” and it does not correlate with anything that would cause anger in an actual person. Indeed, we have no reason to believe that the symbols uttered have any correspondence with their content; the character could have just as easily said, “I’m perfectly calm!” or “Purple radishes actualize subserviently!” There’s no evidence of any kind that it understands the meaning of the symbols, as the only relevant inputs are the lines of code describing its response. Perhaps one day we will design characters who do have a first-person experience, but it is overwhelmingly unlikely that we have managed to do so yet.

    The behaviourist approach may be good for a laugh or a counterintuitive controversial statement, but, given what we know about the actual existence of a brain and its importance, limiting oneself to a purely behavioural theory of “feeling” seems like sticking your head in a bucket and complaining loudly about how dark the world is.

    • http://www.rationalmechanisms.com Richard Silliker

      “First-person experience”

      How about experience of experience of experience?

    • http://www.rationalmechanisms.com Richard Silliker

      “simply no apparatus by which they could. ”

      How about; they lack a mechanism.

    • http://hanson.gmu.edu Robin Hanson

      I made no claim that one must judge only on surface features.

  • Jackson

    Seems rather nightmarish to me… maybe we could create hell without realizing it.

  • Robert Ayers

    Of course we don’t think that video game characters today really feel when they say they feel …
    The entire thread feels like a re-run of the large introductory class on “behaviorism” that B F Skinner gave at Harvard circa 1960. One of Skinner’s goals in the class was to expose, to spotlight, the thought “Animals just have base instincts, while I, of course, am a reasoning being” and make the students worry about it.
    It was an excellent thought-provoking class. Fifty years later I still remember it and society is still re-creating it.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      Robert, I think an important distinction here is between claims that our sense of being “reasoning beings” isn’t illusory, and curiosity about the experience at least some of us have of qualia, a “theatre of consciousness”. Even if life is a ride and the controls are operated by instinct (or, ultimately randomness or the initial conditions of the universe, etc.) there still is that theatre of conscious experience and its neuroanatomic, neuroalgorithmic (or even extraneural) material intricacies to discover and explain.

  • http://modeledbehavior.com Karl Smith

    How is saying that one’s own memories of feelings are not evidence that we felt in the past any different from saying the geological record is not evidence that the continents had specific forms in the past?

    It could be the case that memories are false, sure. But I can use knowledge of those memories to predict my current feelings. Isn’t this the standard for evidence? I remember being hurt when I hit my head. I can now hit my head and yes it hurts.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      karl, i think expert consensus about the accuracy of specific past memories you have of your own feelings would be a closer match to expert consensus about the geologic record.

  • rob

    I think this touches on the main problem with emulations. If they can’t get laid, how will they feel about their status? We would just be creating an army of the worst sort of disgruntled workers.

  • Dagon

    We have data on things of our level of complexity that claim to feel and are believed. We have some data on less-complex things (computer game characters) that claim to feel and are disbelieved.

    Open question 1: where is the line “below” us (things that claim to feel, and we assign 50% probability that they’re correct)?

    Open question 2 (relevant to the prior discussion): are things more complex than us likely to claim that they feel? Perhaps they’ll have sufficient capacity that they don’t need the heuristics and simplifications that present as “feeling” to us.

    Open Question 3 (also relevant): even if more complex beings claim to “feel”, is it the same thing we claim? They may have a very different level of abstraction at which point they can no longer enumerate their calculations and have to summarize as “feeling” than we do. The flip side of this is whether they’d agree that what we claim is actual “feeling” compared to what they mean.

    • http://www.rationalmechanisms.com Richard Silliker

      “Open question 1: where is the line “below” us (things that claim to feel, and we assign 50% probability that they’re correct)?”

      Look for a complex mechanism.

      • http://www.rationalmechanisms.com Richard Silliker

        Sorry. I will provide a definition.

        Complex Mechanism : any given machine that implements its acquisition through its expression and implements its expression through its acquisition – metabolism.

    • Jeffrey Soreff

      I think we’ll have an easier time answering the question about things that are less complex than us than about things that are more complex than us, but for historical reasons rather than directly because of the difficulty in analyzing complex systems. If we want to know whether, for instance, a cat shares our experiences of, e.g., hunger, we can look for the neurological structures that fire in a human when he or she reports hunger and look for analogous structures and functions in Fluffy (as well as analogous behaviors). We share common ancestry with that cat, which hugely improves the odds of finding analogous structures. If we build an AGI which is more complex than a human, and if its architecture wasn’t intentionally designed to be as close to human as possible, there is likely to be much less analogous structure, which would make corresponding arguments much weaker.

  • mjgeddes

    The ideas behind every successful theory should be able to be stated in a few sentences and understood by a bright 12-year-old. Yes the technical details might be hard and need a high IQ, but the basic ideas should not be, no matter how advanced the theory. For example, take the theory of relativity:

    *’The speed of light is independent of the speed of the source, the laws of physics are the same in all reference frames and gravity is locally equivalent to acceleration’.

    That’s it. And now my basic conception of consciousness:

    *’Categorization is the process of grouping concepts according to their degree of similarity. Consciousness is simply the special case of categorization that is applied to our own internal decision-making systems. This categorization enables the sub-systems to be integrated and coordinated’.

    That’s it. Yes, I believe it really is that simple.

    The reason feeling/consciousness seems so puzzling is, I think, because the human brain cannot perform introspection to a high enough level; I do not believe we have any real second-order consciousness (consciousness about consciousness). Yes, we know we feel, but we do not have direct conscious awareness of the structure of our feelings.

    But there is no reason why a sufficiently advanced mind should not be able to have a sensory modality for feeling/consciousness itself (consciousness about consciousness). To do this, the various categorization procedures of the mind would have to be extended to enable self-categorization (i.e. categorization of the categorization procedures themselves).

    Such a mind would see no mystery about matter and feeling, since both of these concepts would simply be categorized into a new super category that summarized their relationship.

    And here’s the really big punch-line, something the entire OB/LW community has failed to grasp (and yes it really is something that any bright 12-year-old should be able to see):

    Such a mind wouldn’t hesitate to classify the concept of ‘Bayesian Induction’ itself under some particular super category. This would enable such a mind to make correct inferences about Bayesian Induction itself, which are not themselves based on any Bayesian evidence!!!

    If I am right, the implication is that further insights about feelings are possible, even in the absence of empirical data about the feeling/matter relation.

  • http://rationalmechanisms.com DWCRMCM

    “The reason feeling/consciousness seems so puzzling is, I think, because the human brain cannot perform introspection to a high enough level; I do not believe we have any real second-order consciousness (consciousness about consciousness). Yes, we know we feel, but we do not have direct conscious awareness of the structure of our feelings.”

    I think the model shows that we remember events, and we remember attributes arising from feelings that enclose events. The feelings themselves arise fresh and bound to the event memorial rather than the memorial event.
    If it were any other way we would always punch the guy we remembered punching previously.
    It would be a closed loop paradox of having no way in and no way out.
    Phantom limb syndrome for every cubic inch of your body.

  • haig

    Zombies rear their obfuscating heads once more!

    If those things didn’t actually feel, but still had the same signal/info process structure making them say and think they feel, we’d still get exactly the same signals/info from them.

    If you’re saying on the surface we can’t tell whether something is conscious or not, that may be true. But if you’re saying that no amount of deep inspection of what exactly the apparently conscious thing is doing can inform us if it actually is feeling, then I can’t agree. This is the core of Chalmers’ zombie argument and I’ve never understood why people are convinced by it.

    An analogy would be a computer program that just prints the string “2+2=4” and a computer program that REALLY goes through the process of computing 2+2 and returning the answer 4. The physical process in the world that occurs when both these programs run is different. Similarly, if we simulate a ball falling using Newton’s equations, or if we just animate several key frames of a ball moving on the screen, they may appear identical, but inspecting the code, seeing exactly what it is doing, will tell us the truth of the phenomenon. Why is consciousness any different? In the future, we can look into an EM that claims to feel and see if what it is doing is what our brains do, or if it’s just some bot that is coded to say it ‘feels’. We should be able to tell the difference upon a deeper inspection.
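
    To make this analogy concrete, here is a minimal sketch (the function names are illustrative, not from the comment): two hypothetical programs that print identical text, so that only inspecting their code reveals which one actually performs the computation.

    ```python
    # Minimal sketch: identical outputs, different internal processes.

    def canned_report():
        # Just emits a fixed string; no arithmetic is actually performed.
        return "2+2=4"

    def computed_report():
        # Actually carries out the addition, then formats the result.
        a, b = 2, 2
        return f"{a}+{b}={a + b}"

    # Both calls print the same text, so surface behavior cannot distinguish them;
    # only inspection of the code shows which one really computes.
    print(canned_report())    # 2+2=4
    print(computed_report())  # 2+2=4
    ```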

  • haig

    (sorry for double post, forgot the last part)

    I can definitely see where you’re coming from. The best we can do empirically is to compare what an entity that claims to be conscious is doing with what brains do (once we know exactly what that is). That is the nature of the ‘hard’ problem, the subjective perspective is by definition not open to objective analysis. But why would exact (or sufficiently similar) processes occurring in the universe not be the same phenomenon? The way you framed your conclusion, either we all feel or we all don’t feel, is just word play and it’s why the ‘hard’ problem is so controversial.

  • Pingback: Flavors of Computation Are Flavors of Consciousness – Foundational Research