Some complained that I didn’t include a question on consciousness in my list of big questions. My reason is that I can’t see how we will ever know more than we do now. There’s nothing to learn: Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel, and otherwise act as if they feel, actually do feel. (And which other systems feel as well.)
Graziano is the guy who thinks puppets are conscious?
Sorry, I meant
So what gives with the current blog post, whose very title is based on the assumption that zombies *are* conceivable?
Robin, your "More" link points to an earlier blog post in which you quote Sean Carroll, I think approvingly:
"Philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems. The phrase “experiencing the redness of red” is part of a higher-level vocabulary we use to talk about the emergent behavior of the underlying physical system, not something separate from the physical system."
So what gives with the current blog post, whose very title is based on the assumption that zombies are *not* conceivable?
[I agree that zombies are inconceivable, btw. How could an aspiring zombie conceivably fool indefinitely the mind-reading machines of an arbitrarily technologically advanced civilization (think of your father clutching a rose between his teeth; think of the saddest thought you ever had; etc.), without actually possessing an inner life?]
I think when people get familiar with the idea of consciousness as (roughly) the proprioception of perception, then they won't be so weird about whether other people or AIs or ems have it. People who currently hold the feeling of "really being conscious" special and untouchable may seem immune to rational argument, but I think that once we have a good theory that lines up with all three of 1) what we experience, 2) how we're used to talking about it (modeling it conceptually), and 3) a detailed facility it makes sense for us to be wired to have, the philosophical squeamishness around it will transition to concern with brass tacks. Graziano's picture is the first one that enabled me to start making that transition.
I wrote a long comment earlier trying to summarize Graziano without getting to the point that I think his approach will coax people out of their caves.
Certainly no spirituality but there's plenty of mystery. We have no idea how your "top level controller" works, for example. I don't dispute that it likely exists. It is such a vague concept that no matter what we eventually learn, there will surely be something we can label "top-level controller".
I doubt NNs have anything worth calling consciousness. If we weaken the concept to that extent, you are verging on panpsychism, which I regard as non-scientific and utterly ridiculous. Could NNs be used to implement consciousness once we know what that means? Yes, of course: they are Turing-equivalent and can therefore, in principle, be set up to compute anything computable. But that's not the important part; what's missing is knowledge of consciousness's (and the brain's) algorithm.
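The universality claim has a textbook toy behind it (plain Python; the weights below are hand-picked for illustration, not learned): a single threshold neuron with the right fixed weights computes NAND, and NAND gates suffice to build any boolean circuit, so a large enough network can in principle compute any computable boolean function.

```python
# A single threshold "neuron": fires iff w.x + b > 0.
def neuron(weights, bias, inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# With weights (-2, -2) and bias 3, this neuron computes NAND.
def nand(a, b):
    return neuron((-2, -2), 3, (a, b))

# NAND is universal: e.g. XOR built entirely from NAND neurons.
def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))
```

Which is exactly the point above: universality says nothing about what algorithm the brain actually runs on such units.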
I don't dispute NNs are far from human capabilities, but they are non-linear function *discovery* machines (like much of the brain). That is a big difference. Research has mainly been focused on the equivalent of the subconscious 'disambiguation' processes (vision, language, sound, biology, e-data...), and not on the higher-level interpretations. That is changing, but those higher-level things *have* to be a simulation of some sort because there are no universals there. For lower-level processes like vision, the emergent properties are remarkably similar to a brain's, considering they aren't added intentionally. Layer filters in convolutional NNs, for example, end up similar in complexity and pattern to the brain's visual system. So an NN does model the world it experiences (data), just not in a way that is interesting or familiar to us (unless it is similar data with a similar architecture).
At the higher conscious level, we know the types of things we are interested in, but there is no way they naturally emerge out of a neural network. There is no universal mapping of shape to sound to meaning, even among human reading systems, so NNs will never converge on those without coercion. Same with beauty, taste, preferences, or movie plots - all of that is adaptively 'imagined' by us in the broad sense. We don't expect NNs to attain turtle, dolphin, or even spider types of awareness either, though those animals obviously have their own versions of consciousness. So at some level human consciousness will always be faked, but that doesn't mean it can't be real (as in an Em world).
I don't think it's controversial to say consciousness as we think of it is an emergent property of singular focus and control over these lower systems in a human context, and that top level controller is something we understand well enough to set bounds on how it works. It is totally fascinating, but I don't see any mystery or spirituality behind the concept.
 Section 6 of https://neurdiness.wordpres...
OK, a discussion killer. People claim to understand the world even though no one does, in the sense that we understand a particular alloy, say. We can make it via a long series of steps in exact order and to exact scale in very narrow circumstances. There is no theory for that alloy, just models. All inanimate material is a combination of charges and an organization of those charges. We do not understand the organization except in some descriptive sense, e.g. orbitals/clouds/distributions. Living organisms are similar except the organization is somehow different. Please demonstrate understanding or admit the total mystery of our world and existence. This has nothing to do with religion or nihilism.
As many have pointed out (eg, Gary Marcus), neural networks are really just function optimization. That the human brain also does function optimization is no great revelation. Same for an attention mechanism. There are many ways to do these things but we have no idea how the brain actually does them. Neural networks can't do object recognition anywhere near as well as humans. They also don't model the world except at a really low level but, at the same time, we know that modelling the world is very, very important. We know that neural networks can be trained to play a great game of Pong, for example, but move the paddle a few pixels and it has to start over from scratch. There are many such examples. In short, we have function optimization but it doesn't come close to reproducing human performance.
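To make the "function optimization" point concrete, here is a minimal sketch (plain Python; the data and hyperparameters are made up for the example): fitting a one-neuron model y = w*x + b by gradient descent on squared error. This is the whole mechanism, which is exactly why it seems thin as a theory of the brain.

```python
# "Neural networks are function optimization": fit y = w*x + b to data
# by gradient descent on the mean squared error, using exact gradients.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # target: w=2, b=1

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 3), round(b, 3))  # converges near w=2, b=1
```

Shift the data distribution (the Pong paddle moved a few pixels) and the optimized parameters are simply wrong until retrained.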
I choose to believe solipsism is true and all of you are merely puppets animated for my amusement. :-)
We know computationally that neural networks can process stimuli, find patterns, and learn (even simple ones like today's). A single-task attention mechanism can be informed by and direct that, as well as use it to hypothesize scenarios based on motion and causation etc. We know the brain works something like this at a grand messy scale, and that our version of consciousness can emerge from the top layer (as evidenced by us and most animals). While knowing more about the mechanics of it will explain many things, 'what is consciousness' will still boil down to that; there is no extra magic piece we're missing afaik.
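For what "attention mechanism" means mechanically in current NNs (not a claim about the brain - this is just the standard softmax-weighted lookup, sketched in plain Python with made-up vectors): a query softly selects among stored items by similarity and returns a weighted average of their values.

```python
import math

# Toy dot-product attention: score each key against the query,
# softmax the scores into weights, return the weighted value average.
def attention(query, keys, values):
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]      # softmax over scores
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out = attention([5.0, 0.0], keys, values)  # query matches key 0 strongly
```

Directing processing by weighting some inputs over others is the whole trick; whether that deserves the loaded word "attention" is the argument above.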
Maybe a concrete example would help - what brain/mind discoveries do you think would change our minds about what consciousness is and how it happens?
Robin, I agree that consciousness is a topic that would cause trouble if put on your list of big questions. But that's not because the topic itself is empty, but because it's still like Goedel's theorem just after it was published: even smart rationalist generalists get all tangled in the circularity of it. Also, although one can outline the topic, consciousness is a research area, not a cosmic question.
But you introduce the piece saying, unconditionally, that there's nothing to say about consciousness. Maybe you mean it's just a confused hiccup in the midst of real neuroscience, information science, and psychology. It is in those areas but it's not a hiccup. Consciousness is a big, real topic.
You start with the zombie counterfactual, where the feeling of consciousness is disconnected from any actual effect. You mention the question of whether others "really feel," and how some people insist we can't really verify whether people do.
The zombie idea is like a fable, The Story of the Ostrich that Lost its Head in the Sand. But after the story we're supposed to have a heightened appreciation of our heads, or keeping our heads, or at least keeping track of our heads... or what's the point?
(There's a side point, though tough for science, analogous to asking how one can verify without the receiver's key that an encrypted message actually contains meaningful content. But it's a side point.)
There's a lot to find out about consciousness! The mechanisms, wiring, encodings, how we learn to use the basic mechanisms to get the sophisticated skills, how culture has evolved the language aspect of it, etc. Consciousness, feeling, and the like are terms we developed before getting a grip on how that stuff works, but they're things everyone needs to do and talk about on a regular basis as practical matters.
A vivid example is what happens just before and then after this: "WOOPS, jeez, remind me never to use the phone when I'm driving!" What is distraction? Why and how would it occur to the driver to talk about it, and then how is it possible for him to retrieve the information? How is the passenger able to understand it? What could the driver possibly usefully do about having been distracted (that speaking about it would contribute to), and how could "reminding" the driver later possibly help the driver to improve his focus?
Obviously the driver's feeling of having been distracted motivated the driver's shouting (Since when is shouting not an action? OTOH, what exactly is the function of shouting in a closed car if the task is driving?), which then presumably affected the passenger, and that interaction may continue. The passenger believes the driver's report of distraction. Why should we be especially skeptical about this sort of report?
Only because there's a lot more to know about the process.
I'm mostly paraphrasing Michael Graziano: control and awareness of attention and information flow are vital to us. We have the ability to change focus, but also to be aware of where our focus is directed, and qualitative meta info about the information that's coming in. The awareness of focus and information flow, and the actual contents, are tagged to each other. (There's a bird there. I'm watching. What am I watching? The bird. What about the bird? I'm watching it. Why? Because what I see it doing is interesting.) Without these meta abilities we would be disabled (as humans anyway).
If a baseball player says he feels the bat in his hands, we don't discuss the hard problem of baseball. We understand that hands have touch sensors, and that "feel," in this case, refers roughly to that. Sensors, actuators and feedback systems per se are interesting but don't drag us into philosophy.
We don't ask, "How can I verify that you actually have a sense of touch and you're not faking it?" I mean we could go there, but do we have to? Where were we? Oh yeah, the actual sense of touch and its role in batting. But where were we? Oh yeah, that was a metaphor for consciousness's function in life.
Talking about saying "Where were we?" and popping the conversational stack is another example of employing consciousness in everyday conversation. And a metaphor for having got caught in the dizziness of the subject of consciousness and needing not to give up because of the morass around it, but to pop back out: consciousness is first of all a real set of coping abilities.
It has always operated in the dark next to the light pipes, feeling a set of controls and braille readouts that have always been in the dark. But we know-- never mind that we "only feel"-- that they are part of how we play the game of life. There's a lot left to know about consciousness!
I think Robin is saying he's happy with only being a color scientist and never seeing color. Others will disagree; color (as a stand-in for experience) is the only thing worth knowing; every other "knowledge" is an abstraction to capture it.
Maybe the question that's still open for a bullet-swallowing color-scientist like Robin: Sure you'll be able to engineer arbitrary experiences as we get better tech, but where is that range of experience coming from? What do we use as ground truth to train our intelligent systems on? What is our epistemology there?
I'm not sure we can say "we're as done as we can be" without answering those.
I would consider an ability to correlate our brain activity with other systems meaningful progress on consciousness. If we can entangle our thoughts deeply with other people or animals with some kind of direct wired connection, it might be possible to gain a deeper subjective understanding of consciousness that would actually satisfy people.
While I'm generally sympathetic to this pov there is at least the possibility that we will gain some partial knowledge the same way we do in theoretical physics (just finding the most elegant models that unify that effect with other aspects of reality). Of course, we'll never be able to be completely confident the results are true but it doesn't have to be zero or all driven just by intuitions of what we think should count as conscious. It can also be driven by intuitions about theoretical simplicity.
I think it's useful to separate feeling from consciousness here. Most of the brain is subconscious (System 1 in the Kahneman sense), processing input and learning, with information passing in a simplified encoded form to and from the self-aware part we have control over (System 2). This top-level 'us' is much simpler, evidenced by the way we can communicate what goes on there with something as low-dimensional as language (10-15 phonemes per second). We are sent information from System 1, but things like pain, smell, or even satisfaction are just encodings for something much more complex below.
So we can talk about pain or the smell of baking bread, but only through shared experience - it is impossible to describe a smell to someone who has never had the sense of smell. But at the same time, we wouldn't consider them less conscious. Examining the System 1 part for insights is the same as asking GPT-3 why it chose the next sentence - it's just the output of its extensive training experience on 175 billion parameters. That is the 'why', and the output is the encoded representation of that feeling.
I am confident we can get to examining and fully emulating the conscious part of our brain, and that it will be approximately explainable (after all, we can explain some things to each other already, and deeper motivations are knowable). But whether a zombie 'feels' a different version of pain, while knowable to us at the conscious level ("yes I do, ouch!"), isn't something we can experience or discuss at the System 1 level.
That said, System 1 is just a much larger parallel version of System 2, so if we have the technology to create an Em, we could also create something much larger that monitors and understands all its neurons at a meta level. It would know things about how Em pain feels very different from human pain, but it could never explain it to us. It could, however, explain it to another machine like itself. For that it would just map the feeling to some expressible encoding, akin to 'it smells like baking bread' - and the other would nod knowingly.
It might be too early to throw in the towel on questions of consciousness. I guess the main question is: Why am I conscious rather than a philosophical zombie? I.e. why is there consciousness in the universe?
This question asks for an explanation, but it is not even clear what an explanation is, in a precise sense. Are explanations laws? No. Explanations are asymmetric, laws of physics are symmetric, so laws by themselves are not explanations. Neither are explanations something purely epistemic, since explanations (B because A) can be true or false depending on what the world is like, not on what we know. The state of the universe at the Big Bang explains the state of the universe now, but the state of the universe now doesn't explain the state of the universe at the Big Bang. So does explanation have something to do with entropy? Maybe, but this can't be the whole story, since there are non-causal explanations. The average speed of the molecules explains the heat of the gas, and not the other way round. Einstein's theory of gravitation explains Newton's law of gravitation, and that in turn explains Kepler's laws. The irrationality of pi explains why squaring the circle is impossible.
It is not clear what would even count as a possible explanation for why consciousness exists in the universe, but it is also far from clear that there can be no such explanation when we don't even know what an explanation is.
(My guess is that the laws of physics, and the initial state of the universe, must entail that we are conscious. Laplace's demon must be able to deduce from the initial state of the universe and the complete laws of physics that humans are conscious rather than zombies. Otherwise physics wouldn't completely describe the universe. There must be some sort of psycho-physical laws. Those laws would of course also predict whether ems are conscious or not.)
Note that the case of consciousness is pretty much analogous to your first question: Why does the universe exist rather than not? This question, too, asks for an explanation. And here, too, it is not clear what would count as a possible answer. Another analogy is that both questions seem to be about something "primary": the existence of the universe is ontologically primary, while consciousness is epistemically primary.