
What does it mean to be "a computed feeling"? I understand what it would be to act like a person with certain feelings, but perhaps if you used a different physical substrate you would permute the actual feelings so that they differ from those a human would experience with similar I/O.

At a more general level, I'd note that the universe isn't actually very simple if you want to give a complete description. We tend to divide our description of the universe into simple dynamics and complex (in the Kolmogorov sense) initial conditions. However, note that there are many ways you might embed computation-like behavior in some ordered system (you could imagine ordered structures that 'compute' in some fashion, but do so along what we would call a spatial dimension, or something far weirder). The point is that there is arguably an anthropic argument going on here (the dynamics look simple to us because that's the only way you get complex computation via evolution), but it's not at all clear that this argument should extend to phenomenology, which seems irrelevant to evolutionary success: a p-zombie would be no less favored.
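For concreteness, here's a minimal sketch of what I mean (my own toy example, nothing from the post): Rule 110, a one-dimensional cellular automaton known to be Turing-complete. Its complete run is just a static 2D grid; we read one axis as time, but nothing in the finished pattern itself privileges that reading over a spatial one.

```python
RULE = 110  # the update table is encoded in the bits of this number

def step(cells):
    """Apply Rule 110 once, with wrap-around boundaries."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=32):
    """Return the full history: a 2D grid that 'computes' row by row."""
    cells = [0] * width
    cells[width // 2] = 1  # single seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

The printed grid is one fixed ordered structure; calling its rows "time steps" is our interpretive choice, not a property of the structure itself.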


Agree. Note that I didn't put a probability on this specific outcome. It is an example of what people may try, not a claim that it will succeed. It might. Society changes in surprising ways sometimes.


"Now consider the “hard problem” sort of question of human “consciousness”, where one assumes that we could have exactly the same physical state paired with with no feeling or actual feeling"

The Hard Problem does not assume that as an actual possibility. You may be thinking of the zombie argument, also by Chalmers.

The Hard Problem is the problem of reductively explaining consciousness ... So it assumes physicalism (not specifically computationalism).

Computationalism, as opposed to physicalism, is a theory of multiple realisability: the hardware on which the computation runs doesn't matter, so long as it is adequate to run the computation. Grey matter and silicon can run the same computations... and a lot of physical details are therefore irrelevant to consciousness.

Non-computational physicalism holds the opposite: the physical implementation can matter, and an algorithm implemented on the wrong hardware would lose some aspect of consciousness.
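A toy sketch of the distinction (my illustration; the class names are made up): the same abstract computation, an XOR built from NAND gates, realised on two different "substrates". Computationalism says only the abstract steps matter; non-computational physicalism says the implementation differences below could matter for consciousness even though the I/O behaviour is identical.

```python
from abc import ABC, abstractmethod

class Substrate(ABC):
    @abstractmethod
    def nand(self, a: bool, b: bool) -> bool: ...

class GreyMatter(Substrate):
    def nand(self, a, b):
        return not (a and b)          # "implemented" one way

class Silicon(Substrate):
    def nand(self, a, b):
        return (int(a) & int(b)) ^ 1  # "implemented" another way

def xor(s: Substrate, a: bool, b: bool) -> bool:
    """XOR built only from NAND gates -- the abstract computation."""
    c = s.nand(a, b)
    return s.nand(s.nand(a, c), s.nand(b, c))

for s in (GreyMatter(), Silicon()):
    # Same truth table on both substrates: identical I/O behaviour.
    print(type(s).__name__, [xor(s, a, b) for a in (0, 1) for b in (0, 1)])
```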

Non-physicalist, non-computationalist theories of consciousness can also have the implication that consciousness, or at least qualia, can disappear and change. Chalmers is often characterised as an epiphenomenalist, and epiphenomenalism is a non-physicalist theory according to which consciousness can disappear without any outward sign. But that's his solution to the hard problem, not the hard problem itself, nor an inevitable implication of the hard problem. It's possible to accept the existence and hardness of the hard problem without accepting Chalmers's solution.

And rejecting non-physicalist theories doesn't solve the whole problem, since qualia can change and disappear under non-computational physicalism as well.

I can't even guess who you are arguing against when you say "only physics computes". There are non-physical theories, and computational theories, but I have never heard of a theory where some non-physical substance performs additional computation.

And of course, insisting *that* consciousness is physical does nothing to resolve the hard problem, because the point of the hard problem is to explain *how*.


Well, astronauts do say they are lighter on their feet when they are in a place of lower gravity, but they don't say they are losing consciousness (unless the change in gravity impairs blood flow to their brains, which would make them "regular"-unconscious as opposed to P-zombie unconscious). So the former is more plausible than the latter.

Also, there's a clear chain of causation, from the gravity field to the astronaut's sensory nerves to their speech, that causes them to say they are lighter on their feet. We don't know of such a chain of causation that would cause them to say they're becoming a P-zombie. Again, this makes the latter less plausible than the former.


A gradually changing (say, weakening) gravity field would definitely ultimately cause me to emit a comment such as “I’m feeling lighter on my feet,” which is just typical human brain behavior. Why are you apparently more mystified by analogous behavior in your own scenario than in this one?


"But these seem just silly and arbitrary to me as theories of how our universe works."

Is this a good basis for rejection? Thought experiment: imagine you are a stone-age Homo sapiens, with the same innate cognitive capacity you have now, but with none of the knowledge of the universe accumulated by humanity since the adoption of agriculture. If many of the basic facts we take for granted were presented to you in that context, would they seem 'silly and arbitrary'?


My personal opinions about what is conscious don't matter, since the subject is what society as a whole does to chickens despite believing they are conscious, and as a corollary what society would do to machines that they believe are conscious.

I think most people believe chickens feel pain. They are vertebrates, and their nervous system is pretty similar to ours. I'd be interested to see a poll on something like that. The existence of "animal cruelty" laws, which do apply to chickens, would indicate a belief that chickens can suffer and are therefore P-conscious.


So chickens are conscious? Hmm, ok. Well then, how about turtles? Or fish? Are snails conscious? Surely a bacterium is not (right?). So... where/how do you set the boundary?


Chickens are conscious, but society permits their slaughter. So I don't think society will end up with laws against shutting off conscious computers, unless the conscious computers start acquiring enough bargaining power to lobby for it.

Remember that most people are religious, and therefore assign moral value to "human souls," not consciousness in general. The laws will reflect that.

Also remember that powerful people and corporations look out for their own interests above all. Corporations used to keep slaves and run company towns, and many large US corporations still exploit slave labor overseas. Ethics are not much of a concern unless the public cares enough to force the corporation to care, which happens rarely.

It doesn't seem that it would be in the financial interests of Google or Facebook, or any large corporation, to have laws against shutting off conscious AIs. It could complicate their business operations if they use conscious AIs to make money. So they would work to prevent such laws being passed.


I predict that:

- consciousness will turn out to be explainable with manageable complexity (by 2040, 90%)
- the explanation will match observed behavior and allow decent predictions (like which diseases or drugs will have an effect on it)
- many people will dispute this and come up with edge cases and/or put extra demands on what consciousness is supposed to be (but that wouldn't change the predictivity of the theory, of course)
- the theory will allow engineers to build systems that are conscious in a recognizable way (by 2050, 85%)
- many people will dispute this and claim those systems are zombies
- some of the big systems will be ruled moral persons by at least a few courts (60%)
- engineers will optimize the systems, allowing smaller and smaller systems to be conscious in this sense, to the point where they do little else besides being conscious (70%)
- people will do all kinds of crazy stuff with this, maybe embedding minimal such systems in devices to prevent them from being turned off.


The argument I posed doesn't rely on consciousness (by which I mean P-consciousness) having any independent causal effects on behavior.

It instead relies on the concept (essentially, functionalism) that you don't have mental events without a corresponding physical process. But this is causation in the opposite direction, where physical processes cause mental events.

The argument is not that gradually phasing out consciousness would cause the consciousness to directly influence the physical world through non-physical means - that is a misunderstanding.

The argument is that gradually phasing out consciousness would cause a brain process to form a representation of this phasing-out, because brain processes are constantly monitoring and representing the neural correlates of P-consciousness, which is what allows us to talk or reason about P-consciousness. And this is likely to result in a brain process producing a verbal comment about the phasing-out.

We can be confident that the brain processes would do that, because all our stories about what the mind does correspond to stories about what the brain processes do. And we have a clear story about the mind, that if a person is gradually losing consciousness, they're likely to notice and say so. This story corresponds to the physical statements about the brain processes. It doesn't have any "effect" on the brain processes, it merely corresponds to them.


I'm presuming the "hard problem" sort of consciousness, that has NO effects at all on behavior.


Your addendum, "any change of any physical parameter in your immediate environment risks turning you into a zombie," is not quite correct. There are some constraints we can deduce about which changes in physical parameters might turn you into a zombie, and which cannot.

Specifically, we can say that if the loss of consciousness causing zombiehood is gradual, then the physical change must have enough of a functional influence on the brain to potentially result in verbal comments about the loss of consciousness.

Why? Because if the loss of consciousness is gradual, then, since you are capable of reflecting on your state of consciousness and are still partially conscious as the change is going on, you would be aware of it. You would be able to notice your visual qualia gradually fading away, for example. And that means you could comment on the fading.

That is what would happen mentally, and what happens mentally corresponds to brain processes, so there must be a corresponding story about brain processes. In terms of brain processes, the gradual loss of consciousness would cause a brain process to represent the gradual loss of consciousness, which would cause another brain process to emit a verbal remark about it. All of this would have to be the physical result of the physical change that led to the loss of consciousness.
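A toy model of that last step (my sketch; all names are hypothetical and the numbers arbitrary): a "monitor" process tracks an internal signal standing in for the neural correlates of consciousness, and the verbal report is driven entirely by the physically represented signal level, with no extra non-physical causation needed.

```python
def monitor(signal: float, baseline: float = 1.0) -> str:
    """A brain-like process that represents its own internal signal level."""
    if signal < 0.5 * baseline:
        return "I seem to be losing consciousness."
    return "Everything feels normal."

signal = 1.0
for step in range(10):
    signal *= 0.8  # some physical change gradually attenuates the signal
    print(f"step {step}: signal={signal:.2f} -> {monitor(signal)!r}")
```

Note that the report only occurs because the physical change functionally influences the monitored signal, which is exactly the constraint stated above.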

So, as a corollary, a gradually changing gravity field could not cause a gradual loss of consciousness, unless there's some physical mechanism by which the gradually changing gravity field would cause the physical brain to emit a comment such as "I am losing consciousness." I don't see a plausible physical mechanism for that, unless the change in gravity becomes so severe that it impairs blood circulation.


If so, then the universe would be allowing all feelings that any brain chooses to allow.


I strongly agree with the title and beginning of this piece; I weakly agree with the end.

“Only physics computes” because physics is real in ways that bits aren’t. If we want our understanding of consciousness to point to something real, we should ground it in something real like physics, not something frame-dependent like Turing-level processes.

I recently wrote a short piece about "AI consciousness" and how it conflicts with the view from nowhere: https://opentheory.net/2022...

Here’s a summary of my invited talk for Mathematical Consciousness Science on key forks in the road: https://opentheory.net/2022...

I tend to be on “team physics”, but a more modest, big-tent position I prefer to advocate for is “consciousness is the sort of thing that could be crisply formalized”, i.e. Tononi’s thesis that there exists a mathematical representation of what-it-feels-like-to-be-you-right-now. I believe taking formalism seriously leads to physics, and also (somewhat orthogonally) to the Symmetry Theory of Valence.
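To make “crisply formalized” slightly more concrete, here is a purely illustrative toy (my own crude reading, not STV’s actual formalism): if an experience had a mathematical representation, say a connectivity matrix A, one naive “symmetry score” is how close A is to its transpose.

```python
def symmetry_score(A):
    """1.0 for a perfectly symmetric matrix, lower as asymmetry grows."""
    n = len(A)
    sym  = sum((A[i][j] + A[j][i]) ** 2 for i in range(n) for j in range(n))
    asym = sum((A[i][j] - A[j][i]) ** 2 for i in range(n) for j in range(n))
    return 1.0 - asym / (sym + asym) if sym + asym else 1.0

print(symmetry_score([[0, 1], [1, 0]]))  # symmetric  -> 1.0
print(symmetry_score([[0, 1], [0, 0]]))  # asymmetric -> 0.5
```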


Why can’t my brain compute not only the potential feelings which influence my behavior, but also which subset of those potential feelings I actually feel?
