This bet depends utterly on mutual good faith: you each believe the other will apply the agreed-upon criteria honestly. This degree of trust would not be shared by people occupying different ideological camps on fundamental questions.

What is the main signaling function of announcing this bet? Hanson and Fox vouch for each other as the kind of honorable person the other can rely on.

Per discussion on Katja's site, I think this kind of signaling is generally conscious rather than subconscious. I'd be interested in opinions.

I just added to the post.

Here's an article elaborating on the question of what evidence there is for the accuracy of these brain simulations.

http://www.scientificameric...

The idea of copying the complexity evolution has inscribed in our brains sounds good in theory. But the fact that these models lack incremental feedback seems to me a red flag. With no feedback, there's no profit; with no profit, no investment. There may be a needle floating around in this haystack of possible futures, but that doesn't mean anybody is going to find it.

No secret ingredient. Not mere "complexity". I already answered above: a planning system with a self-model and internal sensors. Most computer systems don't match that description, but a few do.

Your search for a "secret ingredient" isn't driven by real-world data. You've invented a fantasy problem, and then you're trying to hypothesize imaginary solutions to it. But none of it is grounded in any actual observations. It's fire's phlogiston or light's ether.

"The idea is not that all information processing is necessarily conscious."

But that's basically what you were arguing, or at least that's how it seemed to me. If you think only "complex" systems are conscious, then you too are saying there is some secret ingredient that determines how complex is complex enough.

"The resolution of the actual zombie problem is: no, you only need to be concerned about implementing the behavior. If you can reproduce the right self-aware behavior, there's nothing "else" you need to add, to get consciousness."

I don't understand why people feel so uneasy at the thought that a zombie could replicate human behavior well enough to fool an outside observer; after all, we do most of what we do on auto-pilot anyway. I get the impression people think that people like myself are saying the secret ingredient is of divine origin or something like that, but that's not what we're saying. It may just be a matter of complexity, with the human brain having some added complexity on top of what's necessary for a human-like zombie. Personally, I'm thinking of something deeper (hence the remark about fundamental physics), but that something would still be part of nature and behave according to set laws, without the aid of god(s). And it is my belief that consciousness is about more than just the firing of neurons, so a simulation would also have to simulate the inner structures of neurons and their interactions with the world to create a real conscious mind. That would make it far easier, once we understand consciousness, to work from the ground up and build the simplest, most robust machinery that captures the spark of consciousness: some sort of artificial brain (hardware) instead of a simulation of human neuron firing.

Most people accept that human minds have both conscious and sub-conscious processing (although I love ASR's hypothetical!). The idea is not that all information processing is necessarily conscious. The idea is the other way around: that conscious activity is nothing more than (a special kind of) information processing. (Go back to your own original comment at the top of this thread, where you doubt that, even if you emulated the entire network of neurons from a human brain, the resulting computer would be self-aware.)

Your example of "sleeping" kind of misses the point. Obviously, in that moment, you aren't an entity that exhibits the behavior we want from consciousness: self-awareness, reflection, etc. The whole question is, if you exhibit the right behaviors, is there still something else required to be conscious? The (mistaken) idea of a philosophical zombie is that you could act exactly like a normal human, only somehow still be missing conscious experience. Your "sleeping" example isn't like that, so it's kind of beside the point.

The resolution of the actual zombie problem is: no, you only need to be concerned about implementing the behavior. If you can reproduce the right self-aware behavior, there's nothing "else" you need to add to get consciousness.

"Perhaps complex 'unconscious' information processing taking place in your brain *does* produce consciousness, and you just don't experience it because it (like the consciousness my brain produces) is not 'your' consciousness."

I actually thought about that. I suppose it's possible, but I don't think it's very likely (why wouldn't they leave memories for me to find, since "we" share the same brain?); I think it's more likely the conscious self is connected to (and influenced by) "zombie" functions of the mind. Self-aware pocket calculators are of course a whole different story: supposing that pocket calculators are conscious (as in self-aware) just to avoid having to deal with the hard problem reeks of desperation to me, but of course I could be proven wrong one day.

Of course computers will augment humans initially; they have been doing so for decades. In that respect there's not really any such thing as "from-scratch AGI", since computers were born in symbiosis with humans.

"I know zombie-like information processing exists: it is proven to me every time my brain regulates my breathing while I'm in dreamless sleep and everytime I perform an action on auto-pilot, really every time my body does something I did not consciously tell it to."

You seem to be assuming that if this information processing produced some level of consciousness you would experience this. But why would that necessarily be true? I assume you'll grant that whatever is going on in *my* brain produces consciousness, but you have no experience of that.

Why can't a similar phenomenon operate within a single brain? Perhaps complex 'unconscious' information processing taking place in your brain *does* produce consciousness, and you just don't experience it because it (like the consciousness my brain produces) is not 'your' consciousness. I see no present basis for belief in a principle like 'all awareness produced within a single computing device will merge into a single stream of subjective experience.'

I know zombie-like information processing exists: it is proven to me every time my brain regulates my breathing while I'm in dreamless sleep and every time I perform an action on auto-pilot, really every time my body does something I did not consciously tell it to. And I also happen to believe my mobile phone is not self-aware.

If I was unclear: I meant experience, not emotions, when I used the word "feelings". Experience is part of consciousness (the very act of being self-aware is an experience).

You have a theory that a "zombie-like information processing" system without consciousness would be possible, and then you pose yourself a hard problem: what "additional" step is needed to produce conscious experience?

The resolution is simple. Your zombie idea is simply incoherent. Consciousness isn't something "added" to complex information processing with a self-model; it's what such processing actually is.

(And BTW: adding "feelings" is more confusion. That's a completely separate topic, essentially orthogonal to consciousness.)

"We do, actually, have a sketch theory of consciousness: a planning process that has models and sensors on the external world, develops a self-model and internal sensors as well. So it can think about its own future possible actions as part of its planning process, and it can sense recent internal thoughts."

That's like saying light is something that allows eyes to see things: too general to be a real model, or indeed to be of any use. We really have no idea how, in a complex enough system, zombie-like information processing can evolve into self-awareness and feelings. Even if aliens gave us incredible supercomputing technology tomorrow, we wouldn't know where to even begin making that technology conscious; we do not even understand the most basic principles of consciousness. We're like cavemen wondering what fire is and how it works.

"IMASBA: You asserted that current computers don't have consciousness. I dispute that; I think some of them already do."

Even if they do, we lack the means to show it.

We scan pictures, movies, and music, but rarely complex machinery. Pumps aren't scanned hearts, cameras aren't scanned eyes, and so on. Bioinspiration seems most effective in practice when used sparingly. An 'em-first' world thus seems tremendously unlikely.

Y'all sure are attached to your qualia illusions, aren't you? (http://tinyurl.com/c3zq8ht) What would it be like to be a zombie? Is the thought of being one horrific? Or does it just seem so obvious?

"A handful of bits can describe the features birds need to have to fly, while vastly more bits are needed to describe how to make minds smart like humans."

This derives from what theorem?

EPH: No conflation; that was an analogy, not a proof. We do, actually, have a sketch theory of consciousness: a planning process that has models and sensors on the external world, develops a self-model and internal sensors as well. So it can think about its own future possible actions as part of its planning process, and it can sense recent internal thoughts.
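
To make that sketch concrete, here's a minimal toy version in Python. It is only my own illustration of the shape of the architecture (the class and its scoring rule are invented for the example), not a claim about how consciousness is actually implemented. The point is just that "internal sensors" can be as simple as reads over a log of the planner's own recent deliberations:

```python
# Toy sketch: a planner with sensors on the external world, a self-model,
# and "internal sensors" over its own recent thoughts. All names here are
# hypothetical illustrations.

class ReflectiveAgent:
    def __init__(self):
        self.world_model = {}    # beliefs about the external world
        self.self_model = {"last_action": None}   # beliefs about itself
        self.thought_log = []    # recent internal states, readable below

    def sense_world(self, observation):
        # External sensing: fold an observation into the world model.
        self.world_model.update(observation)

    def sense_self(self, n=3):
        # Internal sensing: the agent can inspect its own recent thoughts.
        return self.thought_log[-n:]

    def plan(self, candidate_actions):
        # Planning with a self-model: for each candidate, the agent asks
        # "what happens if *I* do this?", and records the deliberation so
        # it becomes sensable via sense_self().
        best_score, best_action = float("-inf"), None
        for action in candidate_actions:
            score = self._evaluate(action)
            self.thought_log.append(f"considered {action!r}: score {score}")
            if score > best_score:
                best_score, best_action = score, action
        self.self_model["last_action"] = best_action
        return best_action

    def _evaluate(self, action):
        # Placeholder outcome model; a real planner would simulate here.
        return self.world_model.get(action, 0)


agent = ReflectiveAgent()
agent.sense_world({"open door": 5, "wait": 1})
print(agent.plan(["open door", "wait"]))   # -> open door
print(agent.sense_self())                  # the agent reading its own thoughts
```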

And BTW: there's a big difference between having an understanding of what consciousness is and having a detailed theory of how it works or how to implement it. The two of you are using a word, but I wonder whether you're referring to a real-world concept with it.

IMASBA: You asserted that current computers don't have consciousness. I dispute that; I think some of them already do. Of course, not anywhere near as rich as human conscious experiences, but easily sufficient to cross the threshold from "none" to "a little bit".
