128 Comments

"Yes, that means the existence of a computation is observer-dependent, and, to an observer who cannot harness the computational aspect of the phenomenon, there is no computation."

So if we have a physical system, such as a computer, that implements a causal structure isomorphic (via some mapping) to that of my brain (at some substitution level) over a given period of time (maybe just a couple of seconds), then computationalism says that the activities of this physical system should result in a conscious experience equivalent to my own subjective experience over the simulated period. Same computations performed, same conscious experience.

Whether there actually is an external observer who knows the mapping between the two physical systems (e.g. the computer and my brain) would seem to be irrelevant to the question of whether there was a conscious experience associated with the computer's activities, right?

Hans Moravec discusses this in my previous link. Here's another good example. And here's an interesting paper by Tim Maudlin highlighting another related problem with computationalism (takes a while to download). And of course there's Stephen Wolfram's Principle of Computational Equivalence. And you may have seen the debate between David Chalmers and Mark Bishop on this.

It really kind of looks like a Kantian-style antinomy to me. Assuming physicalism/materialism, computationalism seems like the best explanation for conscious experience... BUT, computation is ubiquitous, so how do you avoid having to make arbitrary distinctions about which physical systems implement which computations?


"On reflection, it seems to me quite possible... that some claims are both is and ought...."

Though this is not the same as a claim of knowledge, I should think that, on reflection, Robin would want to explain how such hybrid claims are possible or point to someone who does. The reason this is not 'quite possible' has to do with the structure of the claims themselves: for instance, "I detest non-self-defensive killing" is a statement about my preferences, while "You ought not to murder" is a statement about a moral reality.

As I pointed out earlier, one possible bridge across this gap is promising, since "I promise to return the five dollars you lent me" is a statement of fact and a statement of obligation in one fell swoop. Perhaps this is what Robin is gesturing towards in his "upon reflection" because, like minds or qualia, the obligation is supervenient on some state of affairs (a particular configuration of neurons, a particular configuration of phonemes).

But I suspect that Robin is actually a naturalist who takes utility- or preference-maximization to be a meta-ethical obligation in itself, which doesn't really succeed in bridging the is-ought gap but rather ignores it. That's why I inquired.


...but this particular problem is overhyped.

A physical system P implements a given computation C for an observer O iff there is mutual information between C and P given O.

In other words, if learning the results of the physical process tells you something about the answer to the computation, then it is an implementation of the computation to the extent of how much it tells you.
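One way to state that criterion formally, assuming the computation's output C and the physical process P are treated as random variables from O's point of view (an assumption not spelled out in the comment itself):

\[
P \text{ implements } C \text{ for } O \iff I(C \,;\, P \mid O) > 0,
\]

with the degree of implementation scaling with the size of \( I(C \,;\, P \mid O) \).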

Yes, that means the existence of a computation is observer-dependent, and, to an observer who cannot harness the computational aspect of the phenomenon, there is no computation.

More here


The answer lies in how you disassemble the word "ought". Arguments about whether you can or can't move from is to ought, without defining "ought", are useless.


Hans Moravec had some interesting comments on the idea that a computer simulation of a brain would be conscious.

How do we know when a given physical system implements a given computation? What are our criteria for establishing a valid mapping from the physical system running the simulation to the original physical system that is being simulated?

With the right mapping, any given physical system could be said to implement any given computation, similar to the way that, with the right "one-time pad", any random collection of bytes can be "decoded" into any data you want. Hilary Putnam discussed this in his 1988 book "Representation and Reality".
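As a rough illustration of the one-time-pad point (the names and values below are purely illustrative, not taken from the comment): given any random bytes, a pad can be constructed after the fact under which they "decode" to whatever you like, so the decoding is carried entirely by the choice of mapping.

```python
import os

def pad_for(random_bytes: bytes, target: bytes) -> bytes:
    # Choose the pad *after* seeing the random bytes, so XOR yields the target.
    return bytes(r ^ t for r, t in zip(random_bytes, target))

def decode(random_bytes: bytes, pad: bytes) -> bytes:
    # "Decode" by XORing with the pad.
    return bytes(r ^ p for r, p in zip(random_bytes, pad))

noise = os.urandom(12)      # any random collection of bytes
target = b"Hello, world"    # the data we want to "find" in the noise
pad = pad_for(noise, target)
assert decode(noise, pad) == target  # the pad, not the noise, does the work
```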

Where does meaning come from? The Symbol Grounding Problem would also seem to need addressing.

Computationalism and functionalism are not without their problems...


"Robin, you claim to know some propositions that meet both 'is' and 'ought' claims."

Where did he make that claim?


The proposition that matter is a form of mind is just as consonant with the everyday accepted facts as the proposition that mind is a form of matter.

However, the first proposition does away with the "hard problem" of consciousness -- explaining consciousness / awareness / experiencing in terms of the laws of physics. Instead, physics is simply a description of certain patterns of regularity we observe and can measure within the mental / conceptual / experiential world that we all live in.

This first proposition is also very much compatible with ideas like the simulation hypothesis, so long as one is willing to drop the requirement that we are being simulated on some kind of particular hardware.

I think the reluctance of materialists to explore this possibility is very much tied to their allergy to any idea that might bear any relationship to the "G" word. A universal mind/consciousness that is the foundation of everything else, including "matter" (whatever that is in the world of quantum mechanics), sounds far too much like religion to be acceptable to their intellectual guardian memes.

The derivation of matter from mind also allows the possibility of these sorts of phenomena, which are all pretty much anathema to the religion of reductionistic materialism...

Part of a willingness to question prevalent dogmas about the nature of mind / consciousness is often a sincere investigation into our own personal ontology -- how do we know what WE know, rather than what some high-status scientists write in high-status journals or popular books or essays? Until one has investigated one's own personal ontology, instead of relying on the "cached thoughts" of the modern memeplex, one is not able to apply to one's own beliefs the same kind of sociological analysis that is so devastatingly effective at seeing through other previously culturally dominant mythologies (aka religions).

What we see in people like Caplan and Chalmers is the kind of real humility that appears when that kind of actual ontological investigation is begun. At that point the edifice of the modern memeplex begins to crumble, and such people begin a search for something more sound and solid to replace it with...


Robin, you claim to know some propositions that meet both 'is' and 'ought' claims. I wonder if you'll expand on that, since this is an ongoing puzzle and it seems like a strange claim to make without evidence or even an example.

One candidate for bridging the divide is promising, for instance, but I'm not sure that promises have much in common with the kinds of 'ought' statements that Caplan claims to know are true.


Ignoring theory and considering practice: no one acts as if categories don't exist and reductionism is true, because there is never enough information, knowledge, or computational power available to compute from the first principles of physics. So in practice, any computation needs a multi-level map of reality and some intuitive categories to begin with.

Readers can be very sure that in practice any AGI will require at least 27 base classes (corresponding to the basic categories or prototypes required for general reality modeling) and corresponding bridging laws for a 27-level map of reality.

The battle of intuitions seems to be equivalent to a battle of priors, and prior-setting seems to depend on categorization (analogical reasoning). To test whose intuition is best, we need some initially agreed-upon base categories, and then we need to compute concept distances to those base categories: the best intuitive concepts are the ones with the shortest distance to the agreed base categories in feature space.
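A minimal sketch of that "shortest distance in feature space" test, with made-up base categories and feature vectors (none of the names or numbers below come from the comment):

```python
import numpy as np

# Hypothetical feature vectors for a few agreed-upon base categories.
base_categories = {
    "agent":    np.array([1.0, 0.0, 0.2]),
    "artifact": np.array([0.1, 1.0, 0.0]),
    "process":  np.array([0.0, 0.3, 1.0]),
}

def distances_to_bases(concept):
    # Euclidean distance from a candidate concept to each base category.
    return {name: float(np.linalg.norm(concept - vec))
            for name, vec in base_categories.items()}

candidate = np.array([0.8, 0.4, 0.3])   # e.g. a feature vector for "AGI"
d = distances_to_bases(candidate)
best = min(d, key=d.get)                # base category at shortest distance
print(best, round(d[best], 3))
```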


Did you just say that almost all AIs sufficiently powerful to be game changers are game changers, or that almost all AIs sufficiently powerful for some other purpose are game changers? If the latter, for what purpose?


Y'know, from my perspective, I'm not saying that AI is different from the other things in its natural class. From my perspective, AI "obviously" doesn't belong to the class you put it in - it is outrageously different (as a matter of pattern, not ontology, thank you very much!). Are 747s like birds ("flying things"), or are 747s like cars ("travel things"), or are 747s like factories ("large capital investment things")? It seems to me that if you can reasonably get into that sort of argument, then you really do have to drop out of categorizations and say, "There's a big metal thing out there with wings, which is neither a bird nor a car nor a factory, and it stays the same thing no matter what you call it; now what do we think we know about it, and how do we think we know it?" rather than "I'm taking my reference class and going home!"


I don't like people who try to suddenly draw back and make their conclusions look weaker and humbler (when they previously came on very strongly) in order to try to avoid an attack. I certainly don't want to be guilty of that behavior myself.

Insufficiently powerful AIs wouldn't be game-changers, but "almost all" sufficiently powerful AIs would be. I do put forth that assertion, at that strength, and I am happy to be criticized on that basis.


Furthermore, Chalmers doesn't say it is "obvious"; rather, he bases his position on a number of arguments (such as the zombie argument, the knowledge argument, and the explanatory gap argument). Of course, his arguments may be incorrect, but that's another issue.


One concept with which we could categorize and organise the world we see around us is that of 'human-made technology'. Our ancestors have observed a real, important, and consistent pattern: such technology develops gradually and remains a servant of its creators. It violates a powerful intuition to consider a technology a peer of humanity, joining us in our concept of 'intelligent, self-aware, creative agent'. It violates a further categorical intuition to consider that an 'intelligent, self-aware, creative agent' need not be constrained by the limits we take for granted in humanity.


"I am saying to rely less on such things [as strong conceptual priors], in favor of other kinds of arguments."

In human intuitive decision-making, two opposing arguments can't usually be balanced by tweaking weights on them; one wins over. One can't start consciously relying less on some consideration; it's not practically possible. One can only keep the warning in mind, try to change the amount of attention different ideas receive, and see what conclusion falls out. In all the cases you've listed in the post, shifting attention won't help: one needs a crisis of faith that constructs a strong argument that blazes its way through the old beliefs. It can be catalyzed, but it can't be ignited by a charge of unreliability.


Concept intuition. Are we talking about those attributes of truth, such as good, truth, beauty, etc., that arise directly out of cause and effect? These I would consider to be hardwired.
