My dearest colleague Bryan Caplan has broad, solid training, penetrating insight, and a laser-like focus on the important questions. But Bryan shares an all-too-common intellectual flaw with other very smart folks: he trusts his concept intuitions way too much.
Our minds come built with concepts that let us categorize and organize the world we see. Those concepts evolved to be useful in the world of our ancestors, and we expect them to reflect real, important, and consistent patterns of experience in that ancestral world. Such concepts are surely far from random.
Nevertheless, we have little reason to think that our evolved concepts map directly and simply onto the fundamental categories of the universe, whatever those may be. In particular, we have little reason to believe that categories that seem to us disjoint cannot in reality overlap. For example:
Bryan Caplan’s intuition tells him it is obvious that “mind” and “matter” are disjoint categories, and cannot overlap; nothing could be both mind and matter. Thus he thinks he knows, based only on this conceptual consideration, that conscious intelligent machines or emulations are impossible.
Bryan’s intuition tells him it is obvious that “is” and “ought” claims are distinct categories, and no ought claim could ever be justified by any set of is claims. Since Bryan is sure he knows some ought claims that are true, he concludes he has a way to know things that doesn’t come via info about the world.
The brilliant David Chalmers (and others) thinks it obvious that the category of things that “feel” is distinct from the category of things that can “cause” other things, which to him implies a deep puzzle: why can we humans feel, in addition to participating in cause-and-effect interactions? Folks like Chalmers are sure we know we can feel, yet hold that the conceptual distinctness of feeling implies this info does not come to us via our causal relations. They conclude we have ways of knowing independent of our causal interactions.
The very smart Eliezer Yudkowsky, my one-time co-blogger, and others in his research group think it obvious that “intelligence” tech is so conceptually distinct from other tech that devices embodying it can quickly explode to take over the world; our very different history with other tech thus seems largely irrelevant to them.
Once upon a time many now-quaint conclusions were thought to follow from the conceptual distinctness of “living” vs. “dead”, or “spiritual” vs. “material”.
Yes, categories such as “mind”, “matter”, “is”, “ought”, “cause”, and “feel” are powerful concepts that helped our ancestors to better organize their experiences. But this usefulness is just not a strong enough basis on which to draw sweeping conclusions about what must or cannot be true of all of reality, even parts, depths, and possibilities with which our ancestors never came into contact. The categories in your head contain useful hints about what you might expect to see, but they simply cannot tell you what you must or can’t see; for that you have to actually look at the world out there.
On reflection, it seems to me quite possible that some real things are both mind and matter, that some claims are both is and ought, and that real things naturally both cause and feel. And it seems to me that our theory of info, even if tentative, is the best-established theory we have. It suggests an info fundamentalism: all that we know that could have been otherwise, even about ourselves, comes via our causal contact with what is; we have no good reason to think we have some other special ways of knowing.
Yes, that means the existence of a computation is observer-dependent, and, to an observer who cannot harness the computational aspect of the phenomenon, there is no computation.
So if we have a physical system, such as a computer, that implements a causal structure isomorphic (via some mapping) to that of my brain (at some substitution level) over a given period of time (maybe just a couple of seconds), then computationalism says that the activities of this physical system should result in a conscious experience equivalent to my own subjective experience over the simulated period. Same computations performed, same conscious experience.
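To make the phrase “isomorphic causal structure via some mapping” concrete, here is a toy sketch in Python; the states, transitions, and mapping are all invented for illustration, not anyone’s actual proposal. The idea is just that one system implements another when some mapping carries the first system’s state transitions onto the second’s.

```python
# Toy illustration: what it means for one state-transition system to
# implement another "via some mapping". All states here are made up.

# Coarse-grained causal structure of system A (say, a "brain"):
brain_step = {"b0": "b1", "b1": "b2", "b2": "b0"}

# Causal structure of system B (say, a computer running a program):
chip_step = {"s_low": "s_mid", "s_mid": "s_high", "s_high": "s_low"}

# A candidate mapping from brain states to chip states:
mapping = {"b0": "s_low", "b1": "s_mid", "b2": "s_high"}

def implements(step_a, step_b, f):
    """True if f carries A's transitions onto B's: f(step_a(x)) == step_b(f(x))."""
    return all(f[step_a[x]] == step_b[f[x]] for x in step_a)

print(implements(brain_step, chip_step, mapping))  # True: the structures match
```

On this picture, the mapping is doing all the work of saying that the two systems “perform the same computation”, which is exactly where the observer-dependence question comes in.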
Whether there actually is an external observer who knows the mapping between the two physical systems (e.g. the computer and my brain) would seem to be irrelevant to the question of whether there was a conscious experience associated with the computer's activities, right?
Hans Moravec discusses this in my previous link. Here's another good example. And here's an interesting paper by Tim Maudlin highlighting another related problem with computationalism (takes a while to download). And of course Stephen Wolfram's Principle of Computational Equivalence. And you may have seen the debate between David Chalmers and Mark Bishop on this.
It really does look like a Kantian-style antinomy to me. Assuming physicalism/materialism, computationalism seems like the best explanation for conscious experience... BUT computation is ubiquitous, so how do you avoid having to make arbitrary distinctions about which physical systems implement which computations?
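To see why ubiquity bites, here is a second toy sketch of the familiar Putnam/Searle-style worry; again, every state and name is invented for the example. Given enough distinct physical states over time, a gerrymandered mapping can always be cooked up under which an arbitrary system “implements” a given machine.

```python
# Toy illustration of the "ubiquity" worry: a gerrymandered mapping makes
# an arbitrary physical system "run" a given machine. States are invented.

# Transition structure of a simple three-state machine:
fsm_step = {"s_low": "s_mid", "s_mid": "s_high", "s_high": "s_low"}

# An arbitrary physical system passing through six distinct micro-states:
rock_trace = ["r0", "r1", "r2", "r3", "r4", "r5"]
rock_step = dict(zip(rock_trace, rock_trace[1:]))  # r0->r1, r1->r2, ...

# Build a mapping simply by pairing the rock's trace with the machine's run:
fsm_run = ["s_low", "s_mid", "s_high", "s_low", "s_mid", "s_high"]
gerrymandered = dict(zip(rock_trace, fsm_run))

# The same structure-preservation check as before now succeeds trivially:
ok = all(gerrymandered[rock_step[x]] == fsm_step[gerrymandered[x]]
         for x in rock_step)
print(ok)  # True: by this mapping, the rock "ran" the machine for five steps
```

Nothing in the check itself rules this mapping out, which is why some principled restriction on admissible mappings seems needed to avoid the arbitrariness.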
"On reflection, it seems to me quite possible... that some claims are both is and ought...."
Though this is not the same as a claim of knowledge, I should think that, on reflection, Robin would want to explain how such hybrid claims are possible or point to someone who does. The reason this is not 'quite possible' has to do with the structure of the claims themselves: for instance, "I detest non-self-defensive killing" is a statement about my preferences, while "You ought not to murder" is a statement about a moral reality.
As I pointed out earlier, one possible bridge across this gap is promising, since "I promise to return the five dollars you lent me" is a statement of fact and a statement of obligation in one fell swoop. Perhaps this is what Robin is gesturing towards in his "on reflection" because, like minds or qualia, the obligation is supervenient on some state of affairs (a particular configuration of neurons, a particular configuration of phonemes).
But I suspect that Robin is actually a naturalist who takes utility- or preference-maximization to be a meta-ethical obligation in itself, which doesn't really succeed in bridging the is-ought gap but rather ignores it. That's why I inquired.