
Philosophy Vs. Duck Tests

Philosophers, and intellectuals more broadly, love to point out how things might be more complex than they seem. They identify more and subtler distinctions, suggest more complex dependencies, and warn against relying on “shallow” advisors less “deep” than they. Subtlety and complexity is basically what they have to sell.

I’ve often heard people resist such sales pressure by saying things like “if it looks like a duck, walks like a duck, and quacks like a duck, it’s a duck.” Instead of using complex analysis and concepts to infer and apply deep structures, they prefer to use a “duck test” and judge by adding up many weak surface clues. When a deep analysis disagrees with a shallow appearance, they usually prefer to go shallow.
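Such a duck test amounts to a simple clue-summing rule, which can be sketched in a few lines. Everything below — the clue names, weights, and threshold — is invented for illustration; the point is just that many weak clues are added up, with no single clue decisive.

```python
# A "duck test" as code: classify by adding up many weak surface clues.
# All clue names, weights, and the threshold below are invented for
# illustration; no single clue is decisive on its own.

CLUES = {
    "looks_like_duck": 0.3,
    "walks_like_duck": 0.3,
    "quacks_like_duck": 0.3,
    "swims": 0.2,
    "eats_grain": 0.1,
}

def duck_score(observations, clues=CLUES):
    """Sum the weights of every weak clue that was observed."""
    return sum(weight for clue, weight in clues.items() if clue in observations)

def is_duck(observations, threshold=0.7):
    """Shallow judgment: enough weak clues add up to 'duck'."""
    return duck_score(observations) >= threshold

print(is_duck({"looks_like_duck", "walks_like_duck", "quacks_like_duck"}))  # True
print(is_duck({"eats_grain"}))                                              # False
```

Note that nothing here models any deep structure of duckhood; adding one more weak clue, or dropping one, shifts the score only slightly, which is what makes the test robust to any single misleading appearance.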

Interestingly, this whole duck example came from philosophers trying to warn against judging from surface appearances:

In 1738 a French automaton maker fooled the world into thinking he’d replicated life, and accidentally created a flippant philosophical conundrum we are still using. You’ve heard the phrase “if it looks like a duck, walks like a duck and quacks like a duck, then it’s a duck” haven’t you? .. People were saying this .. in the 18th century about a certain mechanical duck. And they were being very serious.

That mechanical duck was built to astound audiences, by quacking, moving it’s head to eat some grain which the mechanical marvel seemingly digested and then after a short time, the machine would round things off by plopping out a dollop of, what has been described as, foul smelling sh*t. ..The “looks like a duck” phrase (or Duck Test as some call it) is now thought of as a mildly amusing philosophical argument but back in the 18th century would certainly have been more akin to the way the Turing Test challenges artificial intelligence systems to fool the assessor into believing the system is a real human and not a computer. (more)

Philosophers had lectured saying, “See, you wouldn’t want to be fooled by surface appearances to call this automaton a duck would you?” But then others defiantly embraced this example, saying, “We plan to do exactly that; if it appears in enough ways to be a duck, that’s good enough for us.”

Classifying and judging minds are philosophy of mind topics full of intellectuals offering deep analysis. Imagine you have a wide range of creatures, and of objects with various similarities to creatures. For each one you want to estimate many capacities. Does it have a distinct personality? Can it plan, remember, communicate, think, desire, exercise self-control, or judge right and wrong? Does it get embarrassed, proud, or enraged? Can it guess how others feel? Does it feel fear, hunger, joy, pain, or pleasure? Is it conscious? You also want to judge: if you had two such characters, which one would you try harder to make happy, save from destruction, avoid harming, or punish for causing a death? And which is more likely to have a soul?

This is a huge range of topics, on which learned intellectuals have written many thousands of books and articles, arguing for the relevance of a great many distinctions, to be taken into account in many subtle ways. But if ordinary people use simple-minded duck tests on such topics, they’d tend to judge each one by simply adding up many weak clues. And if people were especially simple-minded, they might even judge them all using roughly the same set of weak clues, even though some of the above capacities (e.g., plan, remember) describe many machines even today, while others (e.g., conscious, soul) are especially abstract and contentious.

Amazingly, as I posted a few days ago, this extreme scenario looks pretty close to the truth! At least as a first approximation. When 2400 people compared 13 diverse characters on the above 18 capacities and 6 judgements, one factor explained 88% of the total variance in capacities, while a second factor explained 8%, leaving only 4% unexplained. The study found some weak correlations with political and religious attitudes, but otherwise its main result is that survey responses on these many mind topics are mostly made using the same simple Duck test (plus noise of course).
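For readers unfamiliar with how a claim like “one factor explained 88% of the variance” is computed, here is a small sketch on synthetic data. The shape (13 characters, 18 capacities) mirrors the survey, but the data and the single-latent-factor assumption are invented for illustration, and principal-component analysis stands in for the paper’s factor analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 13 characters x 18 capacities, generated so that one
# latent "overall mindedness" number per character drives every rating,
# plus a little noise.  All numbers here are invented for illustration.
mind_level = rng.uniform(0.0, 1.0, size=13)    # one latent factor per character
loadings = rng.uniform(0.5, 1.0, size=18)      # how each capacity tracks it
ratings = np.outer(mind_level, loadings) + 0.05 * rng.normal(size=(13, 18))

# Principal-component analysis of the centered ratings; each component's
# share of the total variance is analogous to a "factor" in the survey.
centered = ratings - ratings.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
variance_share = singular_values ** 2 / np.sum(singular_values ** 2)

# With data generated this way, the first component carries nearly all
# of the variance, as a single shared duck test would predict.
print(f"first factor: {variance_share[0]:.0%} of variance")
```

If instead ratings were driven by several unrelated considerations, the variance would spread across several components; the survey’s 88%/8% split is what a mostly one-clue-sum world looks like.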

Now this study is hardly the last word. I’d love to see a survey with even more characters, and the judgements should be included in the factor analysis. And we also know that people are capable of “dehumanization”, i.e., using motivated reasoning to give lower scores to humans when they want to avoid blame for mistreatment.

But if these basic results continue to hold, they have big implications for how most people will treat various possible future creatures, including aliens, chimeras, alters, robots, AI, and ems. We don’t need a subtle analysis to predict how people will treat such things. We need only predict a wide range of apparent capacities for such creatures, and perhaps also a degree of motivated reasoning. The more such capacities they have, at higher levels, and the weaker the motivated reasoning, the higher people will rate such creatures. And when people are motivated to rate creatures lower, they will do so by rating them lower on many capacities at once, as slave-owners have often done with slaves.

If you believe that such ratings will often be influenced by whether creatures are made out of silicon or biochemicals, or whether they are natural or artificial, then you have to believe either that such factors will work only indirectly, via a broad influence over all of these capacities and judgements together, or that the factor analysis of a bigger survey will find big factors associated with such things. I’ve offered to bet that a new bigger survey will not find such big factors.

Bryan Caplan says that he disagrees about how future ems will be treated, but calls survey factor analyses irrelevant, and so won’t bet on them. He is instead very impressed that subjects gave a low rating on the main factor to a character called “robot” in the paper, described this way to survey participants:

Kismet is part of a new class of “sociable” robots that can engage people in natural interaction. To do this, Kismet perceives a variety of natural social signals from sound and sight, and delivers his own signals back to the human partner through gaze direction, facial expression, body posture, and vocal babbles.

Apparently Bryan is confident that ems and all future artificial creatures must be rated as lowly as this character, so he offers to bet on how a survey will rank an “em” character. Alas, it is unlikely that the next few surveys will include such a character, in part because it is a pretty unfamiliar concept to most people.

I just don’t see “robot” as a useful category here, such that we should expect most everything given this label to rate the same. The survey above suggests that what matters is how creatures are rated on many specific capacities. I expect that most people correctly estimated, from their experience with the many other machines they’ve seen and heard of, that Kismet is in fact pretty bad at most of the listed capacities. In contrast, when most people are presented with fictional “robots” portrayed as being good at many of these capacities, they consistently rate those “robots” relatively high on most other capacities and judgements. I’d bet that a survey would show this too, if such characters were included.

So while people are often impressed with the subtle analyses of intellectuals, they still usually judge creatures’ mental capacities via a simple duck test. If it quacks like a mind, it’s a mind.


Theories vs. Metaphors

I have said things like:

We should expect brain emulation to be feasible because brains function to process signals, and the decoupling of signal dimensions from other system dimensions is central to achieving the function of a signal processor.

Bryan Caplan says I make:

the Metaphorical Fallacy. Its general form:

1. X is metaphorically Y.

2. Y is literally Z.

3. Therefore, X is literally Z.

…. To take a not-so-random example, … Robin says many crazy things … like:

1. The human mind is a computer.

2. Computers’ data can be uploaded to another computer.

3. Therefore, the human mind can be uploaded to a computer.

No, I’m pretty sure that I’m saying that your mind is literally a signal processing system. Not just metaphorically; literally. That is, while minds have a great many features, a powerful theory, in fact our standard theory, to explain the mix of features we see associated with minds, is that minds fundamentally function to process signals, and that brains are the physical devices that achieve that function. And our standard theories of how physical devices achieve signal processing functions predict that we can replicate, or “emulate”, the same signal processing functions in quite different physical devices. In fact, such theories tell us how to replicate such functions in other devices.
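The substrate-independence claim can be illustrated with a toy signal processor: the same filtering function realized by two deliberately different implementations, standing in for different physical devices. The filter here (a 3-tap moving average) is an arbitrary choice for illustration; the point is that the function is fixed by its input-to-output mapping, not by the machinery computing it.

```python
# Two deliberately different "devices" computing the same signal-processing
# function: a 3-tap moving-average filter.  The filter is an arbitrary
# stand-in; what matters is that the function is defined by its
# input-to-output mapping, not by how any one device realizes it.

def filter_device_a(signal):
    """Device A: slicing and the built-in sum."""
    return [sum(signal[i:i + 3]) / 3 for i in range(len(signal) - 2)]

def filter_device_b(signal):
    """Device B: an explicit convolution loop over a kernel — a very
    different implementation of the very same function."""
    kernel = [1 / 3, 1 / 3, 1 / 3]
    out = []
    for i in range(len(signal) - 2):
        acc = 0.0
        for j, weight in enumerate(kernel):
            acc += weight * signal[i + j]
        out.append(acc)
    return out

signal = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
a, b = filter_device_a(signal), filter_device_b(signal)
assert all(abs(x - y) < 1e-12 for x, y in zip(a, b))
print("same outputs from different devices:", a)
```

Emulating a brain is of course vastly harder than swapping moving-average implementations, but the logic of the claim is the same: once the signal-processing function is characterized, any device that computes that mapping realizes it.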

Of course you can, like Bryan, disagree with our standard theory that the main function of minds is to process signals. Or you could disagree with our standard theories of how that function is achieved by physical devices. Or you could note that since the brain is a signal processor of unparalleled complexity, we are a long way away from knowing how to replicate it in other physical hardware.

But given how rich and well developed are our standard theories of minds as signal processors, signal processors in general, and the implementation of signal processors in physical hardware, it hardly seems fair to reject my conclusion based on a mere “metaphor.”
