Category Archives: Reductionism

BHTV: Jaron Lanier and Yudkowsky

My interview with Jaron Lanier is up.  Reductionism, zombies, and questions that you’re not allowed to answer:

This ended up being more of me interviewing Lanier than a dialogue, I’m afraid.  I was a little too reluctant to interrupt.  But you at least get a chance to see the probes I use, and Lanier’s replies to them.

If there are any BHTV heads out there who read Overcoming Bias and have something they’d like to talk to me about, do let me or our kindly producers know.


Psychic Powers

Followup to: Excluding the Supernatural

Yesterday, I wrote:

If the "boring view" of reality is correct, then you can never predict anything irreducible because you are reducible.  You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.

Benja Fallenstein commented:

I think that while you can in this case never devise an empirical test whose outcome could logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose counterfactual outcome in an irreducible world would make irreducibility subjectively much more probable (given an Occamian prior).

Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer — that performs operations on arbitrary real numbers, for example — but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much…

Well, that’s a very intelligent argument, Benja Fallenstein.  But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point:

Continue reading "Psychic Powers" »


Excluding the Supernatural

Followup to: Reductionism, Anthropomorphic Optimism

Occasionally, you hear someone claiming that creationism should not be taught in schools, especially not as a competing hypothesis to evolution, because creationism is a priori and automatically excluded from scientific consideration, in that it invokes the "supernatural".

So… is the idea here, that creationism could be true, but even if it were true, you wouldn’t be allowed to teach it in science class, because science is only about "natural" things?

It seems clear enough that this notion stems from the desire to avoid a confrontation between science and religion.  You don’t want to come right out and say that science doesn’t teach Religious Claim X because X has been tested by the scientific method and found false.  So instead, you can… um… claim that science is excluding hypothesis X a priori.  That way you don’t have to discuss how experiment has falsified X a posteriori.

Of course this plays right into the creationist claim that Intelligent Design isn’t getting a fair shake from science – that science has prejudged the issue in favor of atheism, regardless of the evidence.  If science excluded Intelligent Design a priori, this would be a justified complaint!

But let’s back up a moment.  The one comes to you and says:  "Intelligent Design is excluded from being science a priori, because it is ‘supernatural’, and science only deals in ‘natural’ explanations."

What exactly do they mean, "supernatural"?  Is any explanation invented by someone with the last name "Cohen" a supernatural one?  If we’re going to summarily kick a set of hypotheses out of science, what is it that we’re supposed to exclude?

By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s:  A "supernatural" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

Continue reading "Excluding the Supernatural" »


Points of Departure

Followup to: Anthropomorphic Optimism

If you’ve watched Hollywood sci-fi involving supposed robots, androids, or AIs, then you’ve seen AIs that are depicted as "emotionless".  In the olden days this was done by having the AI speak in a monotone pitch – while perfectly stressing the syllables, of course.  (I could similarly go on about how AIs that disastrously misinterpret their mission instructions, never seem to need help parsing spoken English.)  You can also show that an AI is "emotionless" by having it notice an emotion with a blatant somatic effect, like tears or laughter, and ask what it means (though of course the AI never asks about sweat or coughing).

If you watch enough Hollywood sci-fi, you’ll run into all of the following situations occurring with supposedly "emotionless" AIs:

  1. An AI that malfunctions or otherwise turns evil, instantly acquires all of the negative human emotions – it hates, it wants revenge, and feels the need to make self-justifying speeches.
  2. Conversely, an AI that turns to the Light Side, gradually acquires a full complement of human emotions.
  3. An "emotionless" AI suddenly exhibits human emotion when under exceptional stress; e.g. an AI that displays no reaction to thousands of deaths, suddenly showing remorse upon killing its creator.
  4. An AI begins to exhibit signs of human emotion, and refuses to admit it.

Now, why might a Hollywood scriptwriter make those particular mistakes?

Continue reading "Points of Departure" »


Against Modal Logics

Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology

When I try to hit a reduction problem, what usually happens is that I "bounce" – that’s what I call it.  There’s an almost tangible feel to the failure, once you abstract and generalize and recognize it.  Looking back, it seems that I managed to say most of what I had in mind for today’s post, in "Grasping Slippery Things".  The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f."  Where realizable contains the full mystery of "possible" – but you’ve made it into a basic symbol, and added some other symbols: the illusion of formality.

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray – so far astray that I simply can’t make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.

The proliferation of modal logics in philosophy is a good illustration of one major reason:  Modern philosophy doesn’t enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon’s Law, are not very good.  Which means that they’re not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms.  Reductionism is, in modern times, an unusual talent.  Insights on the order of Pearl et al.’s reduction of causality or Julian Barbour’s reduction of time are rare.

So what these philosophers do instead, is "bounce" off the problem into a new modal logic:  A logic with symbols that embody the mysterious, opaque, unopened black box.  A logic with primitives like "possible" or "necessary", to mark the places where the philosopher’s brain makes an internal function call to cognitive algorithms as yet unknown.

And then they publish it and say, "Look at how precisely I have defined my language!"

Continue reading "Against Modal Logics" »


Dreams of AI Design

Followup to: Anthropomorphic Optimism, Three Fallacies of Teleology

After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right?  That’s what quite a few AGI wannabes (people who think they’ve got what it takes to program an Artificial General Intelligence) seem to have concluded.  This, unfortunately, is wrong.

Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

You might want to contemplate that sentence for a while.  It’s important.

Living inside a human mind doesn’t teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain.  So far beneath your sight that there is no introspective sense that the black box is there – no internal sensory event marking that the work has been delegated.

Did Aristotle realize that when he talked about the telos, the final cause of events, that he was delegating predictive labor to his brain’s complicated planning mechanisms – asking, "What would this object do, if it could make plans?"  I rather doubt it.  Aristotle thought the brain was an organ for cooling the blood – which he did think was important:  Humans, thanks to their larger brains, were more calm and contemplative.

So there’s an AI design for you!  We just need to cool down the computer a lot, so it will be more calm and contemplative, and won’t rush headlong into doing stupid things like modern computers.

Continue reading "Dreams of AI Design" »


Three Fallacies of Teleology

Followup to: Anthropomorphic Optimism

Aristotle distinguished between four senses of the Greek word aition, which in English is translated as "cause", though Wikipedia suggests that a better translation is "maker".  Aristotle’s theory of the Four Causes, then, might be better translated as the Four Makers.  These were his four senses of aitia:  The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze.  The formal aition is the substance’s form, its statue-shaped-ness.  The efficient aition best translates as the English word "cause"; we would think of the artisan carving the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.

Though Aristotle considered knowledge of all four aitia as necessary, he regarded knowledge of the telos as the knowledge of highest order.  In this, Aristotle followed in the path of Plato, who had earlier written:

Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause.  It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it.  That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid.  As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force…

Continue reading "Three Fallacies of Teleology" »


When Anthropomorphism Became Stupid

Followup to: Humans in Funny Suits, Brain Breakthrough

It turns out that most things in the universe don’t have minds.

This statement would have provoked incredulity among many earlier cultures.  "Animism" is the usual term.  They thought that trees, rocks, streams, and hills all had spirits because, hey, why not?

I mean, those lumps of flesh known as "humans" contain thoughts, so why shouldn’t the lumps of wood known as "trees"?

My muscles move at my will, and water flows through a river.  Who’s to say that the river doesn’t have a will to move the water?  The river overflows its banks, and floods my tribe’s gathering-place – why not think that the river was angry, since it moved its parts to hurt us? It’s what we would think when someone’s fist hit our nose.

There is no obvious reason – no reason obvious to a hunter-gatherer – why this cannot be so.  It only seems like a stupid mistake if you confuse weirdness with stupidity.  Naturally the belief that rivers have animating spirits seems "weird" to us, since it is not a belief of our tribe.  But there is nothing obviously stupid about thinking that great lumps of moving water have spirits, just like our own lumps of moving flesh.

If the idea were obviously stupid, no one would have believed it.  Just like, for the longest time, nobody believed in the obviously stupid idea that the Earth moves while seeming motionless.

Is it obvious that trees can’t think?  Trees, let us not forget, are in fact our distant cousins.  Go far enough back, and you have a common ancestor with your fern.  If lumps of flesh can think, why not lumps of wood?

Continue reading "When Anthropomorphism Became Stupid" »


Abstracted Idealized Dynamics

Followup to: Morality as Fixed Computation

I keep trying to describe morality as a "computation", but people don’t stand up and say "Aha!"

Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say "computation", some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.

Maybe I should have said that morality is an abstracted idealized dynamic.  This might not have meant anything to start with, but at least it wouldn’t sound like I was describing Microsoft Word.

How, oh how, am I to describe the awesome import of this concept, "computation"?

Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word – namely, morality.

Consider certain features we might wish to ascribe to that-which-we-call "morality", or "should" or "right" or "good":

• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.

Someone sees a slave being whipped, and it doesn’t occur to them right away that slavery is wrong.  But they go home and think about it, and imagine themselves in the slave’s place, and finally think, "No."

Can you think of anywhere else that something like this happens?

Continue reading "Abstracted Idealized Dynamics" »


Zombies: The Movie

FADE IN around a serious-looking group of uniformed military officers.  At the head of the table, a senior, heavy-set man, GENERAL FRED, speaks.

GENERAL FRED:  The reports are confirmed.  New York has been overrun… by zombies.

COLONEL TODD:  Again?  But we just had a zombie invasion 28 days ago!

GENERAL FRED:  These zombies… are different.  They’re… philosophical zombies.

CAPTAIN MUDD:  Are they filled with rage, causing them to bite people?

COLONEL TODD:  Do they lose all capacity for reason?

GENERAL FRED:  No.  They behave… exactly like we do… except that they’re not conscious.

(Silence grips the table.)


Continue reading "Zombies: The Movie" »
