Category Archives: Epistemology

Expected Creative Surprises

Imagine that I’m playing chess against a smarter opponent.  If I could predict exactly where my opponent would move on each turn, I would automatically be at least as good a chess player as my opponent.  I could just ask myself where my opponent would move, if they were in my shoes; and then make the same move myself.  (In fact, to predict my opponent’s exact moves, I would need to be superhuman – I would need to predict my opponent’s exact mental processes, including their limitations and their errors.  It would become a problem of psychology, rather than chess.)

So predicting an exact move is not possible, but neither is it true that I have no information about my opponent’s moves.

Personally, I am a very weak chess player – I play an average of maybe two games per year.  But even if I’m playing against former world champion Garry Kasparov, there are certain things I can predict about his next move.  When the game starts, I can guess that the move P-K4 is more likely than P-KN4.  I can guess that if Kasparov has a move which would allow me to checkmate him on my next move, he will not make that move.

Much less reliably, I can guess that Kasparov will not make a move that exposes his queen to my capture – but here, I could be greatly surprised; there could be a rationale for a queen sacrifice which I have not seen.

And finally, of course, I can guess that Kasparov will win the game…

Continue reading "Expected Creative Surprises" »

Dark Side Epistemology

Followup to: Entangled Truths, Contagious Lies

If you once tell a lie, the truth is ever after your enemy.

I have previously spoken of the notion that, the truth being entangled, lies are contagious.  If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach – well, do you know what a geologist knows about rocks?  I don’t.  But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption.  Do you know where the pebble in your driveway really came from?  Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.

What sounds like an arbitrary truth to one mind – one that could easily be replaced by a plausible lie – might be nailed down by a dozen linkages to the eyes of greater knowledge.  To a creationist, the idea that life was shaped by "intelligent design" instead of "natural selection" might sound like a sports team to cheer for.  To a biologist, plausibly arguing that an organism was intelligently designed would require lying about almost every facet of the organism.  To plausibly argue that "humans" were intelligently designed, you’d have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds…

Or you could just lie about evolutionary theory, which is the path taken by most creationists.  Instead of lying about the connected nodes in the network, they lie about the general laws governing the links.

And then to cover that up, they lie about the rules of science – like what it means to call something a "theory", or what it means for a scientist to say that they are not absolutely certain.

Continue reading "Dark Side Epistemology" »

Entangled Truths, Contagious Lies

"One of your very early philosophers came to the conclusion that a fully competent mind, from a study of one fact or artifact belonging to any given universe, could construct or visualize that universe, from the instant of its creation to its ultimate end…"
        — First Lensman

"If any one of you will concentrate upon one single fact, or small object, such as a pebble or the seed of a plant or other creature, for as short a period of time as one hundred of your years, you will begin to perceive its truth."
        — Gray Lensman

I am reasonably sure that a single pebble, taken from a beach of our own Earth, does not specify the continents and countries, politics and people of this Earth.  Other planets in space and time, other Everett branches, would generate the same pebble.  On the other hand, the identity of a single pebble would seem to include our laws of physics.  In that sense the entirety of our Universe – all the Everett branches – would be implied by the pebble.  (If, as seems likely, there are no truly free variables.)

So a single pebble probably does not imply our whole Earth.  But a single pebble implies a very great deal.  From the study of that single pebble you could see the laws of physics and all they imply.  Thinking about those laws of physics, you can see that planets will form, and you can guess that the pebble came from such a planet.  The internal crystals and molecular formations of the pebble formed under gravity, which tells you something about the planet’s mass; the mix of elements in the pebble tells you something about the planet’s formation.

I am not a geologist, so I don’t know to which mysteries geologists are privy.  But I find it very easy to imagine showing a geologist a pebble, and saying, "This pebble came from a beach at Half Moon Bay", and the geologist immediately says, "I’m confused" or even "You liar".  Maybe it’s the wrong kind of rock, or the pebble isn’t worn enough to be from a beach – I don’t know pebbles well enough to guess the linkages and signatures by which I might be caught, which is the point.

Continue reading "Entangled Truths, Contagious Lies" »

Friedman’s “Prediction vs. Explanation”

David D. Friedman asks:

We do ten experiments. A scientist observes the results, constructs a theory consistent with them, and uses it to predict the results of the next ten. We do them and the results fit his predictions. A second scientist now constructs a theory consistent with the results of all twenty experiments.

The two theories give different predictions for the next experiment. Which do we believe? Why?

One of the commenters links to Overcoming Bias, but as of 11PM on Sep 28th, David’s blog’s time, no one has given the exact answer that I would have given.  It’s interesting that a question so basic has received so many different answers.
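
One standard consideration in this neighborhood – offered only as an illustration of why the question is non-trivial, not as the exact answer I have in mind – is that a theory with enough free play can accommodate data it would never have predicted.  A hypothetical simulation of the setup (made-up "law", made-up noise; numpy assumed):

    import numpy as np
    rng = np.random.default_rng(0)

    xs = np.arange(20.0)
    ys = 2.0 * xs + rng.normal(0.0, 1.0, size=20)  # hidden linear law plus noise

    # Scientist 1: fits a line to the first ten experiments,
    # then survives the test of the next ten.
    theory1 = np.polyfit(xs[:10], ys[:10], deg=1)

    # Scientist 2: fits a degree-9 polynomial to all twenty experiments;
    # it matches every data point more closely than theory1 does.
    # (numpy may warn that this fit is poorly conditioned; that is part
    # of the point.)
    theory2 = np.polyfit(xs, ys, deg=9)

    # The twenty-first experiment: the flexible theory, never tested
    # out-of-sample, tends to extrapolate wildly.
    x_new = 21.0
    print(np.polyval(theory1, x_new), np.polyval(theory2, x_new), 2.0 * x_new)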

Horrible LHC Inconsistency

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I’m horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.
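
To see the pump arithmetically (a hypothetical sketch with made-up numbers, not figures from any actual risk portfolio): calibration – "wrong about once per million such statements" – forces the portfolio’s probabilities to sum to about 1, so a randomly drawn risk carries an average probability of about 1/1,000,000, no better than the button.

    # Hypothetical sketch of the money pump; all numbers are illustrative.
    N = 1_000_000
    button_p = 1e-6                # the button's known destruction probability

    # Calibration claim: among a million "X will not destroy the world"
    # statements, about one is wrong; "wrong" means that risk fires,
    # so the portfolio's probabilities must sum to roughly 1.
    expected_wrong = 1.0
    avg_p = expected_wrong / N     # risk of drawing one at random: 1e-6

    print(avg_p >= button_p)       # True: a random draw is no better than
                                   # the button, yet each *particular* risk
                                   # was asserted to be below 1e-6.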

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each failure had a known 50% probability of occurring from natural causes – a quantum coin flip, or some such – then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that’s taking into account my uncertainty about whether the anthropic principle really works that way.)
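
A quick Bayes-rule sketch of that impulse (the prior below is an arbitrary stand-in, not a number I have defended anywhere):

    # Posterior that something is forcing the "coin" to come up heads
    # (anthropic selection, say), after seeing 20 heads in a row.
    prior_filter = 1e-6            # illustrative prior on a forcing mechanism
    p_data_if_filter = 1.0         # forced coin: 20 heads guaranteed
    p_data_if_chance = 0.5 ** 20   # fair coin: about 9.5e-7

    posterior = (prior_filter * p_data_if_filter) / (
        prior_filter * p_data_if_filter
        + (1 - prior_filter) * p_data_if_chance
    )
    print(round(posterior, 3))     # ~0.512: better-than-even odds on heads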

Even having noticed this triple inconsistency, I’m not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down, compared with using the same capital to worry about superhuman intelligence or nanotechnology.)

Optimization

"However many ways there may be of being alive, it is certain that there are vastly more ways of being dead."
        — Richard Dawkins

In the coming days, I expect to be asked:  "Ah, but what do you mean by ‘intelligence’?"  By way of untangling some of my dependency network for future posts, I here summarize some of my notions of "optimization".

Consider a car; say, a Toyota Corolla.  The Corolla is made up of some number of atoms; say, on the rough order of 10^29.  If you consider all possible ways to arrange 10^29 atoms, only an infinitesimally tiny fraction of possible configurations would qualify as a car; if you picked one random configuration per Planck interval, many ages of the universe would pass before you hit on a wheeled wagon, let alone an internal combustion engine.
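
As a sanity check on "many ages of the universe", the arithmetic runs roughly as follows (the state-space count is a deliberately conservative toy figure, not a real enumeration):

    import math

    planck_time = 5.39e-44     # seconds
    universe_age = 4.35e17     # seconds, about 13.8 billion years

    # Random draws available per age of the universe, one per Planck interval:
    log10_draws = math.log10(universe_age / planck_time)

    # Toy lower bound on the space: each of 10^29 atoms in just 10
    # distinguishable positions; the real space is vastly larger.
    log10_configs = 1e29 * math.log10(10)

    print(f"log10(draws per universe age): {log10_draws:.1f}")        # ~60.9
    print(f"log10(configurations), toy lower bound: {log10_configs:.1e}")
    # Ages needed to sample the whole space even once: 10^(1e29 - 61)
    # of them; "many ages of the universe" is a vast understatement.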

Even restricting our attention to running vehicles, there is an astronomically huge design space of possible vehicles that could be composed of the same atoms as the Corolla, and most of them, from the perspective of a human user, won’t work quite as well.  We could take the parts in the Corolla’s air conditioner, and mix them up in thousands of possible configurations; nearly all these configurations would result in a vehicle lower in our preference ordering, still recognizable as a car but lacking a working air conditioner.

So there are many more configurations corresponding to nonvehicles, or vehicles lower in our preference ranking, than vehicles ranked greater than or equal to the Corolla.

Similarly with the problem of planning, which also involves hitting tiny targets in a huge search space.  Consider the number of possible legal chess moves versus the number of winning moves.

Which suggests one theoretical way to measure optimization – to quantify the power of a mind or mindlike process:
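
The full post develops the measure; as a rough sketch of the counting idea behind it (my illustration, with made-up chess numbers):

    import math

    def optimization_bits(num_at_least_as_good: int, num_total: int) -> float:
        """Bits of optimization: -log2 of the fraction of possible outcomes
        ranked at least as high as the outcome actually achieved."""
        return -math.log2(num_at_least_as_good / num_total)

    # Toy chess position: ~35 legal moves, of which (suppose) 2 are winning.
    # Reliably picking a winning move exerts log2(35/2), about 4.1 bits.
    print(optimization_bits(2, 35))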

Continue reading "Optimization" »

Excluding the Supernatural

Followup to: Reductionism, Anthropomorphic Optimism

Occasionally, you hear someone claiming that creationism should not be taught in schools, especially not as a competing hypothesis to evolution, because creationism is a priori and automatically excluded from scientific consideration, in that it invokes the "supernatural".

So… is the idea here, that creationism could be true, but even if it were true, you wouldn’t be allowed to teach it in science class, because science is only about "natural" things?

It seems clear enough that this notion stems from the desire to avoid a confrontation between science and religion.  You don’t want to come right out and say that science doesn’t teach Religious Claim X because X has been tested by the scientific method and found false.  So instead, you can… um… claim that science is excluding hypothesis X a priori.  That way you don’t have to discuss how experiment has falsified X a posteriori.

Of course this plays right into the creationist claim that Intelligent Design isn’t getting a fair shake from science – that science has prejudged the issue in favor of atheism, regardless of the evidence.  If science excluded Intelligent Design a priori, this would be a justified complaint!

But let’s back up a moment.  The one comes to you and says:  "Intelligent Design is excluded from being science a priori, because it is ‘supernatural’, and science only deals in ‘natural’ explanations."

What exactly do they mean, "supernatural"?  Is any explanation invented by someone with the last name "Cohen" a supernatural one?  If we’re going to summarily kick a set of hypotheses out of science, what is it that we’re supposed to exclude?

By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s:  A "supernatural" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

Continue reading "Excluding the Supernatural" »

Unnatural Categories

Followup to: Disguised Queries, Superexponential Conceptspace

If a tree falls in the forest, and no one hears it, does it make a sound?

"Tell me why you want to know," says the rationalist, "and I’ll tell you the answer."  If you want to know whether your seismograph, located nearby, will register an acoustic wave, then the experimental prediction is "Yes"; so, for seismographic purposes, the tree should be considered to make a sound.  If instead you’re asking some question about firing patterns in a human auditory cortex – for whatever reason – then the answer is that no such patterns will be changed when the tree falls.

What is a poison?  Hemlock is a "poison"; so is cyanide; so is viper venom.  Carrots, water, and oxygen are "not poison".  But what determines this classification?  You would be hard pressed, just by looking at hemlock and cyanide and carrots and water, to tell what sort of difference is at work.  You would have to administer the substances to a human – preferably one signed up for cryonics – and see which ones proved fatal.  (And at that, the definition is still subtler than it appears: a ton of carrots, dropped on someone’s head, will also prove fatal. You’re really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)

Where poison-ness is concerned, you are not classifying via a strictly local property of the substance.  You are asking about the consequence when a dose of that substance is applied to a human metabolism.  The local difference between a human who gasps and keels over, versus a human alive and healthy, is more compactly discriminated than any local difference between poison and non-poison.

Continue reading "Unnatural Categories" »

Invisible Frameworks

Followup to: Passing the Recursive Buck, No License To Be Human

Roko has mentioned his "Universal Instrumental Values" several times in his comments.  Roughly, Roko proposes that we ought to adopt as terminal values those things that a supermajority of agents would do instrumentally.  On Roko’s blog he writes:

I’m suggesting that UIV provides the cornerstone for a rather new approach to goal system design. Instead of having a fixed utility function/supergoal, you periodically promote certain instrumental values to terminal values i.e. you promote the UIVs.

Roko thinks his morality is more objective than mine:

It also worries me quite a lot that eliezer’s post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter’s notions. This property qualifies as "moral relativism" in my book, though there is no point in arguing about the meanings of words.

My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X.

Continue reading "Invisible Frameworks" »

No License To Be Human

Followup to: You Provably Can’t Trust Yourself

Yesterday I discussed the difference between:

  • A system that believes – is moved by – any specific chain of deductions from the axioms of Peano Arithmetic.  (PA, Type 1 calculator)
  • A system that believes PA, plus explicitly asserts the general proposition that PA is sound.  (PA+1, meta-1-calculator that calculates the output of Type 1 calculator)
  • A system that believes PA, plus explicitly asserts its own soundness.  (Self-PA, Type 2 calculator)

These systems are formally distinct.  PA+1 can prove things that PA cannot.  Self-PA is inconsistent, and can prove anything via Löb’s Theorem.
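
For reference, the standard statement of Löb’s Theorem (a textbook fact, not anything special to this series), writing □P for "the system proves P":

        If PA ⊢ (□P → P), then PA ⊢ P.

The theorem holds for any sufficiently strong theory about its own provability predicate.  Self-PA asserts □P → P for every sentence P, including a contradiction ⊥; Löb’s Theorem then yields Self-PA ⊢ ⊥, and from a contradiction, anything follows.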

With these distinctions in mind, I hope my intent will be clearer, when I say that although I am human and have a human-ish moral framework, I do not think that the fact of acting in a human-ish way licenses anything.

I am a self-renormalizing moral system, but I do not think there is any general license to be a self-renormalizing moral system.

And while we’re on the subject, I am an epistemologically incoherent creature, trying to modify his ways of thinking in accordance with his current conclusions; but I do not think that reflective coherence implies correctness.

Continue reading "No License To Be Human" »
