Tag Archives: Epistemology

Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it were only through some bizarre fluke that S became possible, and a strategy might still improve our chances even while leaving us almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
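To make the structure concrete, here is a back-of-the-envelope bound. The numbers are purely illustrative assumptions, not estimates: if almost no civilizations colonize, then the overall survival rate caps how effective any widely adopted strategy can be for its adopters, while a strategy adopted only through a fluke escapes the cap.

```python
# A back-of-the-envelope bound; all numbers here are illustrative assumptions.
# If adopters of S who succeed go on to colonize, then
#   survival_rate >= adoption_fraction * P(S averts doom | S adopted),
# so the overall survival rate caps how well a commonly adopted strategy can work.

def max_success_probability(survival_rate, adoption_fraction):
    """Upper bound on P(S averts doom for its adopters)."""
    return min(1.0, survival_rate / adoption_fraction)

survival_rate = 1e-6  # assumed: almost all civilizations like ours fail to colonize

# A strategy adopted by many civilizations (e.g. anything prompted by ordinary
# awareness of the Great Filter) can hardly ever work:
print(max_success_probability(survival_rate, adoption_fraction=0.1))   # 1e-05

# A strategy available only through a bizarre fluke escapes the bound:
print(max_success_probability(survival_rate, adoption_fraction=1e-7))  # 1.0 (no constraint)
```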


Responsibility and Clicking

Sometimes when people hear obvious arguments regarding emotive topics, they just tentatively accept the conclusion instead of defending against it until they find some half-satisfactory reason to dismiss it. Eliezer Yudkowsky calls this ‘clicking’, and wants to know what causes it:

My best guess is that clickiness has something to do with failure to compartmentalize – missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

pjeby remarks (with 96 upvotes),

One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science….

Hypothesis: people expect reality to make sense roughly in proportion to how personally responsible for manipulating it they feel. If you think of yourself as in charge of strategically doing something, you are eager to understand how doing that thing works, and automatically expect understanding to be possible. If you are driving a car, you insist the streets fit intuitive geometry. If you are engaging in office politics, you feel there must be some reason Gina said that thing.

If you feel like some vague ‘they’ is responsible for most things, and is meant to give you stuff that you have a right to, and that you are meant to be a good person in the meantime, you won’t automatically try to understand things or think of them as understandable. Modeling how things work isn’t something you are ‘meant’ to do, unless you are some kind of scientist. If you do dabble in that kind of thing, you enjoy the pretty ideas rather than feel any desperate urge for them to be sound or complete. Other people are meant to look after those things.

A common observation is that understanding things properly allows you to manipulate them. I posit that thinking of them as something you might manipulate automatically makes you understand them better. This isn’t particularly new either. It’s related to ‘learned blankness’, and searching vs. chasing, and near mode vs. far mode. The followup point is that chasing the one correct model of reality, which has to make sense, straightforwardly leads to ‘clicking’ when you hear a sensible argument.

According to this hypothesis, the people who feel most personally responsible for everything, a la Methods Harry Potter, would also be the people who most notice whether things make sense. The people who trust doctors and churches less to look after them on the way to their afterlives are the ones who notice that cryonics makes sense.

To see something as manipulable is to see it in the same light that science does, rather than as wallpaper. This is expensive, not just because a detailed model is costly to entertain, but because it interferes with saying socially advantageous things about the wallpaper. So you quite sensibly only do it when you actually want to manipulate a thing and feel potentially empowered to do so, i.e. when you hold yourself responsible for it.


Your existence is informative

Warning: this post is technical.

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as ‘this planet’, you can’t update directly on ‘there is life on this planet’ by excluding worlds where ‘this planet’ doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.

I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because, in a big world of scientists making measurements, at some point somebody will make most of the possible mistaken measurements. So if all you learn when you measure the temperature of a solution to be 15 degrees is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the temperature.

You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total observations less likely to arise even once in a given world (at its limit, this is the suggestion of FNC, full non-indexical conditioning). However, it seems implausible that you should make different inferences from taking a measurement when you can also see a detailed but irrelevant picture at the same time than you would make with limited sensory input. Also, the same problem re-emerges if the universe is supposed to be larger, and given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements about entities which are close to independent of most of the universe.

So I think Bostrom’s case is good. However I’m not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I’d like to make another case against taking ‘this planet has life’ as equivalent evidence to ‘there exists a planet with life’.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because it only excludes worlds where you don’t see such a picture, and those are about as likely to be rainy as the worlds that remain.

Receiving the evidence ‘there exists a planet with life’ means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from ‘this planet has life’. Take any possible world where some other planet has life, and this planet has no life. ‘There exists a planet with life’ doesn’t exclude that world, while ‘this planet has life’ does. Therefore they are different evidence.
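A minimal enumeration makes the difference concrete. The numbers below are illustrative assumptions (two planets, a 0.5 prior on Q, and per-planet life probabilities of 0.9 given Q and 0.1 given not-Q), and ‘this planet’ is treated as a fixed label on planet 0, which is just one choice of mapping:

```python
from itertools import product

# Illustrative assumptions: two planets, prior P(Q) = 0.5, and a per-planet
# probability of life of 0.9 if Q is true and 0.1 if Q is false.
P_Q = 0.5
P_LIFE = {True: 0.9, False: 0.1}
N = 2  # planet 0 plays the role of 'this planet' (a fixed labeling, one choice of mapping)

def world_prob(q, life):
    """Joint probability of hypothesis q and a particular assignment of life to planets."""
    pr = P_Q if q else 1 - P_Q
    for has_life in life:
        pr *= P_LIFE[q] if has_life else 1 - P_LIFE[q]
    return pr

def posterior_q(evidence):
    """P(Q | evidence), where evidence is a predicate on life-assignments."""
    worlds = list(product([True, False], repeat=N))
    num = sum(world_prob(True, life) for life in worlds if evidence(life))
    den = sum(world_prob(q, life) for q in (True, False) for life in worlds if evidence(life))
    return num / den

print(posterior_q(any))                    # P(Q | there exists a planet with life) ~= 0.84
print(posterior_q(lambda life: life[0]))   # P(Q | this planet has life)             = 0.90
```

Under that fixed labeling the two posteriors come apart (roughly 0.84 versus 0.9), which is the sense in which the two pieces of evidence differ; whether a fixed labeling is the right way to treat ‘this planet’ is what the rest of the post takes up.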

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is ‘this planet’ in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where, in every possible world where at least one planet has life, ‘this planet’ corresponds to one of the planets that has life, as in the figure described below.

[Figure: “Which planet is which?” Squares are possible worlds, each with two planets. Pink planets have life, blue do not. ‘This planet’ is defined as the circled one in each case; under this mapping, learning that there is life on this planet is equal to learning that there is life on some planet.]

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far righthand possible world, and none of the other possible worlds. What’s more, since we can change the probability distribution we end up with, just by redefining which planets are ‘the same planet’ across worlds, indexical evidence such as ‘this planet has life’ must be horseshit.

Actually the last paragraph was false. If in every possible world which contains life, you pick one of the planets with life to be ‘this planet’, you can no longer know whether you are on ‘this planet’. From your observations alone, you could be on the other planet, which only has life when both planets do. The one that is not circled in each of the above worlds. Whichever planet you are on, you know that there exists a planet with life. But because there’s some probability of you being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn’t change that.

Perhaps a different definition of ‘this planet’ would get what my associate wants? The problem with the last one was that it no longer necessarily included the planet we are on. So what if we define ‘this planet’ to be the one you are on, plus a life-containing planet in each of the other possible worlds that contain at least one life-containing planet? A strange, half-indexical definition, but why not? One thing remains to be specified – which planet is ‘this’ planet in worlds where you don’t exist? Let’s say it is chosen randomly.

Now is learning that ‘this planet’ has life any different from learning that some planet has life? Yes. Now again there are cases where some planet has life, but it’s not the one you are on. This is because the definition only picks out planets with life across other possible worlds, not this one. In this one, ‘this planet’ refers to the one you are on. If you don’t exist, this planet may not have life. Even if there are other planets that do. So again, ‘this planet has life’ gives more information than ‘there exists a planet with life’.

You either have to accept that someone else might exist when you do not, or you have to define ‘yourself’ as something that always exists, in which case you no longer know whether you are ‘yourself’. Either way, changing definitions doesn’t change the evidence. Observing that you are alive tells you more than learning that ‘someone is alive’.


Ignorance About Intuitions

In common usage, intuitions lead us to believe things without being able to articulate evidence or reasons for those beliefs. Wikipedia.

I’m not offering you a phony seventeen-step “proof that murder is normally wrong.”  Instead, I begin with concrete, specific cases where morality is obvious, and reason from there.  Bryan Caplan.

My debate with Bryan Caplan made me reflect again on our differing attitudes toward intuition.  While we still differ, Bryan has greatly influenced my thinking.

For each of our beliefs, we can ask our mind to give our "reasons" for that belief.  Our minds usually then offer reasons, though we usually don't know how much those reasons have to do with the actual causes of our belief.  We can often test those reasons through criticism, increasing confidence when criticism is less effective than expected, and decreasing confidence when criticism is more effective than expected.

For some of our beliefs, our minds don't offer much in the way of reasons.  We say these beliefs are more "intuitive."  In a hostile debating context this response can seem suspicious; you might expect one side in a debate to refuse to offer reasons just when they had already tested those reasons against criticism, and found them wanting.  That is, we might expect a debater to pretend he didn't have any reasons when he knew his reasons were bad. 

But this doesn't obviously support much distrust of our own intuitive beliefs.  Not only is our internal mind not obviously like a hostile debating context, but we must admit that our minds are built so that the vast majority of our thinking is unconscious.  It is unreasonable to expect our minds to be able to tell us much in the way of reasons for most of our beliefs. 

Continue reading "Ignorance About Intuitions" »


Who Loves Truth Most?

Who loves cars most?  Most people like cars, but the folks most vocal in their enthusiasm for cars are car sellers; they pay millions for ads gushing about how much their engineers love designing cars, their factory workers love building them, etc.  The next most vocal are probably car collectors, tinkerers, and racers; they'll bend your ear off about their car hobby.  Also vocal are folks visibly concerned that the poor don't have enough cars. 

But if you want to find the folks who most love cars for their main purpose, getting folks around in their daily lives, you'll have to filter out the sellers, hobbyists, and do-gooders to find ordinary people who just love their cars.  For the most part, car companies love to sell cars to make cash, car hobbyists love to use cars to show off their personal abilities, and do-gooders use cars to show off their compassion.  By comparison, those who just love to drive from point A to B don't shout much.

Truth loving is similar.  Most folks say they prefer truth, but the folks most vocal about loving "truth" are usually selling something.  For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth.  The next most vocal in their enthusiasm for truth are those who, like car hobbyists, use public demonstrations of truth-finding to show off personal abilities.  Academics, gamers, poker players, and amateur intellectuals of all sorts are proud of the fact that their efforts reveal truth, and they make sure you notice their proficiencies. And do-gooders earnestly talk about the importance of everyone understanding the truth of the uninsured, the illiterate, etc.

Continue reading "Who Loves Truth Most?" »


Share likelihood ratios, not posterior beliefs

When I think of Aumann's agreement theorem, my first reflex is to average.  You think A is 80% likely; my initial impression is that it's 60% likely.  After you and I talk, maybe we both should think 70%.  "Average your starting beliefs", or perhaps "do a weighted average, weighted by expertise" is a common heuristic.

But sometimes, not only is the best combination not the average, it's more extreme than either original belief.

Let's say Jane and James are trying to determine whether a particular coin is fair.  They both think there's an 80% chance the coin is fair.  They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.

Jane flips the coin five times, performs a perfect Bayesian update, and concludes there's a 65% chance the coin is unfair.  James flips the coin five times, performs a perfect Bayesian update, and concludes there's a 39% chance the coin is unfair.  The averaging heuristic would suggest that the correct answer is between 65% and 39%.  But a perfect Bayesian, hearing both Jane's and James's estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair.  [Math footnoted.]
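The footnoted math isn't reproduced here, but the stated numbers can be checked with a short calculation. One pair of observations consistent with those posteriors (an assumption for illustration, since the footnote isn't shown) is Jane seeing five heads and James seeing four heads and one tail:

```python
def posterior_unfair(heads, tails, prior_unfair=0.2):
    """P(coin is unfair | flips), where the unfair coin comes up heads 75% of the time."""
    likelihood_ratio = (0.75 ** heads * 0.25 ** tails) / (0.5 ** (heads + tails))
    posterior_odds = (prior_unfair / (1 - prior_unfair)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

print(round(posterior_unfair(5, 0), 2))  # Jane:  five heads           -> 0.65
print(round(posterior_unfair(4, 1), 2))  # James: four heads, one tail -> 0.39
```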

Perhaps Jane and James are combining this information in the middle of a crowded tavern, with no pen and paper in sight.  Maybe they don't have time or memory enough to tell each other all the coins they observed.  So instead they just tell each other their posterior probabilities – a nice, short summary for a harried rationalist pair.  Perhaps this brevity is why we tend to average posterior beliefs.

However, there is an alternative.  Jane and James can trade likelihood ratios.  Like posterior beliefs, likelihood ratios are a condensed summary; and, unlike posterior beliefs, sharing likelihood ratios actually works.
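Here is a sketch of that alternative, continuing the hypothetical observations above: each person reports only their likelihood ratio, and anyone who knows the shared prior can multiply the ratios together (assuming the two sets of flips are independent evidence) to recover the jointly correct posterior.

```python
def likelihood_ratio(heads, tails):
    """Likelihood ratio of 'unfair (75% heads)' to 'fair', given the observed flips."""
    return (0.75 ** heads * 0.25 ** tails) / (0.5 ** (heads + tails))

def combine(prior_unfair, *ratios):
    """Shared prior plus independent likelihood ratios -> jointly correct posterior."""
    odds = prior_unfair / (1 - prior_unfair)
    for lr in ratios:
        odds *= lr
    return odds / (1 + odds)

jane_reports = likelihood_ratio(5, 0)   # ~7.59, reported instead of "65%"
james_reports = likelihood_ratio(4, 1)  # ~2.53, reported instead of "39%"
print(round(combine(0.2, jane_reports, james_reports), 2))  # -> 0.83, more extreme than either
```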

Continue reading "Share likelihood ratios, not posterior beliefs" »


Moral uncertainty – towards a solution?

It seems people are overconfident about their moral beliefs.  But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues?  If you don't know which moral theory is correct?

It doesn't seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel; because many moral theories state that you should not always maximize expected utility.

Even if we limit consideration to consequentialist theories, it still is hard to see how to combine them in the standard decision theoretic framework.  For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism.  Now an action might add 5 utils to total happiness and decrease average happiness by 2 utils.  (This could happen, e.g. if you create a new happy person that is less happy than the people who already existed.)  Now what do you do, for different values of X?
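For concreteness, here is what naively plugging in would say, with the caveat that this is exactly the questionable move: it assumes the two theories' utils are measured on a common scale, which nothing guarantees. Under that assumption, the action is favored exactly when X exceeds 2/7, about 29%:

```python
# Caveat: this assumes the two theories' utils are measured on a common scale,
# which is exactly the assumption in question.
def naive_expected_value(p_total, total_gain=5.0, average_loss=2.0):
    """Naive intertheoretic expected value of the action, given P(total utilitarianism) = p_total."""
    return p_total * total_gain - (1 - p_total) * average_loss

for x in (0.10, 2 / 7, 0.50, 0.90):
    print(f"X = {x:.2f}: naive expected value = {naive_expected_value(x):+.2f}")
# The sign flips at X = 2/7 (about 0.29): below that, the naive calculation says don't act.
```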

The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc.  We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.

I'm working on a paper on this together with my colleague Toby Ord.  We have some arguments against a few possible "solutions" that we think don't work.  On the positive side we have some tricks that work for a few special cases.  But beyond that, the best we have managed so far is a kind of metaphor, which we don't think is literally and exactly correct, and it is a bit under-determined, but it seems to get things roughly right and it might point in the right direction:

Continue reading "Moral uncertainty – towards a solution?" »


Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?

In the early days of this blog, I would pick fierce arguments with Robin about the no-disagreement hypothesis.  Lately, however, reflection on things like public reason has brought me toward agreement with Robin, or at least moderated my disagreement.  To see why, it’s perhaps useful to take a look at the newspapers:

the pope said the book “explained with great clarity” that “an interreligious dialogue in the strict sense of the word is not possible.” In theological terms, added the pope, “a true dialogue is not possible without putting one’s faith in parentheses.”

What are we to make of a statement like this?

Continue reading "Beliefs Require Reasons, or: Is the Pope Catholic? Should he be?" »


The Problem at the Heart of Pascal’s Wager

It is a most painful position to a conscientious and cultivated mind to be drawn in contrary directions by the two noblest of all objects of pursuit — truth and the general good.  Such a conflict must inevitably produce a growing indifference to one or other of these objects, most probably to both.

- John Stuart Mill, from Utility of Religion

Much electronic ink has been spilled on this blog about Pascal’s wager.  Yet, I don’t think that the central issue, and one that relates directly to the mission of this blog, has been covered.  That issue is this: there’s a difference between the requirements for good (rational, justified) belief and the requirements for good (rational, prudent — not necessarily moral) action.

Presented most directly: good belief is supposed to be truth and evidence-tracking.  It is not supposed to be consequence-tracking.  We call a belief rational to the extent it is (appropriately) influenced by the evidence available to the believer, and thus maximizes our shot at getting the truth.  We call a belief less rational to the extent it is influenced by other factors, including the consequences of holding that belief.  Thus, an atheist who changed his beliefs in response to the threat of torture from the Spanish Inquisition cannot be said to have followed a correct belief-formation process. 

On the other hand, good action is supposed (modulo deontological moral theories) to be consequence-tracking.  The atheist who professes changed beliefs in response to the threat of torture from the Spanish Inquisition can be said to be acting prudently by making such a profession.

A modern gloss on Pascal’s wager might be understood less as an argument for the belief in God than as a challenge to that separation.  If, Modern-Pascal might say, we’re in an epistemic situation such that our evidence is in equipoise (always keeping in mind Daniel Griffin’s apt point that this is the situation presumed by Pascal’s argument), then we ought to take consequences into account in choosing our beliefs. 

There seem to be arguments for and against that position… 

Continue reading "The Problem at the Heart of Pascal’s Wager" »
