
Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow, and that doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually realists. If you really believed that there were objective rules that we should follow, that would make it crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best ones available, as seems likely, then the chances of stumbling on the right ones by accident are very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that any of the rules we ought to follow will be consistent with one another, this is a disaster, and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility that you are mistaken and a different set of rules is in fact correct. Unless you are extremely confident in the rules you consider most likely to be correct, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who aren’t alive now do in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
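That expected-value arithmetic is simple enough to lay out explicitly. A minimal sketch in Python, using only the illustrative figures above:

    # All numbers are the illustrative assumptions from the paragraph above.
    future_people = 1_000_000_000_000      # a 'modest' 1 trillion future (post-)humans
    p_future_people_count = 0.10           # credence that not-yet-existing people matter morally
    present_people = 7_000_000_000

    expected_future_people_at_stake = p_future_people_count * future_people
    print(expected_future_people_at_stake)                    # 100,000,000,000
    print(expected_future_people_at_stake / present_people)   # ~14x the people alive today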

This demonstrates that a moral realist with some doubt they have picked the right rules will want to a) hedge their bets, and b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is hardly a fringe concern, because the quality of the available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options,

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 to anyone who can a) change my mind on whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!


Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. That burden might be met if S only became possible through some bizarre fluke, and a strategy might still improve our chances even while leaving us almost certain to fail. But common features, such as mere awareness of the Great Filter, would not suffice to avoid future filters.


Paternalism can be kind, just not to present-you

You may want to file this under ‘incredibly obvious’, but I haven’t seen it noted elsewhere.

Liberals and libertarians have an instinctive aversion to paternalism. Their key objection is: how can anyone else be expected to know what is good for you, better than you do?

This is usually true, but it neglects a coherent justification for many paternalistic policies that doesn’t require that anyone know more than you. The paternalist could be fine with their policy being bad for ‘present-you’ if it benefits ‘future-you’ even more. But don’t you care about your future self’s welfare too? Sure, but maybe not as much as they do, relative to your current welfare!

Confusion about the intent of the paternalistic policy is generated by the fact that it is natural to say “this policy exists to help you”, without noting which instance of ‘you’ it is meant to help – you now, you tomorrow, you in ten years’ time, and so on.

While this justification would make sense especially often where people engaged in ‘hyperbolic discounting’ and as a result were ‘time inconsistent’, it does not rely on that. All it requires is that,

  • there are things you could do now that would benefit your future self, at the expense of your present self, and;
  • the paternalist’s ‘altruistic’ discount rate for the target’s welfare is lower than the discount rate the target has for their own welfare.

The first is certainly true, while the latter is often true in my experience.
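A minimal numerical sketch of that second condition, with the utilities and discount rates invented purely for illustration:

    # Hypothetical trade-off: a policy that costs present-you 1 unit of welfare now
    # and pays future-you 3 units in 30 years. All numbers are made up.
    cost_now, benefit_later, years = 1.0, 3.0, 30

    def present_value(annual_discount_rate):
        """Value of the trade-off, discounted back to today at the given rate."""
        return -cost_now + benefit_later / (1 + annual_discount_rate) ** years

    print(present_value(0.05))   # target's own impatient rate: about -0.31, so they refuse
    print(present_value(0.00))   # paternalist's 'altruistic' rate of zero: +2.0, so they intervene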

In the near-far construal theory often used on this blog, us-now and immediate gratification are both ‘near’, while ourselves in the future, other people, and other people in the future are all ‘far’. In far mode we will want to encourage other folks to act toward their future selves in ways our far view thinks they ought to – usually patiently.

More intuitively: it’s easier to stick to a commitment to help a friend stay on their diet than it is to stick to our own diet. We don’t enjoy seeing our friends go without ice cream, but we like seeing them reach their idealised goals – and ours for them – even more. As La Rochefoucauld observed, “We all have strength enough to bear the misfortunes of others.” You could add that we all have strength enough to bear the delayed gratification of others.

If a paternalist really does have a lower discount rate in this way, they could justify all kinds of interventions that benefit someone’s future self: preventing suicide, reducing smoking, encouraging exercise, requiring people to save for emergencies and retirement, and so on. I often find these policies distasteful, but as I support a moral discount rate of zero (on valuable experiences), and almost all people are impatient in their own lives, I can’t justify a blanket opposition. We don’t give people unrestricted freedom to harm their children, or strangers, just because they don’t care much about them. Why then should we give a young woman unrestricted freedom to hurt her far-off 60-year-old self, just because they happen to pass through the same body at different points in time? I care about the 60-year-old too, perhaps even more than that young woman does, relative to herself.


If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either

A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.

While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than actually having a sound reason to doubt it.

A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.

While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or the probability of changing the outcome too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss it out of hand.

What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is it’s only an order of magnitude or two lower than a campaigner swinging an election outcome. You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.
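To make the comparison concrete, here is a rough sketch in Python. Every figure is a placeholder guess of the kind discussed above, not an estimate from any source:

    # All figures are illustrative placeholders, chosen to match the guesses in the text.
    p_swing_election = 1e-6        # within the 1-in-100,000 to 1-in-10,000,000 range above
    value_of_swinging = 1.0        # normalise the payoff of swinging an election to 1

    p_avert_catastrophe = 1e-8     # 'an order of magnitude or two lower'
    value_of_averting = 1e6        # 'many, many orders of magnitude larger', in the same units

    ev_campaigning = p_swing_election * value_of_swinging
    ev_risk_reduction = p_avert_catastrophe * value_of_averting
    print(ev_campaigning, ev_risk_reduction)   # 1e-06 vs 0.01: higher expected value, higher variance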

To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example, and the existential risk reduction example, should we stop trusting expected value? I don’t see one.

Building in some arbitrary low-probability, high-payoff ‘mugging prevention’ threshold would lead to the peculiar possibility that, for any given project, an individual with probability x of a giant payout could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff, could be advised to go for it. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.
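A toy illustration of that inconsistency, with a made-up cutoff value:

    # A hypothetical 'mugging prevention' rule: ignore any option whose success
    # probability falls below some cutoff, regardless of payoff. The cutoff is invented.
    CUTOFF = 1e-7

    def advisable(p_success):
        return p_success >= CUTOFF

    x = 1e-8                    # one person's chance of the giant payoff
    print(advisable(x))         # False: the individual is told to walk away
    print(advisable(100 * x))   # True: a group of 100 facing ~100*x is told to go ahead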


Significance and motivation

Over at philosophical disquisitions, John Danaher is discussing Aaron Smuts’ response to Bernard Williams’ argument that immortality would be tedious. Smuts’ thesis, in Danaher’s words, is a familiar one:

Immortality would lead to a general motivational collapse because it would sap all our decisions of significance.

This is interestingly at odds with my observations, which suggest that people are much more motivated to do things that seem unimportant, and have to constantly press themselves just to do important things once in a while. Most people have arbitrary energy for reading unimportant online articles, playing computer games, and talking aimlessly. Important articles, serious decisions, and momentous conversations get put off.

Unsurprisingly then, people also seem to take more joy from events that appear insignificant in the long run. Actually, I thought this was the whole point of such events. For instance, people seem to quite like cuddling and lazing in the sun and eating and bathing and watching movies. If one had any capacity to get bored of these things, I predict it would happen within the first century. While significant events also bring joy, they seem to involve a lot more drudgery in the preceding build-up.

So it seems to me that living forever could only take the pressure off and make people more motivated and happy – except inasmuch as the argument is faulty in other ways, e.g. that impending death is not the only time constraint on activities.

Have I missed something?


No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians would have to accept the ‘repugnant conclusion’: that a very large number of individuals experiencing lives barely worth living could be much better than a small number of people experiencing joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.
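A toy comparison, with welfare numbers invented purely for illustration, of how the two views pull apart:

    # Made-up welfare levels; each list entry is one person's lifetime welfare.
    joyous_world    = [90] * 10        # 10 people with joyous lives
    repugnant_world = [1] * 10_000     # 10,000 people with lives barely worth living

    def total(world):
        return sum(world)

    def average(world):
        return sum(world) / len(world)

    print(total(joyous_world), total(repugnant_world))      # 900 vs 10000: the total view prefers the huge, drab world
    print(average(joyous_world), average(repugnant_world))  # 90.0 vs 1.0: the average view prefers the joyous world

    # Mere addition: adding one more joyous-but-slightly-below-average person (welfare 80)
    # raises total welfare yet lowers the average, so the average view counts it as bad.
    print(average(joyous_world + [80]) < average(joyous_world))   # True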

Derek Parfit, pioneer of these ethical dilemmas and author of the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that ‘What should we do about future generations? Impossibility of Parfit’s Theory X’ by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist.


Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely, or easier to appease, than others – such as internal consistency or historical evidence – would ensure you were no longer indifferent between them.
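A crude way to see how the tie breaks, sketched in Python. The lexicographic ranking below is an assumed stand-in for ‘some part of your credence prefers a higher probability of infinite utility’, and all the numbers are invented:

    import math

    # Naive IEEE arithmetic reproduces the first premise: a*inf == b*inf == inf.
    print(0.9 * math.inf == 1e-9 * math.inf)   # True

    # But if any weight is given to 'a higher chance of infinite utility is better',
    # ranking first by that chance and only then by ordinary expected value breaks the tie.
    def rank_key(option):
        p_infinite, finite_ev = option
        return (p_infinite, finite_ev)

    believer = (0.10, -5.0)   # invented: some chance of infinite payoff, at a finite cost
    atheist  = (0.01, +5.0)   # invented: smaller chance of infinite payoff via a 'professor God'

    print(max([believer, atheist], key=rank_key))   # (0.1, -5.0): the higher chance of infinity wins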

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would then be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!


Does life flow towards flow?

Robin recently described how human brain ‘uploads’, even if forced to work hard to make ends meet, might nonetheless be happy and satisfied with their lives. Some humans naturally love their work, and if they are the ones who get copied, the happiness of emulations could be very high. Of course in Robin’s Malthusian upload scenario, evolutionary pressures towards high productivity are very strong, and so the mere fact that some people really enjoy work doesn’t mean that they will be the ones who get copied billions of times. The workaholics will only inherit the Earth if they are the best employees money can buy.

The broader question of whether creatures that are good at surviving, producing and then reproducing tend towards joy or misery is a crucial one. It helps answer whether it is altruistic to maintain populations of wild animals into the future, or an act of mercy to shrink their habitats. Even more importantly, it is the key to whether it is extremely kind or extremely cruel for humans to engage in panspermia and spread Malthusian life across the universe as soon as possible.

There is an abundance of evidence all around us in the welfare of humans and other animals that have to strive to survive in the environments they are adapted to, but no consensus on what that evidence shows. It is hard enough to tell whether another human has a quality of life better than no life at all, let alone determine the same for say, an octopus.

One of the few pieces of evidence I find compelling comes from Mihály Csíkszentmihályi’s research into the experience he calls ‘flow’. His work suggests that humans are most productive, and also most satisfied, when they are totally absorbed in a clear but challenging task which they are capable of completing. The conditions suggested as being necessary to achieve ‘flow’ are:

  1. “One must be involved in an activity with a clear set of goals. This adds direction and structure to the task.
  2. One must have a good balance between the perceived challenges of the task at hand and his or her own perceived skills. One must have confidence that he or she is capable to do the task at hand.
  3. The task at hand must have clear and immediate feedback. This helps the person negotiate any changing demands and allows him or her to adjust his or her performance to maintain the flow state.”

Most work doesn’t meet these criteria and so ‘flow’ is not all that common, but it is amongst the best states of mind a human can hope for.

Some people are much more inclined to enter flow than others and if Csíkszentmihályi’s book is to be believed, they are ideal employees – highly talented, motivated and suited to their tasks. If this is the case, people predisposed to experience flow would be the most popular minds to copy as emulations and in the immediate term the flow-inspired workaholics would indeed come to dominate the Earth.

Of course, it could turn out that in the long run, once enough time has passed for evolution to shed humanity’s baggage, the creatures that most effectively do the forms of work that exist in the future will find life unpleasant. But our evolved capacity for flow in tasks that we are well suited for gives us a reason to hope that will not be the case. If it turns out that flow is a common experience for traditional hunter-gatherers then that would make me even more optimistic. And more optimistic again if we can find evidence for a similar experience in other species.


Your existence is informative

Warning: this post is technical.

Suppose you know that there are a certain number of planets, N. You are unsure about the truth of a statement Q. If Q is true, you put a high probability on life forming on any given arbitrary planet. If Q is false, you put a low probability on this. You have a prior probability for Q. So far you have not taken into account your observation that the planet you are on has life. How do you update on this evidence, to get a posterior probability for Q? Since your model just has a number of planets in it, with none labeled as ‘this planet’, you can’t update directly on ‘there is life on this planet’, by excluding worlds where ‘this planet’ doesn’t have life. And you can’t necessarily treat ‘this’ as an arbitrary planet, since you wouldn’t have seen it if it didn’t have life.

I have an ongoing disagreement with an associate who suggests that you should take ‘this planet has life’ into account by conditioning on ‘there exists a planet with life’. That is,

P(Q|there is life on this planet) = P(Q|there exists a planet with life).

Here I shall explain my disagreement.

Nick Bostrom argues persuasively that much science would be impossible if we treated ‘I observe X’ as ‘someone observes X’. This is basically because in a big world of scientists making measurements, at some point somebody will make almost any given measurement, including mistaken ones. So if all you know when you measure the temperature of a solution to be 15 degrees is that you are not in a world where nobody ever measures its temperature to be 15 degrees, this doesn’t tell you much about the temperature.

You can add other apparently irrelevant observations you make at the same time – e.g. that the table is blue chipboard – in order to make your total set of observations less likely to arise even once in a given world (at its limit, this is the suggestion of FNC, full non-indexical conditioning). However, it seems implausible that you should draw different inferences from a measurement when you can also see a detailed but irrelevant picture at the same time than when you make it with limited sensory input. Also, the same problem re-emerges if the universe is supposed to be larger; given that the universe is thought to be very, very large, this is a problem. Not to mention, it seems implausible that the size of the universe should greatly affect probabilistic judgements made about entities which are close to independent of most of the universe.

So I think Bostrom’s case is good. However I’m not completely comfortable arguing from the acceptability of something that we do (science) back to the truth of the principles that justify it. So I’d like to make another case against taking ‘this planet has life’ as equivalent evidence to ‘there exists a planet with life’.

Evidence is what excludes possibilities. Seeing the sun shining is evidence against rain, because it excludes the possible worlds where the sky is grey, which include most of those where it is raining. Seeing a picture of the sun shining is not much evidence against rain, because it excludes worlds where you don’t see such a picture, which are about as likely to be rainy or sunny as those that remain are.

Receiving the evidence ‘there exists a planet with life’ means excluding all worlds where all planets are lifeless, and not excluding any other worlds. At first glance, this must be different from ‘this planet has life’. Take any possible world where some other planet has life, and this planet has no life. ‘There exists a planet with life’ doesn’t exclude that world, while ‘this planet has life’ does. Therefore they are different evidence.
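A small numerical check of this point. The model below assumes, purely for illustration, two planets, a 50/50 prior on Q, and a per-planet chance of life of 0.9 if Q is true and 0.1 if Q is false; it sets aside the observation-selection subtlety discussed above and simply compares the two conditionalizations:

    # Toy model: N = 2 planets, life arises independently on each.
    # All numbers are illustrative assumptions, not taken from the post.
    p_life = {True: 0.9, False: 0.1}   # P(life on a given planet | Q)
    prior_q = 0.5

    def posterior_q(likelihood):
        """Bayes' rule for the binary hypothesis Q, given a likelihood function of Q."""
        joint_true = prior_q * likelihood(True)
        joint_false = (1 - prior_q) * likelihood(False)
        return joint_true / (joint_true + joint_false)

    # Evidence A: 'this (specific) planet has life' -- likelihood is just p_life.
    post_this = posterior_q(lambda q: p_life[q])

    # Evidence B: 'there exists at least one planet with life' among the two.
    post_exists = posterior_q(lambda q: 1 - (1 - p_life[q]) ** 2)

    print(post_this)     # 0.9
    print(post_exists)   # ~0.84 -- a weaker update, so the two pieces of evidence differ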

At this point however, note that the planets in the model have no distinguishing characteristics. How do we even decide which planet is ‘this planet’ in another possible world? There needs to be some kind of mapping between planets in each world, saying which planet in world A corresponds to which planet in world B, etc. As far as I can tell, any mapping will do, as long as a given planet in one possible world maps to at most one planet in another possible world. This mapping is basically a definition choice.

So suppose we use a mapping where in every possible world where at least one planet has life, ‘this planet’ corresponds to one of the planets that has life. See the below image.

Which planet is which?

Squares are possible worlds, each with two planets. Pink planets have life, blue do not. Define ‘this planet’ as the circled one in each case. Learning that there is life on this planet is equal to learning that there is life on some planet.

Now learning that there exists a planet with life is the same as learning that this planet has life. Both exclude the far right-hand possible world, and none of the other possible worlds. What’s more, since we can change the probability distribution we end up with just by redefining which planets are ‘the same planet’ across worlds, indexical evidence such as ‘this planet has life’ must be horseshit.

Actually, the last paragraph was false. If in every possible world which contains life you pick one of the planets with life to be ‘this planet’, you can no longer know whether you are on ‘this planet’. From your observations alone, you could be on the other planet – the one that is not circled in each of the above worlds – which only has life when both planets do. Whichever planet you are on, you know that there exists a planet with life. But because there’s some probability of your being on the planet which only rarely has life, you have more information than that. Redefining which planet was which didn’t change that.

Perhaps a different definition of ‘this planet’ would get what my associate wants? The problem with the last one was that it no longer necessarily included the planet we are on. So what if we define ‘this planet’ to be the one you are on, plus a life-containing planet in each of the other possible worlds that contain at least one life-containing planet? A strange, half-indexical definition, but why not? One thing remains to be specified – which is ‘this’ planet when you don’t exist? Let’s say it is chosen randomly.

Now is learning that ‘this planet’ has life any different from learning that some planet has life? Yes. Now again there are cases where some planet has life, but it’s not the one you are on. This is because the definition only picks out planets with life across other possible worlds, not this one. In this one, ‘this planet’ refers to the one you are on. If you don’t exist, this planet may not have life. Even if there are other planets that do. So again, ‘this planet has life’ gives more information than ‘there exists a planet with life’.

You either have to accept that someone else might exist when you do not, or you have to define ‘yourself’ as something that always exists, in which case you no longer know whether you are ‘yourself’. Either way, changing definitions doesn’t change the evidence. Observing that you are alive tells you more than learning that ‘someone is alive’.


Resolving Paradoxes of Intuition

Shelly Kagan gave a nice summary of some problems involved in working out whether death is bad for one. I agree with Robin’s response, and have posted before about some of the particular issues. Now I’d like to make a more general observation.

First I’ll summarize Kagan’s story. The problems are something like this. It seems like death is pretty bad. Thought experiments suggest that it is bad for the person who dies, not just their friends, and that it is bad even if it is painless. Yet if a person doesn’t exist, how can things be bad for them? Seemingly because they are missing out on good things, rather than because they are suffering anything. But it is hard to say when they bear the cost of missing out, and it seems like things that happen, happen at certain times. Or maybe they don’t. But then we’d have to say all the people who don’t exist are missing out, and that would mean a huge tragedy is happening as long as those people go unconceived. We don’t think a huge tragedy is happening, so let’s say it isn’t. Also, we don’t feel too bad about people not being born earlier, like we do about them dying sooner. How can we distinguish these cases of deprivation from non-existence from the deprivation that happens after death? Not in any satisfactorily non-arbitrary way. So ‘puzzles still remain’.

This follows a pattern common to other philosophical puzzles. Intuitions say X sometimes, and not X other times. But they also claim that one should not care about any of the distinctions that can reasonably be made between the times when they say X is true and the times when they say X is false.

Intuitions say you should save a child dying in front of you. Intuitions say you aren’t obliged to go out of your way to protect a dying child in Africa. Intuitions also say physical proximity, likelihood of being blamed, etc shouldn’t be morally relevant.

Intuitions say you are the same person today as tomorrow. Intuitions say you are not the same person as Napoleon. Intuitions also say that whether you are the same person or not shouldn’t depend on any particular bit of wiring in your head, and that changing a bit of wiring doesn’t make you slightly less you.

Of course not everyone shares all of these intuitions (I don’t). But for those who do, there are problems. These problems can be responded to by trying to think of other distinctions between contexts that do seem intuitively legitimate, reframing an unintuitive conclusion to make it intuitive, or just accepting at least one of the unintuitive conclusions.

The first two solutions – finding more appealing distinctions and framings – seem a lot more popular than the third – biting a bullet. Kagan concludes that ‘puzzles remain’, as if this inconsistency were an apparent mathematical conflict that one can fully expect to eventually see through if one thinks about it right. And many other people have been working on finding a way to make these intuitions consistent for a while. Yet why expect to find a resolution?

Why not expect this contradiction to be like the one that arises if you claim that you like apples more than pears and also pears more than apples? There is no nuanced way to resolve the issue, except to give up at least one of the claims. You can make up values, but sometimes they are just inconsistent. The same goes for evolved values.

From Kagan’s account of death, it seems likely that our intuitions are just inconsistent. Given natural selection, this is not particularly surprising. It’s no mystery how people could evolve to care about the survival of themselves and their associates, yet not to care about people who don’t exist – even if people who don’t exist suffer the same costs from not existing. It’s also not surprising that people would come to believe their care for others is largely about the others’ wellbeing, not their own interests, and so believe that if they don’t care about a tragedy, there isn’t one. There might be some other resolution in the death case, but until we see one, it seems odd to expect one. Especially when we have already looked so hard.

Most likely, if you want a consistent position you will have to bite a bullet. If you are interested in reality, biting a bullet here shouldn’t be a last resort after searching every nook and cranny for a consistent and intuitive position. It is much more likely that humans have inconsistent intuitions about the value of life than that we have so far failed to notice some incredibly important and intuitive distinction in circumstances that drives our different intuitions. Why do people continue to search for intuitive resolutions to such problems? It could be that accepting an unintuitive position is easy, unsophisticated, unappealing to funders and friends, and seems like giving up. Is there something else I’m missing?
