Tag Archives: Morality

Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow, and that doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually realists. If you really believed that there were objective rules that we should follow, that would make it crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chances of you stumbling on the right ones by chance are very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that any of the rules we ought to follow will be consistent with one another, this is a disaster and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility that you are mistaken and in fact a different set of rules is correct. Unless you are extremely confident in the rules you consider most likely, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who aren’t alive now did in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
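The expected-value arithmetic here can be sketched in a few lines, using only the illustrative figures above (the 10% credence, 1 trillion future people, and 7 billion current people are the post’s own numbers, not data):

```python
# Hedging over the 'person-affecting view': even a modest credence that
# future people count morally leaves a huge expected stake in preventing
# collapse. All figures are the post's own illustrative numbers.
future_people = 1_000_000_000_000   # a 'modest' 1 trillion expected future lives
current_people = 7_000_000_000      # people alive today
credence_pct = 10                   # 10% chance future people morally count

# Expected number of future lives lost to collapse (integer arithmetic)
expected_future_loss = future_people * credence_pct // 100

assert expected_future_loss == 100_000_000_000     # 100 billion in expectation
assert expected_future_loss > 10 * current_people  # dwarfs the present population
```

Even at a 10% credence, the expected stake in future people is more than ten times the entire present population.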

This demonstrates that a moral realist with some doubt that they have picked the right rules will want to a) hedge their bets, and b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is scarcely a fringe concern, because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options:

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 for anyone who can change my mind on a) whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!


Ethics For A Broken World

In The Philosophical Quarterly, ethicist Peter Singer reviews Ethics for a Broken World: Imagining Philosophy After Catastrophe:

Tim Mulgan’s first clever idea was to ask how Western moral and political philosophy might look to people living fifty or a hundred years from now if, during the interim, the basic necessities for supporting life become much more difficult to obtain than they are now. Climate change is the obvious way in which this might happen. … Mulgan’s second clever idea was to present his answer to the question he had posed in the form of a series of transcripts of a class held in the broken world on the history of philosophy. …

The affluent world was, by the standards of the broken world, astonishingly wasteful. A favourite leisure activity, for instance, was ‘to drive extremely inefficient carbon-fuelled vehicles around in circles’. In those days, philosophers just ‘took it for granted that everyone can survive.’ … The lectures begin with Nozick, who is taken to represent, ‘in an exaggerated form, the preoccupations and presuppositions of his age.’ … How could an initial acquirer in a pre-affluent world ever know whether the institution of private property will affect future people for the better or for the worse? To a philosopher of the affluent age this might seem obvious, but to the class in the broken world, it does not. …

The idea that utilitarianism leads to extremely demanding obligations to help those in great need was counter-intuitive in the affluent world, but is not in the broken world. So too was the view that it would be wrong for a sheriff to hang one innocent person if that is the only way to save several innocent people from being killed by rioters. … Those same utilitarians who said that we have extremely demanding obligations to the poor could also have pointed out that we have extremely demanding obligations to those who will exist in future. … In the broken world, liberty is not as highly valued as it was in the affluent world. Broken world people regret that affluent people were free to join ‘cults’ that denied climate change. …

The final lecture poses a challenge to affluent democracy on the grounds that, since governments make decisions that affect future generations, no democracy really has the consent of the governed, or of a majority. (more)

Since I also forecast a non-affluent future, I am also interested in how the morals and politics of non-affluent descendants will differ from ours. But I find the above pretty laughable as futurism. As described in this review, this book presents the morality and politics of future folk as overwhelmingly focused on what their ancestors (us) should have been doing for them, namely lots more.

But we have known lots of poor cultures around the world and through history, and their morality and politics have almost never focused on complaining that their ancestors did too little to help them. Most politics and morality has instead been focused on how people alive who interact often should treat each other. Which makes a lot of functional sense.

Wars have consistently caused vast destruction of resources that could have gone to building roads, cities, canals, irrigation, etc. And most ancestors severely neglected innovation. Almost everywhere on the globe, had ancestors prevented more wars and encouraged more innovation, their descendants would be richer. But almost no one complains about that today. Most discussion today of ancestors celebrates relative wins that suggest some of us are better than others of us, and laments our ancestors’ backwardness, so we can feel superior by comparison.

The morality of our non-affluent descendants will likely also focus mostly on how they should treat each other, not on how we treated them. To the extent that they talk about us at all, they’ll mostly mention wins that suggest that some of them are better than others of them, and ways in which we seem backward, making them seem forward by comparison. And morality will probably return to being more like that of traditional farmers, relative to that of us rich forager-feeling industrialists of today.

It is a standard truism that discussion of the future is mostly a veiled discussion of today, especially on who today should be criticized or celebrated. The book Ethics for a Broken World seems an especially transparent example of this trend. It is almost all about which of us to blame, and almost none about actual future folk.

Added 11a: Here and here are similar but ungated reviews.

Added 1:30p: Interestingly, in Christianity the main bad guy is Satan, who supposedly obeys God, but not Adam and Eve, who disobeyed. If there were ever ancestors who should be blamed it would be Adam and Eve, but oddly Christians almost never complain about them, preferring to save their harsh words for Satan.


We Add Near, Average Far

Quick, what is the best gift you ever got from a woman? From your parents? From a left-handed person? From a teacher? These aren’t easy questions to answer. But they seem easier than these questions: What is the total value of all the gifts you ever got from women? From your parents? From left-handed folks? From teachers?

For the first set of questions you can try to think of examples of particular people in those categories, and then think of particular gifts you got from those particular people. That can help you guess at the best gift from those categories. But to estimate the total value of gifts from people in categories, you’ll have to also estimate how many gifts you ever got from folks in each category.

Note that it also seems easy to estimate the average value of gifts from each category. To do this, you need only remember a few gifts that fit each category, and then average their values.

As another example, imagine you are looking at a building entrance laid out in multi-colored tiles. Some tiles are blue, some red, some green, etc. You are looking at it from a distance, at an angle, in variable lighting. In this situation it will be much easier to estimate whether there is more blue than red area in the tiles than to estimate how many square inches of blue tile area are in that entrance. This latter estimate requires you to additionally estimate distances to reference points, in order to estimate the total surface area.

These examples suggest that when we think in far mode, without a structured systematic representation of our topic, it is usually easier to average than to add values. So averaging is what we’ll tend to do. All of which I mention to introduce a fascinating paper that I just noticed, even though it got a lot of publicity last December:

This analysis introduces the Presenter’s Paradox. Robust findings in impression formation demonstrate that perceivers’ judgments show a weighted averaging pattern, which results in less favorable evaluations when mildly favorable information is added to highly favorable information. Across seven studies, we show that presenters do not anticipate this averaging pattern on the part of evaluators and instead design presentations that include all of the favorable information available. This additive strategy (“more is better”) hurts presenters in their perceivers’ eyes because mildly favorable information dilutes the impact of highly favorable information. For example, presenters choose to spend more money to make a product bundle look more costly, even though doing so actually cheapened its value from the evaluators’ perspective. (more)

The authors attribute this to a near-far effect:

Presenters face many pieces of potentially relevant information and need to determine, in a bottom-up fashion, which ones to include in a presentation. This presumably draws attention to each individual piece of information as a discrete entity and a focus on piecemeal processing. If a given piece of information exceeds a neutrality threshold, the presenter will conclude that it is compatible with the message he or she seeks to convey and will include it. This results in presentations that would fare better under an adding rather than averaging rule. In contrast, evaluators’ primary task is to make a summary judgment of the overall presentation, which fosters a focus on holistic processing and the big picture and results in an averaging pattern as observed in many impression formation studies.

Additional experiments confirm this near-far interpretation. Those who prepare presentations and proposals tend to focus on them in detail, and so add part values in near mode style, while those who consume such presentations or proposals tend to pay much less attention, and so average their values in far mode style.
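The adding-versus-averaging gap the paper describes can be seen in a minimal sketch; the item values below are hypothetical scores, not data from the studies:

```python
# Presenters add item values; distracted evaluators average them.
# Scores are hypothetical values for pieces of favorable information.
strong_only = [9, 9]        # two highly favorable items
padded = [9, 9, 6]          # the same two, plus one mildly favorable item

def average(items):
    return sum(items) / len(items)

# The presenter's additive view: the padded bundle carries more total value.
assert sum(padded) > sum(strong_only)          # 24 > 18

# The evaluator's averaging view: the mild item dilutes the impression.
assert average(padded) < average(strong_only)  # 8.0 < 9.0
```

The same bundle is better under one rule and worse under the other, which is the whole paradox.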

This result seems to me quite pregnant with interesting implications, none of which were mentioned in the dozen blog posts on the subject that have appeared since last December. So I guess it’s up to me.

First, this result predicts the usual academic advice to delete publications from low ranked journals from your vita. Yes those extra publications took extra work, and show more total intellectual contribution, but distracted readers evaluate you by averaging your publications, not adding them.

Second, this also predicts that academia will tend in general to neglect conclusions suggested by lots of weak clues, relative to conclusions based on a single strong theory or empirical comparison. People with a practical understanding of particular areas will correctly complain that academics tend too much to latch on to a few easy to explain and justify arguments, at the cost of lots of detail that practitioners appreciate.

Third, this predicts that in morality and politics, which are especially far sorts of topics, arguments tend to be won by those who push simple strong principles, even though people privately tend to choose actions that deviate from such principles. For example, laws say no one can get medical advice from non-doctors, on the grounds that docs know best, but given a private choice most of us would often let other considerations convince us to listen to non-docs. While actions tend to be chosen in a near mode where lots of other weaker considerations get added, people know their best chance of winning an argument with a distracted audience is to focus on their one strongest point.

Fourth, this predicts Tetlock’s hedgehog vs. foxes result. Foreign policy is an especially far view sort of subject, and experts who focus on one strongest consideration get the most respect and attention, but experts who rely on many considerations, which are on average weaker, are more accurate.

Futurism is probably the most far view sort of topic, so I’d guess that all this holds there the most strongly. That is, while the futurists who get the most attention from distracted audiences are those who harp endlessly on one clear plausible idea, the most accurate futurists are probably those who know and use hundreds of clues, many of them weak. Alas, this is a problem for those of us who want to consider some aspect of the future in detail, since we quickly run out of strong principles, and then have to rely more on many weak clues.

Added Nov 25, 2012: This post gives data showing people donate money based more on the average than the total sympathy of the recipients. So you are better off asking for donations to help a particular especially sympathetic recipient, than to help many such folks.


More 2D Values

Back in ’09 I posted on the 2D map of values from the World Values Survey, and how nations are distributed in that 2D space. A related 2D space of values is detailed in this new JPSP paper. Apparently 19 different values fall naturally on a circle:

Here are more detailed descriptions of these values:

  • Self-direction–thought: Freedom to cultivate one’s own ideas and abilities
  • Self-direction–action: Freedom to determine one’s own actions
  • Stimulation: Excitement, novelty, and change
  • Hedonism: Pleasure and sensuous gratification
  • Achievement: Success according to social standards
  • Power–dominance: Power through exercising control over people
  • Power–resources: Power through control of material and social resources
  • Face: Security and power through maintaining one’s public image and avoiding humiliation
  • Security–personal: Safety in one’s immediate environment
  • Security–societal: Safety and stability in the wider society
  • Tradition: Maintaining and preserving cultural, family, or religious traditions
  • Conformity–rules: Compliance with rules, laws, and formal obligations
  • Conformity–interpersonal: Avoidance of upsetting or harming other people
  • Humility: Recognizing one’s insignificance in the larger scheme of things
  • Benevolence–dependability: Being a reliable and trustworthy member of the ingroup
  • Benevolence–caring: Devotion to the welfare of ingroup members
  • Universalism–concern: Commitment to equality, justice, and protection for all people
  • Universalism–nature: Preservation of the natural environment
  • Universalism–tolerance: Acceptance and understanding of those who are different from oneself

Of course since they are based on surveys, these are probably mostly about values as seen in a far-view.

Added 21Aug: The upper values on the circle are those celebrated more by richer societies like ours, relative to poorer societies like our farmer ancestors. (Foragers were more in the middle.) In older societies, the upper values are also more celebrated by the rich. The left-side, more-community-oriented values are also more common in the “East,” which I’ve suggested were centrally located places more often conquered by invaders. The more peripheral “West” tended more to emphasize the right-side family and individual values.

Added 24 Aug: Far mode emphasizes the positive over the negative, and the social over the personal. So the upper left area of the circle are the most far values, and the lower right the most near values. This also seems to map onto the (near) things that we actually want, and the (far) things we want others to think that we want.


No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians would have to accept the ‘repugnant conclusion’: that a very large number of individuals experiencing lives barely worth living could be much better than a small number of people experiencing joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.
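Both extremes can be made concrete with a toy welfare calculation; the population sizes and welfare levels below are purely illustrative:

```python
# Toy populations: A is small and joyous; Z is enormous, with lives
# barely worth living. All numbers are illustrative.
A = [100] * 10          # 10 people, each at welfare 100
Z = [1] * 100_000       # 100,000 people, each at welfare 1

def average(pop):
    return sum(pop) / len(pop)

# Total utilitarianism endorses the 'repugnant conclusion':
assert sum(Z) > sum(A)                 # 100,000 > 1,000

# Average utilitarianism hits the 'mere addition paradox': adding one
# joyous life slightly below the average makes the world 'worse'.
A_plus = A + [99]
assert average(A_plus) < average(A)    # ~99.9 < 100
```

Summing favours the vast drab world Z; averaging condemns even one happy addition to A.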

Derek Parfit, pioneer of these ethical dilemmas and author of the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that What should we do about future generations? Impossibility of Parfit’s Theory X by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist. Continue reading "No theory X in shining armour" »


Ethical heuristics

I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin

I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.

People here and now are no different in these regards, as far as I can tell. We may think we have better social norms, but the average person has little more reason to believe this than the average person five hundred years ago did. People are perhaps freer here and now to follow their own hearts on many moral issues, but that can’t make much difference to issues where the problem is that people’s hearts don’t automatically register a problem. So even if you aren’t a slave-owner, I claim you are probably using a similar decision procedure to that which would lead you to be one in different circumstances.

Are these really bad ways for most people to behave? Or are they pretty good heuristics for non-ethicists? It would be a huge amount of work for everyone to independently figure out for themselves the answer to every ethical question. What heuristics should people use?


Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easy to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
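One way to see why any doubt breaks the tie: under a lexical rule that ranks options first by their probability of an infinite payoff, finite considerations matter only when those probabilities are exactly equal. The rule and the numbers below are a hypothetical formalisation for illustration, not anything from Amanda’s work:

```python
# A lexical ('infinity-first') decision rule: options are
# (p_infinite, finite_expected_utility) pairs, compared lexicographically.
# Both the rule and the numbers are hypothetical illustrations.
def pascalian_key(option):
    p_infinite, finite_eu = option
    return (p_infinite, finite_eu)

believer = (0.02, -5.0)   # slightly likelier infinite reward, some finite cost
atheist = (0.01, +5.0)    # less likely infinite reward, pleasanter finite life

best = max([believer, atheist], key=pascalian_key)
assert best == believer   # any edge in p_infinite swamps all finite stakes
```

On such a rule, historical evidence or internal consistency that nudges one God-hypothesis above another settles the wager, however pleasant the atheist’s finite life.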

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would then be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!


Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory. It says thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low level constraints. Ethics is all this.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel.
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence, for instance people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous for instance. The ethical dimension to killing everyone is naturally prominent. Overall construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.


Does life flow towards flow?

Robin recently described how human brain ‘uploads’, even if forced to work hard to make ends meet, might nonetheless be happy and satisfied with their lives. Some humans naturally love their work, and if they are the ones who get copied, the happiness of emulations could be very high. Of course in Robin’s Malthusian upload scenario, evolutionary pressures towards high productivity are very strong, and so the mere fact that some people really enjoy work doesn’t mean that they will be the ones who get copied billions of times. The workaholics will only inherit the Earth if they are the best employees money can buy.

The broader question of whether creatures that are good at surviving, producing and then reproducing tend towards joy or misery is a crucial one. It helps answer whether it is altruistic to maintain populations of wild animals into the future, or an act of mercy to shrink their habitats. Even more importantly, it is the key to whether it is extremely kind or extremely cruel for humans to engage in panspermia and spread Malthusian life across the universe as soon as possible.

There is an abundance of evidence all around us in the welfare of humans and other animals that have to strive to survive in the environments they are adapted to, but no consensus on what that evidence shows. It is hard enough to tell whether another human has a quality of life better than no life at all, let alone determine the same for say, an octopus.

One of the few pieces of evidence I find compelling comes from Mihály Csíkszentmihályi’s research into the experience he calls ‘flow‘. His work suggests that humans are most productive, and also most satisfied, when they are totally absorbed in a clear but challenging task which they are capable of completing. The conditions suggested as being necessary to achieve ‘flow’ are:

  1. “One must be involved in an activity with a clear set of goals. This adds direction and structure to the task.
  2. One must have a good balance between the perceived challenges of the task at hand and his or her own perceived skills. One must have confidence that he or she is capable to do the task at hand.
  3. The task at hand must have clear and immediate feedback. This helps the person negotiate any changing demands and allows him or her to adjust his or her performance to maintain the flow state.”

Most work doesn’t meet these criteria and so ‘flow’ is not all that common, but it is amongst the best states of mind a human can hope for.

Some people are much more inclined to enter flow than others and if Csíkszentmihályi’s book is to be believed, they are ideal employees – highly talented, motivated and suited to their tasks. If this is the case, people predisposed to experience flow would be the most popular minds to copy as emulations and in the immediate term the flow-inspired workaholics would indeed come to dominate the Earth.

Of course, it could turn out that in the long run, once enough time has passed for evolution to shed humanity’s baggage, the creatures that most effectively do the forms of work that exist in the future will find life unpleasant. But our evolved capacity for flow in tasks that we are well suited for gives us a reason to hope that will not be the case. If it turns out that flow is a common experience for traditional hunter-gatherers then that would make me even more optimistic. And more optimistic again if we can find evidence for a similar experience in other species.


Resolving Paradoxes of Intuition

Shelly Kagan gave a nice summary of some problems involved in working out whether death is bad for one. I agree with Robin’s response, and have posted before about some of the particular issues. Now I’d like to make a more general observation.

First I’ll summarize Kagan’s story. The problems are something like this. It seems like death is pretty bad. Thought experiments suggest that it is bad for the person who dies, not just their friends, and that it is bad even if it is painless. Yet if a person doesn’t exist, how can things be bad for them? Seemingly because they are missing out on good things, rather than because they are suffering anything. But it is hard to say when they bear the cost of missing out, and it seems like things that happen, happen at certain times. Or maybe they don’t. But then we’d have to say all the people who don’t exist are missing out, and that would mean a huge tragedy is happening as long as those people go unconceived. We don’t think a huge tragedy is happening, so let’s say it isn’t. Also we don’t feel too bad about people not being born earlier, like we do about them dying sooner. How can we distinguish these cases of deprivation through non-existence from the deprivation that happens after death? Not in any satisfactorily non-arbitrary way. So ‘puzzles still remain’.

This follows a pattern common to other philosophical puzzles. Intuitions say X sometimes, and not X other times. But they also claim that one should not care about any of the distinctions that can reasonably be made between the times when they say X is true and the times when they say X is false.

Intuitions say you should save a child dying in front of you. Intuitions say you aren’t obliged to go out of your way to protect a dying child in Africa. Intuitions also say physical proximity, likelihood of being blamed, etc. shouldn’t be morally relevant.

Intuitions say you are the same person today as tomorrow. Intuitions say you are not the same person as Napoleon. Intuitions also say that whether you are the same person or not shouldn’t depend on any particular bit of wiring in your head, and that changing a bit of wiring doesn’t make you slightly less you.

Of course not everyone shares all of these intuitions (I don’t). But for those who do, there are problems. These problems can be responded to by trying to think of other distinctions between contexts that do seem intuitively legitimate, reframing an unintuitive conclusion to make it intuitive, or just accepting at least one of the unintuitive conclusions.

The first two solutions – finding more appealing distinctions and framings – seem a lot more popular than the third – biting a bullet. Kagan concludes that ‘puzzles remain’, as if this inconsistency is an apparent mathematical conflict that one can fully expect to eventually see through if we think about it right. And many other people have been working on finding a way to make these intuitions consistent for a while. Yet why expect to find a resolution?

Why not expect this contradiction to be like the one that arises if you claim that you like apples more than pears and also pears more than apples? There is no nuanced way to resolve the issue, except to give up at least one of the claims. You can make up values, but sometimes they are just inconsistent. The same goes for evolved values.

From Kagan’s account of death, it seems likely that our intuitions are just inconsistent. Given natural selection, this is not particularly surprising. It’s no mystery how people could evolve to care about their own survival and that of their associates, yet not to care about people who don’t exist. Even if people who don’t exist suffer the same costs from not existing. It’s also not surprising that people would come to believe their care for others is largely about the others’ wellbeing, not their own interests, and so believe that if they don’t care about a tragedy, there isn’t one. There might be some other resolution in the death case, but until we see one, it seems odd to expect one. Especially when we have already looked so hard.

Most likely, if you want a consistent position you will have to bite a bullet. If you are interested in reality, biting a bullet here shouldn’t be a last resort after searching every nook and cranny for a consistent and intuitive position. It is much more likely that humans have inconsistent intuitions about the value of life than that we have so far failed to notice some incredibly important and intuitive distinction in circumstances that drives our different intuitions. Why do people continue to search for intuitive resolutions to such problems? It could be that accepting an unintuitive position is easy, unsophisticated, unappealing to funders and friends, and seems like giving up. Is there something else I’m missing?
