Tag Archives: Morality

Why Good Is Crazy

My last post reminded me that the craziest beliefs ordinary folks endorse with a straight face are religious dogmas. And that seems an important clue to what situations break our minds. But to interpret this clue well, we need a sense for the key thing that “religions” have in common. My last post suggested a hypothesis to me: compared to beliefs on who is dominant, impressive, or conformist, beliefs on who is “good” are the least connected to a constant reality. They and associated beliefs can thus be the most crazy.

Dominance is mostly about power via raw physical force and physical or legal resources. So it is relatively easy to discern, and we have strong incentives to avoid mistakes about it. And while prestige varies greatly by culture, the elements of prestige tend to be commonly impressive features. For example, the most popular sports vary by culture, but most sports show off a similar set of physical abilities. The most popular music genre varies by culture, but most music draws on a common set of musical abilities.

So while beliefs about the best sport or music may vary by culture, for the purpose of picking good mates or allies you can’t go too wrong by being impressed by whomever impresses folks from other cultures, and you have incentives not to make mistakes. For example, if you are mistakenly impressed by and mate with someone without real sport or music abilities, you may end up with kids who lack those abilities, and fail to impress the next generation.

To discern who is a good conformist you do have to know something about the standards to which they conform. But if you want to associate with a conformist person, you can’t go too wrong by selecting people who are seen as conformist by their local culture. And if you mistakenly associate with someone who is less conformist than you thought, you may well suffer by being seen as non-conformist via your association with them.

Thus cultural variations in beliefs on dominance, prestige, or conformity are not huge obstacles to selecting and associating with people with desirable characteristics. That is to say, beliefs on such things tend to remain tied, via strong personal incentives, to important objective functional features of the world, ensuring they do not usually get very crazy.

Beliefs on goodness, however, are less tied to objective reality. Yes, beliefs on goodness can serve important functions for societies, encouraging people to do what benefits the society overall. The problem is that this isn’t functional in the same way for individuals. Each individual wants to seem to be good to others, to seem to praise others for being what is seen to be good, and to seem to approve when others praise others who seem to be good. But these are mostly pressures to go along with whatever the local culture says is good, not to push for a concept of good that will in fact benefit society.

Thus concepts of what makes someone good are less tied to a constant reality than are concepts of what makes someone dominant, conformist, or prestigious. There may be weak slow group selection pressures that encourage cultures to see people as good who help that culture overall, but those pressures are much weaker than the pressures that encourage accurate assessment of who is dominant, conformist, or prestigious.

I suspect that our minds are built to notice that our concepts of goodness are less tied to reality, and so give such concepts more slack on that account. I also suspect that our minds notice when other concepts are mainly tied to our concepts of goodness, and similarly give them more slack.

For example, if you notice that your culture thinks people who act like Jesus are good, you will pay close attention to how Jesus was said to act, so you can act like that. But once you notice that the concept of Jesus mainly shows up connected to concepts of goodness, and is not much connected to more practical concepts like how to not crash your car, you will not think as critically about claims on the life or times of Jesus. After all, it doesn’t really matter to you if those are or could be true; what matters are the “morals” of the story of Jesus.

Today, a similar lack of attention to consistency or detail is probably associated with many aspects of things that are seen as good somewhat separately from whether they are impressive or powerful. These may include what sort of recycling or energy use is good for the planet, what sort of policies are good for the nation, what sort of music or art is good for your soul, and so on.

Since this analysis justifies a lot of skepticism on concepts of and related to goodness, I am drawn toward a very cautious skeptical attitude in constructing and using such concepts. I want to start with the concepts where there is the least reason to doubt that they are good and well connected to reality, and want to try to go as far as I can with such concepts before adding in other less reliable concepts of good. It seems to me that giving people what they want is just about the least controversial element of good I can find, and thankfully economic analysis goes a remarkably long way with just that concept.

This analysis also suggests that, when doing policy analysis, one should spend as much time as possible doing neutral positive analysis of what is likely to happen if one does nothing, before proceeding to normative analysis of what actions would be best. This should help minimize the biases from our tendency toward wishful and good-based crazy thinking.


Is Social Science Extremist?

I recently did two interviews with Nikola Danaylov, aka “Socrates”, who has so far done ~90 Singularity 1 on 1 video podcast interviews. Danaylov says he disagreed with me the most:

My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests before. … I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future. … On the one hand, I really like Robin a lot: He is that most likeable fellow … who like me, would like to live forever and is in support of cryonics. In addition, Hanson is also clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is apparently very gracious to my verbal attacks on his ideas.

On the other hand, after reading his book draft on the [future] Em Economy I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”

So, here is the gist of our disagreement:

I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…

Suggestions like the above are no mere details: they are extremist bias for Laissez-faire ideology while dangerously masquerading as (impartial) social science. … Because not only that he doesn’t give any justification for the above suggestions of his, but also because, in principle, no social science could ever give justification for issues which are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, where using its tools can give us any worthy and useful insights we can use for the benefit of our whole society.) (more)

You might think that Danaylov’s complaint is that I use the wrong social science, one biased too far toward libertarian conclusions. But in fact his complaint seems to be mainly against the very idea of social science: an ability to predict social outcomes. He apparently argues that since 1) future social outcomes depend on many billions of individual choices, 2) ethical and political considerations are relevant to such choices, and 3) humans have free will to be influenced by such considerations in making their choices, that therefore 4) it should be impossible to predict future social outcomes at a rate better than random chance.

For example, if allowing some ems to run faster than others might offend common ethical ideals of equality, it must be impossible to predict that this will actually happen. While one might be able to use physics to predict the future paths of bouncing billiard balls, as soon as a human with free will enters the picture, making a choice where ethics is relevant, all must fade into an opaque cloud of possibilities; no predictions are possible.

Now I haven’t viewed them, but I find it extremely hard to believe that out of 90 interviews on the future, Danaylov has always vigorously complained whenever anyone even implicitly suggested that they could do any better than random chance in guessing future outcomes in any context influenced by a human choice where ethics or politics might have been relevant. I’m in fact pretty sure he must have nodded in agreement with many explicit forecasts. So why complain more about me then?

It seems to me that the real complaint here is that I forecast that human choices will in fact result in outcomes that violate the ethical principles Danaylov holds dear. He objects much more to my predicting a future of more inequality than if I had predicted a future of more equality. That is, I’m guessing he mostly approves of idealistic, and disapproves of cynical, predictions. Social science must be impossible if it would predict non-idealistic outcomes, because, well, just because.

FYI, I also did this BBC interview a few months back.


Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow and doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually realists. If you really believed that there were objective rules that we should follow, that would make it crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chances of you stumbling on the right ones by chance are very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that the rules we ought to follow will be consistent with one another, this is a disaster and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility that you are mistaken and in fact a different set of rules is correct. Unless you are extremely confident that the rules you consider most likely are correct, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who aren’t alive now do in fact deserve moral consideration, that would still mean a collapse would prevent the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
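The expected-value arithmetic here is simple enough to sketch; the numbers below are the hypothetical ones from the example, not estimates of my own:

```python
# Hypothetical figures from the example above; a sketch, not a real model.
p_future_count = 0.10                  # chance future people deserve moral consideration
future_people = 1_000_000_000_000      # modest estimate: 1 trillion future (post-)humans
alive_today = 7_000_000_000            # people alive now

expected_future_people = p_future_count * future_people
print(f"{expected_future_people:.0f}")        # 100 billion in expected-value terms
print(expected_future_people / alive_today)   # still many times today's population
```

Even heavily discounted by moral uncertainty, the future population dwarfs the present one, which is why the conclusion survives the discount.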

This demonstrates that a moral realist with some doubt that they have picked the right rules will want to a) hedge their bets, and b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is scarcely a fringe concern because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can easily be shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options:

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 for anyone who can change my mind on a) whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!


Ethics For A Broken World

In The Philosophical Quarterly, ethicist Peter Singer reviews Ethics for a Broken World: Imagining Philosophy After Catastrophe:

Tim Mulgan’s first clever idea was to ask how Western moral and political philosophy might look to people living fifty or a hundred years from now if, during the interim, the basic necessities for supporting life become much more difficult to obtain than they are now. Climate change is the obvious way in which this might happen. … Mulgan’s second clever idea was to present his answer to the question he had posed in the form of a series of transcripts of a class held in the broken world on the history of philosophy. …

The affluent world was, by the standards of the broken world, astonishingly wasteful. A favourite leisure activity, for instance, was ‘to drive extremely inefficient carbon-fuelled vehicles around in circles’. In those days, philosophers just ‘took it for granted that everyone can survive.’ … The lectures begin with Nozick, who is taken to represent, ‘in an exaggerated form, the preoccupations and presuppositions of his age.’ … How could an initial acquirer in a pre-affluent world ever know whether the institution of private property will affect future people for the better or for the worse? To a philosopher of the affluent age this might seem obvious, but to the class in the broken world, it does not. …

The idea that utilitarianism leads to extremely demanding obligations to help those in great need was counter-intuitive in the affluent world, but is not in the broken world. So too was the view that it would be wrong for a sheriff to hang one innocent person if that is the only way to save several innocent people from being killed by rioters. … Those same utilitarians who said that we have extremely demanding obligations to the poor could also have pointed out that we have extremely demanding obligations to those who will exist in future. … In the broken world, liberty is not as highly valued as it was in the affluent world. Broken world people regret that affluent people were free to join ‘cults’ that denied climate change. …

The final lecture poses a challenge to affluent democracy on the grounds that, since governments make decisions that affect future generations, no democracy really has the consent of the governed, or of a majority. (more)

Since I also forecast a non-affluent future, I am also interested in how the morals and politics of non-affluent descendants will differ from ours. But I find the above pretty laughable as futurism. As described in this review, this book presents the morality and politics of future folk as overwhelmingly focused on what their ancestors (us) should have been doing for them, namely lots more.

But we have known lots of poor cultures around the world and through history, and their morality and politics have almost never focused on complaining that their ancestors did too little to help them. Most politics and morality has instead been focused on how people alive who interact often should treat each other. Which makes a lot of functional sense.

Wars have consistently caused vast destruction of resources that could have gone to building roads, cities, canals, irrigation, etc. And most ancestors severely neglected innovation. Almost everywhere on the globe, had ancestors prevented more wars and encouraged more innovation, their descendants would be richer. But almost no one complains about that today. Most discussion today of ancestors celebrates relative wins that suggest some of us are better than others of us, and laments our ancestors’ backwardness, so we can feel superior by comparison.

The morality of our non-affluent descendants will likely also focus mostly on how they should treat each other, not on how we treated them. To the extent that they talk about us at all, they’ll mostly mention wins that suggest that some of them are better than others of them, and ways in which we seem backward, making them seem forward by comparison. And morality will probably return to being more like that of traditional farmers, relative to that of us rich forager-feeling industrialists of today.

It is a standard truism that discussion of the future is mostly a veiled discussion of today, especially on who today should be criticized or celebrated. The book Ethics for a Broken World seems an especially transparent example of this trend. It is almost all about which of us to blame, and almost none about actual future folk.

Added 11a: Here and here are similar but ungated reviews.

Added 1:30p: Interestingly, in Christianity the main bad guy is Satan, who supposedly obeys God, but not Adam and Eve, who disobeyed. If there were ever ancestors who should be blamed it would be Adam and Eve, but oddly Christians almost never complain about them, preferring to save their harsh words for Satan.


We Add Near, Average Far

Quick, what is the best gift you ever got from a woman? From your parents? From a left-handed person? From a teacher? These aren’t easy questions to answer. But they seem easier than these questions: What is the total value of all the gifts you ever got from women? From your parents? From left-handed folks? From teachers?

For the first set of questions you can try to think of examples of particular people in those categories, and then think of particular gifts you got from those particular people. That can help you guess at the best gift from those categories. But to estimate the total value of gifts from people in categories, you’ll have to also estimate how many gifts you ever got from folks in each category.

Note that it also seems easy to estimate the average value of gifts from each category. To do this, you need only remember a few gifts that fit each category, and then average their values.

As another example, imagine you are looking at a building entrance laid out in multi-colored tiles. Some tiles are blue, some red, some green, etc. You are looking at it from a distance, at an angle, in variable lighting. In this situation it will be much easier to estimate whether there is more blue than red area in the tiles, than to estimate how many square inches of blue tile area are in that entrance. This latter estimate requires you to additionally estimate distances to reference points, to estimate the total surface area.

These examples suggest that when we think in far mode, without a structured systematic representation of our topic, it is usually easier to average than to add values. So averaging is what we’ll tend to do. All of which I mention to introduce a fascinating paper that I just noticed, even though it got a lot of publicity last December:

This analysis introduces the Presenter’s Paradox. Robust findings in impression formation demonstrate that perceivers’ judgments show a weighted averaging pattern, which results in less favorable evaluations when mildly favorable information is added to highly favorable information. Across seven studies, we show that presenters do not anticipate this averaging pattern on the part of evaluators and instead design presentations that include all of the favorable information available. This additive strategy (“more is better”) hurts presenters in their perceivers’ eyes because mildly favorable information dilutes the impact of highly favorable information. For example, presenters choose to spend more money to make a product bundle look more costly, even though doing so actually cheapened its value from the evaluators’ perspective. (more)

The authors attribute this to a near-far effect:

Presenters face many pieces of potentially relevant information and need to determine, in a bottom-up fashion, which ones to include in a presentation. This presumably draws attention to each individual piece of information as a discrete entity and a focus on piecemeal processing. If a given piece of information exceeds a neutrality threshold, the presenter will conclude that it is compatible with the message he or she seeks to convey and will include it. This results in presentations that would fare better under an adding rather than averaging rule. In contrast, evaluators’ primary task is to make a summary judgment of the overall presentation, which fosters a focus on holistic processing and the big picture and results in an averaging pattern as observed in many impression formation studies.

Additional experiments confirm this near-far interpretation. Those who prepare presentations and proposals tend to focus on them in detail, and so add part values in near mode style, while those who consume such presentations or proposals tend to pay much less attention, and so average their values in far mode style.
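A toy numerical example (the favorability scores are mine, not the paper’s) shows how the same extra item helps under an adding rule but hurts under an averaging rule:

```python
# Hypothetical item "favorability" scores, chosen only for illustration.
strong_only = [9, 9]       # presentation with only highly favorable items
with_mild = [9, 9, 5]      # same items plus one mildly favorable extra

adding = sum(with_mild)                        # presenter's near-mode rule
averaging = sum(with_mild) / len(with_mild)    # evaluator's far-mode rule

print(adding, sum(strong_only))                   # total rises: 23 vs 18
print(round(averaging, 2), sum(strong_only) / 2)  # impression falls: 7.67 vs 9.0
```

The presenter, summing, sees the third item as pure gain; the evaluator, averaging, sees it as dilution.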

This result seems to me quite pregnant with interesting implications, none of which were mentioned in the dozen blog posts on the subject that have appeared since last December. So I guess it’s up to me.

First, this result predicts the usual academic advice to delete publications from low ranked journals from your vita. Yes those extra publications took extra work, and show more total intellectual contribution, but distracted readers evaluate you by averaging your publications, not adding them.

Second, this also predicts that academia will tend in general to neglect conclusions suggested by lots of weak clues, relative to conclusions based on a single strong theory or empirical comparison. People with a practical understanding of particular areas will correctly complain that academics tend too much to latch on to a few easy to explain and justify arguments, at the cost of lots of detail that practitioners appreciate.

Third, this predicts that in morality and politics, which are especially far sorts of topics, arguments tend to be won by those who push simple strong principles, even though people privately tend to choose actions that deviate from such principles. For example, laws say no one can get medical advice from non-doctors, on the grounds that docs know best, but given a private choice most of us would often let other considerations convince us to listen to non-docs. While actions tend to be chosen in a near mode where lots of other weaker considerations get added, people know their best chance for winning an argument with a distracted audience is to focus on their one strongest point.

Fourth, this predicts Tetlock’s hedgehog vs. foxes result. Foreign policy is an especially far view sort of subject, and experts who focus on one strongest consideration get the most respect and attention, but experts who rely on many considerations, which are on average weaker, are more accurate.

Futurism is probably the most far view sort of topic, so I’d guess that all this holds there the most strongly. That is, while the futurists who get the most attention from distracted audiences are those who harp endlessly on one clear plausible idea, the most accurate futurists are probably those who know and use hundreds of clues, many of them weak. Alas this is a problem for those of us who want to consider some aspect of the future in detail, since we quickly run out of strong principles, and then have to rely more on many weak clues.

Added Nov 25, 2012: This post gives data showing people donate money based more on the average than the total sympathy of the recipients. So you are better off asking for donations to help a particular especially sympathetic recipient, than to help many such folks.


More 2D Values

Back in ’09 I posted on the 2D map of values from the World Values Survey, and how nations are distributed in that 2D space. A related 2D space of values is detailed in this new JPSP paper. Apparently 19 different values fall naturally on a circle:

Here are more detailed descriptions of these values:

  • Self-direction–thought: Freedom to cultivate one’s own ideas and abilities
  • Self-direction–action: Freedom to determine one’s own actions
  • Stimulation: Excitement, novelty, and change
  • Hedonism: Pleasure and sensuous gratification
  • Achievement: Success according to social standards
  • Power–dominance: Power through exercising control over people
  • Power–resources: Power through control of material and social resources
  • Face: Security and power through maintaining one’s public image and avoiding humiliation
  • Security–personal: Safety in one’s immediate environment
  • Security–societal: Safety and stability in the wider society
  • Tradition: Maintaining and preserving cultural, family, or religious traditions
  • Conformity–rules: Compliance with rules, laws, and formal obligations
  • Conformity–interpersonal: Avoidance of upsetting or harming other people
  • Humility: Recognizing one’s insignificance in the larger scheme of things
  • Benevolence–dependability: Being a reliable and trustworthy member of the ingroup
  • Benevolence–caring: Devotion to the welfare of ingroup members
  • Universalism–concern: Commitment to equality, justice, and protection for all people
  • Universalism–nature: Preservation of the natural environment
  • Universalism–tolerance: Acceptance and understanding of those who are different from oneself

Of course since they are based on surveys, these are probably mostly about values as seen in a far-view.

Added 21Aug: The upper values on the circle are those celebrated more by richer societies like ours, relative to poorer societies like our farmer ancestors. (Foragers were more in the middle.) In older societies, the upper values are also more celebrated by the rich. The left-side, more-community-oriented values are also more common in the “East,” which I’ve suggested were centrally located places more often conquered by invaders. The more peripheral “West” tended more to emphasize the right-side family and individual values.

Added 24 Aug: Far mode emphasizes the positive over the negative, and the social over the personal. So the upper left area of the circle holds the most far values, and the lower right the most near values. This also seems to map onto the (near) things that we actually want, and the (far) things we want others to think that we want.



No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians have to accept the ‘repugnant conclusion’: that a very large number of individuals with lives barely worth living could be better than a small number of people with joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already exist.
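The trade-off can be made concrete with toy numbers (entirely hypothetical, chosen only for illustration): the two views can rank the very same pair of populations in opposite orders.

```python
# Toy illustration (hypothetical welfare scores): compare two populations
# under total and average utilitarianism.

def total_utility(population):
    # Total view: sum welfare across everyone.
    return sum(population)

def average_utility(population):
    # Average view: mean welfare per person.
    return sum(population) / len(population)

joyous = [100] * 10    # 10 people with joyous lives
barely = [1] * 2000    # 2000 people with lives barely worth living

# The total view prefers the huge, barely-happy population...
assert total_utility(barely) > total_utility(joyous)      # 2000 > 1000
# ...while the average view prefers the small, joyous one.
assert average_utility(joyous) > average_utility(barely)  # 100 > 1
```

Scaling up the `barely` population makes the total-view preference arbitrarily strong, which is exactly what makes the repugnant conclusion repugnant.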

Derek Parfit, pioneer of these ethical dilemmas and author of the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that What should we do about future generations? Impossibility of Parfit’s Theory X by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist.


Ethical heuristics

I would like to think I wouldn’t have been friends with slave owners, anti-semites or wife-beaters, but then again most of my friends couldn’t give a damn about the suffering of animals, so I guess I would have been. – Robert Wiblin

I expect the same friends would have been any of those things too, given the right place and period of history. The same ‘faults’ appear to be responsible for most old-fashioned or foreign moral failings: not believing that anything bad is happening if you don’t feel bad about it, and not feeling bad about anything unless there is a social norm of feeling bad about it.

People here and now are no different in these regards, as far as I can tell. We may think we have better social norms, but the average person has little more reason to believe this than the average person five hundred years ago did. People are perhaps freer here and now to follow their own hearts on many moral issues, but that can’t make much difference to issues where the problem is that people’s hearts don’t automatically register a problem. So even if you aren’t a slave-owner, I claim you are probably using a similar decision procedure to that which would lead you to be one in different circumstances.

Are these really bad ways for most people to behave? Or are they pretty good heuristics for non-ethicists? It would be a huge amount of work for everyone to independently figure out for themselves the answer to every ethical question. What heuristics should people use?


Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easy to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
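Amanda's tie-breaking point can be sketched as a lexicographic ordering: rank options first by their probability of an infinite payoff, and only then by finite expected utility. This is my own illustrative formalisation, not a standard or endorsed decision theory, and the numbers are invented.

```python
# Sketch (my own formalisation, hypothetical numbers): value a prospect
# lexicographically by (probability of infinite payoff, finite expected utility).
from functools import total_ordering

@total_ordering
class Prospect:
    """An option compared by probability of an infinite payoff first,
    with finite expected utility as a tie-breaker."""
    def __init__(self, p_infinite, finite_eu):
        self.p_infinite = p_infinite
        self.finite_eu = finite_eu

    def _key(self):
        return (self.p_infinite, self.finite_eu)

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

# If any evidence (consistency, history) makes one God slightly more likely
# to reward you, that option strictly dominates, whatever its finite costs.
believer = Prospect(p_infinite=0.02, finite_eu=-10)  # belief has finite costs
atheist  = Prospect(p_infinite=0.01, finite_eu=+10)
assert believer > atheist

# Only with exactly equal probabilities of infinity do finite payoffs decide.
assert Prospect(0.01, 5) > Prospect(0.01, 3)
```

The ordering captures why the ‘professor God’ reply needs exact indifference: the moment the two probabilities of an infinite payoff differ at all, the finite stakes become irrelevant.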

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would then be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an infinity-aleph-two utility would always trump a certain infinity-aleph-one. I am not sure what to do about that. The issue has hardly been researched by philosophers and seems like a promising area for high impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!


Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory, which says thinking about something far away makes you think more abstractly, in terms of goals and ideals rather than low-level constraints. Ethics is all of these.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence, for instance people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous for instance. The ethical dimension to killing everyone is naturally prominent. Overall construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.
