The Americanization of Emily (1964) starred James Garner (as Charlie) and Julie Andrews (as Emily), both of whom call it their favorite movie. Be warned: I give spoilers in this post.
Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.
One possible counter argument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come by limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?
To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:
Would our lives today be better or worse because of such rights?
Added: I expect to hear this response:
Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.
A few years ago I posted on Kevin Kelly on the Unabomber:
The Unabomber’s manifesto … succinctly states … the view … that the greatest problems in the world are due not to individual inventions but to the entire self-supporting system of technology itself. … The technium also contains power to harm itself; because it is no longer regulated by either nature or humans, it could accelerate so fast as to extinguish itself. …
But … the Unabomber is wrong to want to exterminate it … [because] the machine of civilization offers us more actual freedoms than the alternative. … We willingly choose technology with its great defects and obvious detriments, because we unconsciously calculate its virtues. … After we’ve weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. (more)
Lately I’ve been reading Against Civilization, on “the dehumanizing core of modern civilization,” and have been struck by the strength and universality of its passions; I agree with much of what they say. Yes, we humans pay huge costs because we were built for a different world than this one. Yes, we see gains, but mostly because we are culturally plastic – we let our culture tell us what we want and like, and thus what to do.
And yes, contrary to Kelly, we mostly do not choose how civilization changes, nor would we pick the changes that do happen if we could. As I reported a week ago, our usual main criterion in verbal evaluations of distant futures is whether future folks will be caring and moral, and since moral standards change, most people would usually rate future morals as low. Also, high interest rates show that we try hard to transfer resources from the future to ourselves. And if we could, we’d also probably make future folks remember and honor us more, and not forget our favorite art, music, stories, etc.
So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.
And yet, it is our unique willingness and ability to let our civilization change and be selected by forces out of our control, and then to tell us that we like it, that has let our species dominate the Earth, and gives us a good chance to dominate the galaxy and more. While our descendants may be somewhat less happy than us, or than our distant ancestors, there may be trillions of trillions or more of them. I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.
By my lights, things would probably have gone badly had our ancestors chosen their collective futures, and I doubt things have changed much lately. Yes, our descendants may not share today’s moral sense, or remember us and our art as much as most of us might like. But they will want something, often get it, and there may be so so many of them. And that could be so very good, by my lights.
So I say let us venture on, out of control, into the great and perhaps terrible civilization that we may become. Yes, it might be even better if a few forward looking elites could at least steer civilization modestly away from total destruction. But I fear that once substantial steering-abilities exist, they may not stay modest.
The future of 2050 might be different in many ways if, for example, climate change were mitigated, abortion laws relaxed, marijuana legalized, or the power of different religious groups changed. Which of the following types of differences matter most to you? To most people?
In fact, most people can hardly be bothered to care about the distant future world as a whole, and to the extent they do care, a recent study (details below) suggests that the main thing they care about from the above list is how warm and moral future folks will be. That is, people hardly care at all about future poverty, freedom, suicide, terrorism, crime, homelessness, disease, skills, laziness, or sci/tech progress. They care a bit more about self-enhancement (e.g., success, pleasure, wealth). But mostly they care about benevolence (warmth & morality, e.g., honesty, sincerity, caring, and friendliness).
Now this study only looked at eight future changes, half of them religious, and I’m not that happy with the way they did their statistics. So there’s a slim hope better studies will get different results. But overall this is pretty sad; like us, future folks will actually care about many more things than their benevolence, and so they may well lament our priorities in helping them.
This result is what one should expect if people think about the far future in a very far mode, and if the main distinct function of far views is to make good social impressions. To the extent they have any opinions about the distant future, people focus overwhelmingly on showing their support for standard social norms of good behavior. They reassure their associates of their support for good norms by showing them that making people nicer according to such norms is the main thing they care about regarding the distant future.
Those promised details:
My last post reminded me that the craziest beliefs ordinary folks endorse with a straight face are religious dogmas. And that seems an important clue to what situations break our minds. But to interpret this clue well, we need a sense for the key thing that “religions” have in common. My last post suggested a hypothesis to me: compared to beliefs on who is dominant, impressive, or conformist, beliefs on who is “good” are the least connected to a constant reality. They and associated beliefs can thus be the most crazy.
Dominance is mostly about power via raw physical force and physical or legal resources. So it is relatively easy to discern, and we have strong incentives to avoid mistakes about it. And while prestige varies greatly by culture, the elements of prestige tend to be commonly impressive features. For example, the most popular sports vary by culture, but most sports show off a similar set of physical abilities. The most popular music genre varies by culture, but most music draws on a common set of musical abilities.
So while beliefs about the best sport or music may vary by culture, for the purpose of picking good mates or allies you can’t go too wrong by being impressed by whomever impresses folks from other cultures, and you have incentives not to make mistakes. For example, if you are mistakenly impressed by and mate with someone without real sport or music abilities, you may end up with kids who lack those abilities, and fail to impress the next generation.
To discern who is a good conformist you do have to know something about the standards to which they conform. But if you want to associate with a conformist person, you can’t go too wrong by selecting people who are seen as conformist by their local culture. And if you mistakenly associate with someone who is less conformist than you thought, you may well suffer by being seen as non-conformist via your association with them.
Thus cultural variations in beliefs on dominance, prestige, or conformity are not huge obstacles to selecting and associating with people with desirable characteristics. That is to say, beliefs on such things tend to remain tied with strong personal incentives to important objective functional features of the world, ensuring they do not usually get very crazy.
Beliefs on goodness, however, are less tied to objective reality. Yes, beliefs on goodness can serve important functions for societies, encouraging people to do what benefits the society overall. The problem is that this isn’t functional in the same way for individuals. Each individual wants to seem to be good to others, to seem to praise others for being what is seen to be good, and to seem to approve when others praise others who seem to be good. But these are mostly pressures to go along with whatever the local culture says is good, not to push for a concept of good that will in fact benefit society.
Thus concepts of what makes someone good are less tied to a constant reality than are concepts of what makes someone dominant, conformist, or prestigious. There may be weak slow group selection pressures that encourage cultures to see people as good who help that culture overall, but those pressures are much weaker than the pressures that encourage accurate assessment of who is dominant, conformist, or prestigious.
I suspect that our minds are built to notice that our concepts of goodness are less tied to reality, and so give such concepts more slack on that account. I also suspect that our minds notice when other concepts are mainly tied to our concepts of goodness, and similarly give them more slack.
For example, if you notice that your culture thinks people who act like Jesus are good, you will pay close attention to how Jesus was said to act, so you can act like that. But once you notice that the concept of Jesus mainly shows up connected to concepts of goodness, and is not much connected to more practical concepts like how to not crash your car, you will not think as critically about claims on the life or times of Jesus. After all, it doesn’t really matter to you if those are or could be true; what matters are the “morals” of the story of Jesus.
Today, a similar lack of attention to consistency or detail is probably associated with many aspects of things that are seen as good somewhat separately from whether they are impressive or powerful. These may include what sorts of recycling or energy use are good for the planet, what sorts of policies are good for the nation, what sorts of music or art are good for your soul, and so on.
Since this analysis justifies a lot of skepticism about concepts of and related to goodness, I am drawn toward a very cautious skeptical attitude in constructing and using such concepts. I want to start with the concepts where there is the least reason to doubt calling them good and well connected to reality, and want to try to go as far as I can with such concepts before adding in other less reliable concepts of good. It seems to me that giving people what they want is just about the least controversial element of good I can find, and thankfully economic analysis goes a remarkably long way with just that concept.
This analysis also suggests that, when doing policy analysis, one should spend as much time as possible doing neutral positive analysis of what is likely to happen if one does nothing, before proceeding to normative analysis of what actions would be best. This should help minimize the biases from our tendency toward wishful and good-based crazy thinking.
A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow and doing the ‘right thing’ is about more than just doing what you want.
Whatever surveys say, my impression is that almost nobody acts as though they were actually realists. If you really believed that there were objective rules that we should follow, that would make it crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chances of stumbling on the right ones are very low.
Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!
Simple probing using questions well known to philosophers usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that any of the rules we ought to follow will be consistent with one another, this is a disaster and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.
A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility you are mistaken and in fact a different set of rules is correct. Unless you are extremely confident in the rules you consider most likely, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration. The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who weren’t alive now did in fact deserve moral consideration, that would still mean collapse prevented the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
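The expected-value arithmetic above is simple enough to sketch directly. A minimal sketch, using only the illustrative numbers from the paragraph (1 trillion future people, a 10% credence that future people count morally, 7 billion people alive today):

```python
# Sketch of the moral-uncertainty calculation above, using the
# post's illustrative figures (assumptions, not measured data).
future_people = 1_000_000_000_000    # a "modest" 1 trillion future (post-)humans
credence_they_count = 0.10           # 10% chance future people deserve moral consideration
people_alive_today = 7_000_000_000

# Expected number of future people whose existence a collapse would prevent.
expected_future_people = future_people * credence_they_count
print(expected_future_people)        # 100 billion, in 'expected value' terms

# Even at 10% credence, this dwarfs the present population.
print(expected_future_people / people_alive_today)
```

Even heavily discounted by moral uncertainty, the expected count exceeds the current world population by more than an order of magnitude, which is the post's point about hedging.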
This demonstrates that a moral realist with some doubt they have picked the right rules will want to a) hedge their bets, and b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty around matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.
Uncertainty about moral issues is scarcely a fringe concern because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.
I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.
I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options:
Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 for anyone who can change my mind on a) whether I ought to place a significant probability on moral realism being correct, or b) help me see that I seriously misunderstand what I subjectively value. Such insights would be a bargain!
In The Philosophical Quarterly, ethicist Peter Singer reviews Ethics for a Broken World: Imagining Philosophy After Catastrophe:
Tim Mulgan’s first clever idea was to ask how Western moral and political philosophy might look to people living fifty or a hundred years from now if, during the interim, the basic necessities for supporting life become much more difficult to obtain than they are now. Climate change is the obvious way in which this might happen. … Mulgan’s second clever idea was to present his answer to the question he had posed in the form of a series of transcripts of a class held in the broken world on the history of philosophy. …
The affluent world was, by the standards of the broken world, astonishingly wasteful. A favourite leisure activity, for instance, was ‘to drive extremely inefficient carbon-fuelled vehicles around in circles’. In those days, philosophers just ‘took it for granted that everyone can survive.’ … The lectures begin with Nozick, who is taken to represent, ‘in an exaggerated form, the preoccupations and presuppositions of his age.’ … How could an initial acquirer in a pre-affluent world ever know whether the institution of private property will affect future people for the better or for the worse? To a philosopher of the affluent age this might seem obvious, but to the class in the broken world, it does not. …
The idea that utilitarianism leads to extremely demanding obligations to help those in great need was counter-intuitive in the affluent world, but is not in the broken world. So too was the view that it would be wrong for a sheriff to hang one innocent person if that is the only way to save several innocent people from being killed by rioters. … Those same utilitarians who said that we have extremely demanding obligations to the poor could also have pointed out that we have extremely demanding obligations to those who will exist in future. … In the broken world, liberty is not as highly valued as it was in the affluent world. Broken world people regret that affluent people were free to join ‘cults’ that denied climate change. …
The final lecture poses a challenge to affluent democracy on the grounds that, since governments make decisions that affect future generations, no democracy really has the consent of the governed, or of a majority. (more)
Since I also forecast a non-affluent future, I am also interested in how the morals and politics of non-affluent descendants will differ from ours. But I find the above pretty laughable as futurism. As described in this review, this book presents the morality and politics of future folk as overwhelmingly focused on what their ancestors (us) should have been doing for them, namely lots more.
But we have known lots of poor cultures around the world and through history, and their morality and politics has almost never focused on complaining that their ancestors did too little to help them. Most politics and morality has instead been focused on how people alive who interact often should treat each other. Which makes a lot of functional sense.
Wars have consistently caused vast destruction of resources that could have gone to building roads, cities, canals, irrigation, etc. And most ancestors severely neglected innovation. Most everywhere in the globe, had ancestors prevented more wars and encouraged more innovation, their descendants would be richer. But almost no one complains about that today. Most discussion today of ancestors celebrates relative wins that suggest some of us are better than others of us, and laments our ancestors’ backwardness, so we can feel superior by comparison.
The morality of our non-affluent descendants will likely also focus mostly on how they should treat each other, not on how we treated them. To the extent that they talk about us at all, they’ll mostly mention wins that suggest that some of them are better than others of them, and ways in which we seem backward, making them seem forward by comparison. And morality will probably return to be more like that of traditional farmers, relative to that of us rich forager-feeling industrialists of today.
It is a standard truism that discussion of the future is mostly a veiled discussion of today, especially on who today should be criticized or celebrated. The book Ethics for a Broken World seems an especially transparent example of this trend. It is almost all about which of us to blame, and almost none about actual future folk.
Added 1:30p: Interestingly, in Christianity the main bad guy is Satan, who supposedly obeys God, but not Adam and Eve, who disobeyed. If there were ever ancestors who should be blamed it would be Adam and Eve, but oddly Christians almost never complain about them, preferring to save their harsh words for Satan.
Quick, what is the best gift you ever got from a woman? From your parents? From a left-handed person? From a teacher? These aren’t easy questions to answer. But they seem easier than these questions: What is the total value of all the gifts you ever got from women? From your parents? From left-handed folks? From teachers?
For the first set of questions you can try to think of examples of particular people in those categories, and then think of particular gifts you got from those particular people. That can help you guess at the best gift from those categories. But to estimate the total value of gifts from people in categories, you’ll have to also estimate how many gifts you ever got from folks in each category.
Note that it also seems easy to estimate the average value of gifts from each category. To do this, you need only remember a few gifts that fit each category, and then average their values.
As another example, imagine you are looking at a building entrance laid out in multi-colored tiles. Some tiles are blue, some red, some green, etc. You are looking at it from a distance, at an angle, in variable lighting. In this situation it will be much easier to estimate whether there is more blue than red area in the tiles, than to estimate how many square inches of blue tile area are in that entrance. This latter estimate requires you to additionally estimate distances to reference points, to estimate the total surface area.
These examples suggest that when we think in far mode, without a structured systematic representation of our topic, it is usually easier to average than to add values. So averaging is what we’ll tend to do. All of which I mention to introduce a fascinating paper that I just noticed, even though it got a lot of publicity last December:
This analysis introduces the Presenter’s Paradox. Robust findings in impression formation demonstrate that perceivers’ judgments show a weighted averaging pattern, which results in less favorable evaluations when mildly favorable information is added to highly favorable information. Across seven studies, we show that presenters do not anticipate this averaging pattern on the part of evaluators and instead design presentations that include all of the favorable information available. This additive strategy (“more is better”) hurts presenters in their perceivers’ eyes because mildly favorable information dilutes the impact of highly favorable information. For example, presenters choose to spend more money to make a product bundle look more costly, even though doing so actually cheapened its value from the evaluators’ perspective. (more)
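The add-versus-average gap in that abstract is easy to see in a toy calculation. A minimal sketch, with hypothetical item values (the numbers are mine, not the paper's):

```python
# Toy illustration of the Presenter's Paradox: adding a mildly
# favorable item raises the total but lowers the average impression.
# Item values are hypothetical scores on a 0-10 favorability scale.
strong_items = [9, 8]   # highly favorable features of a product bundle
mild_item = 4           # mildly favorable: above neutral, so the presenter includes it

with_mild = strong_items + [mild_item]

# Presenter's additive ("more is better") view: the bundle improves.
print(sum(strong_items), "->", sum(with_mild))          # total rises: 17 -> 21

# Evaluator's averaging view: the bundle is diluted.
print(sum(strong_items) / 2, "->", sum(with_mild) / 3)  # average falls: 8.5 -> 7.0
```

Any above-neutral item that falls below the current average produces this reversal, which is why "more is better" backfires for the presenter.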
The authors attribute this to a near-far effect:
Presenters face many pieces of potentially relevant information and need to determine, in a bottom-up fashion, which ones to include in a presentation. This presumably draws attention to each individual piece of information as a discrete entity and a focus on piecemeal processing. If a given piece of information exceeds a neutrality threshold, the presenter will conclude that it is compatible with the message he or she seeks to convey and will include it. This results in presentations that would fare better under an adding rather than averaging rule. In contrast, evaluators’ primary task is to make a summary judgment of the overall presentation, which fosters a focus on holistic processing and the big picture and results in an averaging pattern as observed in many impression formation studies.
Additional experiments confirm this near-far interpretation. Those who prepare presentations and proposals tend to focus on them in detail, and so add part values in near mode style, while those who consume such presentations or proposals tend to pay much less attention, and so average their values in far mode style.
This result seems to me quite pregnant with interesting implications, none of which were mentioned in the dozen blog posts on the subject that have appeared since last December. So I guess it’s up to me.
First, this result predicts the usual academic advice to delete publications from low ranked journals from your vita. Yes those extra publications took extra work, and show more total intellectual contribution, but distracted readers evaluate you by averaging your publications, not adding them.
Second, this also predicts that academia will tend in general to neglect conclusions suggested by lots of weak clues, relative to conclusions based on a single strong theory or empirical comparison. People with a practical understanding of particular areas will correctly complain that academics tend too much to latch on to a few easy to explain and justify arguments, at the cost of lots of detail that practitioners appreciate.
Third, this predicts that in morality and politics, which are especially far sorts of topics, arguments tend to be won by those who push simple strong principles, even though people privately tend to choose actions that deviate from such principles. For example, laws say no one can get medical advice from non-doctors, on the grounds that docs know best, but given a private choice most of us would often let other considerations convince us to listen to non-docs. While actions tend to be chosen in a near mode where lots of other weaker considerations get added, people know their best chance for winning an argument with a distracted audience is to focus on their one strongest point.
Fourth, this predicts Tetlock’s hedgehog vs. foxes result. Foreign policy is an especially far view sort of subject, and experts who focus on one strongest consideration get the most respect and attention, but experts who rely on many considerations, which are on average weaker, are more accurate.
Futurism is probably the most far view sort of topic, so I’d guess that all this holds there the most strongly. That is, while the futurists who get the most attention from distracted audiences are those who harp endlessly on one clear plausible idea, the most accurate futurists are probably those who know and use hundreds of clues, many of them weak. Alas this is a problem for those of us who want to consider some aspect of the future in detail, since we quickly run out of strong principles, and then have to rely more on many weak clues.
Added Nov 25, 2012: This post gives data showing people donate money based more on the average than the total sympathy of the recipients. So you are better off asking for donations to help a particular especially sympathetic recipient, than to help many such folks.
Back in ’09 I posted on the 2D map of values from the World Values Survey, and how nations are distributed in that 2D space. A related 2D space of values is detailed in this new JPSP paper. Apparently 19 different values fall naturally on a circle:
Here are more detailed descriptions of these values:
Of course since they are based on surveys, these are probably mostly about values as seen in a far-view.
Added 21Aug: The upper values on the circle are those celebrated more by richer societies like ours, relative to poorer societies like our farmer ancestors. (Foragers were more in the middle.) In older societies, the upper values are also more celebrated by the rich. The left-side, more-community-oriented values are also more common in the “East,” which I’ve suggested were centrally located places more often conquered by invaders. The more peripheral “West” tended more to emphasize right-side family and individual values.
Added 24 Aug: Far mode emphasizes the positive over the negative, and the social over the personal. So the upper left area of the circle holds the most far values, and the lower right the most near values. This also seems to map onto the (near) things that we actually want, and the (far) things we want others to think that we want.