Tag Archives: Morality

Testing Moral Progress

Mike Huemer just published his version of the familiar argument that changing moral views is evidence for moral realism. Here is the progress datum he seeks to explain:

Mainstream illiberal views of earlier centuries are shocking and absurd to modern readers. The trend is consistent across many issues: war, murder, slavery, democracy, women’s suffrage, racial segregation, torture, execution, colonization. It is difficult to think of any issue on which attitudes have moved in the other direction. This trend has been ongoing for millennia, accelerating in the last two centuries, and even the last 50 years, and it affects virtually every country on Earth. … All the changes are consistent with a certain coherent ethical standpoint. Furthermore, the change has been proceeding in the same direction for centuries, and the changes have affected nearly all societies across the globe. This is not a random walk.

Huemer’s favored explanation:

If there are objective ethical truths to which human beings have some epistemic access, then we should expect moral beliefs across societies to converge over time, if only very slowly.

But note three other implications of this moral-learning process, at least if we assume the usual (e.g., Bayesian) rational belief framework:

  1. The rate at which moral beliefs have been changing should track the rate at which we get relevant info, such as via life experience or careful thought. If we’ve seen a lot more change recently than thousands of years ago, we need a reason to think we’ve had a lot more thinking or experience lately.
  2. If people are at least crudely aware of the moral beliefs of others in the world, then they should be learning from each other much more than from their personal thoughts and experience. Thus moral learning should be a worldwide phenomenon; it might explain average world moral beliefs, but it can't explain much of the differences in belief at any one time.
  3. Rational learning of any expected value via a stream of info should produce a random walk in those expectations, not a steady trend; a minimal simulation below illustrates this. But as Huemer notes, what we mostly see lately are steady trends.
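To make point 3 concrete, here is a minimal simulation sketch (my illustration, not Huemer's) of a Bayesian agent tracking an unknown quantity from noisy signals. Its posterior expectation is a martingale, so the direction of each future update is unpredictable in advance:

```python
import random

# A Bayesian agent estimates an unknown proportion from noisy binary
# signals, using a Beta(alpha, beta) prior. The posterior mean is a
# martingale: given current beliefs, its expected next value equals
# its current value, so rational learning looks like a random walk,
# not a steady trend.

random.seed(0)
truth = 0.7          # the unknown quantity being learned
alpha, beta = 1, 1   # uniform Beta(1, 1) prior

path = []
for step in range(1000):
    signal = random.random() < truth      # noisy evidence
    if signal:
        alpha += 1
    else:
        beta += 1
    path.append(alpha / (alpha + beta))   # posterior expectation

# The realized path drifts toward `truth`, but at every step the sign
# of the next move is unpredictable to the agent -- unlike centuries
# of same-direction change.
print([round(x, 3) for x in path[::100]])
```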

For Age of Em, I read a lot about cultural value variation, and related factor analyses. One of the two main factors by which national values vary correlates strongly with average national wealth. At each point in time, richer nations have more of this factor, over time nations get more of it as they get richer, and when a nation has an unusual jump in wealth it gets an unusual jump in this factor. And this factor explains an awful lot of the value choices Huemer seeks to explain. All this even though people within a nation who hold these values more are not richer on average.
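For a sense of the shape of such analyses, here is a schematic sketch with invented data; the country count, survey items, and numbers are all illustrative assumptions, and the SVD step is just a simple stand-in for the factor-analysis methods actually used in this literature:

```python
import numpy as np

# Schematic of a cross-national value analysis, with made-up data:
# rows are countries, columns are standardized value-survey items.
# We extract a leading factor and correlate it with log income.

rng = np.random.default_rng(0)
n_countries, n_items = 80, 12

log_income = rng.normal(9.0, 1.0, n_countries)           # fake wealth levels
loadings = rng.uniform(0.4, 0.9, n_items)                # items load on one factor
values = np.outer(log_income - log_income.mean(), loadings)
values += rng.normal(0.0, 1.0, (n_countries, n_items))   # item-level noise
values = (values - values.mean(0)) / values.std(0)       # standardize items

u, s, vt = np.linalg.svd(values, full_matrices=False)
factor = u[:, 0] * s[0]   # leading factor score per country

# The sign of a factor is arbitrary, so report the absolute correlation.
r = np.corrcoef(factor, log_income)[0, 1]
print(f"|correlation| of leading value factor with log income: {abs(r):.2f}")
```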

The usual view in this field is that the direction of causation here is mostly from wealth to this value factor. This makes sense because this is the usual situation for variables that correlate with wealth. For example, if length of roads or number of TVs correlates with wealth, that is much more because wealth causes roads and TVs, and much less because roads and TVs cause wealth. Since wealth is the main "power" factor of a society, this main factor tends to cause other small things more than they cause it.

This is as close as Huemer gets to addressing this usual view:

Perhaps there is a gene that inclines one toward illiberal beliefs if one’s society as a whole is primitive and poor, but inclines one toward liberal beliefs if one’s society is advanced and prosperous. Again, it is unclear why such a gene would be especially advantageous, as compared with a gene that causes one to be liberal in all conditions, or illiberal in all conditions. Even if such a gene would be advantageous, there has not been sufficient opportunity for it to be selected, since for almost all of the history of the species, human beings have lived in poor, primitive societies.

Well if you insist on explaining things in terms of genes, everything is “unclear”; we just don’t have good full explanations to take us all the way from genes to how values vary with cultural context. I’ve suggested that we industry folks are reverting to forager values in many ways with increasing wealth, because wealth cuts the fear that made foragers into farmers. But you don’t have to buy my story to find it plausible that humans are just built so that their values vary as their society gets rich. (This change need not at all be adaptive in today’s environment.)

Note that we already see many variables that change between rich vs. poor societies, but which don't change the same way between rich and poor people within a society. For example, rich people in a society save more, but rich societies don't save more. Richer societies spend a larger fraction of income on medicine, but richer people spend a smaller fraction. And rich societies have much lower fertility, even though rich people within a society have about the same fertility as their poorer neighbors.

Also note that "convergence" is about variance of opinion; it isn't obvious to me that variance is lower now than it was thousands of years ago. What we've seen is change, not convergence.

Bottom line: the usual social science story, that increasing wealth causes certain predictable value changes, fits the value variation data a lot better than the theory that the world is slowly learning moral truth. Even if we accepted moral learning as explaining some of the variation, we'd still need wealth-causes-values to explain a lot of the rest of the variation. So why not let it explain all? Maybe someone can come up with variations on the moral learning theory that fit the data better. But at the moment, the choice isn't even close.


Am I A Moralist?

Imagine that a “musicalist” is someone who makes good and persuasive musical arguments. One might define this broadly, by saying that any act is musical if it influences the physical world so as to change the distribution of sound, as most sound has musical elements. Here anyone who makes good and persuasive arguments that influence physical acts is a good “musicalist.”

Or one might try to define "musicalist" more narrowly, by requiring that the acts argued for have an especially strong effect on the especially musical aspects of the physical world, and that musical concepts and premises often be central to the arguments. Far fewer people would be seen as good "musicalists" here.

The concept of “moralist” can also be defined broadly or narrowly. Defined broadly, a “moralist” might be anyone who makes good and persuasive arguments about acts for which anyone thinks moral considerations to be relevant. This could be because the acts influence morally-relevant outcomes, or because the acts are encouraged or discouraged by some moral rules.

Defined narrowly, however, one might require that the acts influenced have especially strong moral impacts, and that moral concepts and premises often be central to the arguments. Far fewer people are good "moralists" by this definition.

Bryan Caplan recently praised me as a “moralist”:

Robin … excels as a moralist – in three distinct ways.

Robin often constructs sound original moral arguments. His arguments against cuckoldry and for cryonics are just two that come to mind. Yes, part of his project is to understand why most people are forgiving of cuckoldry and hostile to cryonics. But the punchline is that the standard moral position on these issues is indefensible.

Second, Robin’s moral arguments actually persuade people.  I’ve met many of his acolytes in person, and see vastly more online.  This doesn’t mean, of course, that Robin’s moral arguments persuade most readers.  Any moral philosopher will tell you that changing minds is like pulling teeth.  My point is that Robin has probably changed the moral convictions of hundreds.  And that’s hundreds more than most moralists have changed.

Third, Robin takes some classical virtues far beyond the point of prudence.  Consider his legendary candor.

I accept (and am grateful for) Bryan's praise relative to a broad interpretation of "moralist." Yes, I try to create good and persuasive arguments on many topics relevant to actions, and according to many concepts of morality, most acts have substantial moral impact. Since moral considerations are so ubiquitous, most anyone who is a good arguer must also be a good moralist.

But what if we define “moralist” narrowly, so that the acts must be unusually potent morally, and the concepts and premises invoked must be explicitly moral ones? In this case, I don’t see that I qualify, since I don’t focus much on especially moral concepts, premises, rules, or consequences.

Bryan gave two examples, and his readers gave two more. Here are quick summaries:

  • I argue that cryonics might work, that it only needs a >~5% chance of working to make sense (see the sketch after this list), and that your wanting to do it triggers abandonment feelings in others exactly because they think you think it might work.
  • I argue that with simple precautions betting on terror acts won’t cause terror acts, but could help to predict and prevent such attacks.
  • I argue that the kinds of inequality we talk most about are only a small fraction of all inequality, but we talk about them most because they can justify us grabbing stuff that is more easily grabbed.
  • I argue that cuckoldry (which results in kids) causes many men great emotional and preference harm, plausibly comparable to the harm women get from being raped.
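Here is the back-of-the-envelope version of the cryonics threshold promised above; the dollar figures are illustrative assumptions, not numbers from the original argument:

```python
# Illustrative break-even calculation for the cryonics bullet above,
# with made-up numbers: signing up makes sense when
#   p_works * value_of_revival > cost,
# so solve for the break-even probability.

cost = 100_000                 # assumed total cost of cryonics
value_of_revival = 3_000_000   # assumed value placed on being revived

breakeven = cost / value_of_revival
print(f"break-even chance of working: {breakeven:.1%}")   # ~3.3%

# With assumptions in this ballpark, even a roughly 5% chance of
# working clears the bar, which is the shape of the threshold argument.
```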

I agree that these arguments address actions about which many people have moral feelings. But I don’t see myself as focused on moral concepts or premises; I see my discussions as focused on other issues.

Yes, most people have moral wants. These aren't all or even most of what people want, but moral considerations do influence what people (including me) want. Yes, these moral wants are relevant for many acts. But people disagree about the weight and even direction that moral considerations push on many of these acts, and I don't see myself as especially good at or interested in taking sides in arguments about such weights and directions. I instead mostly seek other simple robust considerations to influence beliefs and wants about acts.

Bryan seems to think that my being a good moralist by his lights argues against my “dealism” focus on identifying social policies that can get most everyone more of what they want, instead of taking sides in defined moral battles, wherein opposing sides make conflicting and often uncompromising demands. It seems to me that I in fact do work better by not aligning myself clearly with particular sides of established tug-o-wars, but instead seeking considerations that can appeal broadly to people on both sides of existing conflicts.


Who/What Should Get Votes?

Alex T. asks Should the Future Get a Vote? He dislikes suggestions to give more votes to “civic organizations” who claim to represent future folks, since prediction markets could be more trustworthy:

Through a suitable choice of what is to be traded, prediction markets can be designed to be credibly motivated by a variety of goals including the interests of future generations. … If all we cared about was future GDP, a good rule would be to pass a policy if prediction markets estimate that future GDP will be higher with the policy than without the policy. Of course, we care about more than future GDP; perhaps we also care about environmental quality, risk, inequality, liberty and so forth. What Hanson’s futarchy proposes is to incorporate all these ideas into a weighted measure of welfare. … Note, however, that even this assumes that we know what people in the future will care about. Here then is the final meta-twist. We can also incorporate into our measure of welfare predictions of how future generations will define welfare. (more)

For example, we could implement a 2% discount rate by having official welfare be 2% times welfare this next year plus 98% times welfare however it will be defined a year from now. Applied recursively, this can let future folks keep changing their minds about what they care about, even future discount rates.
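Here is a minimal sketch of that recursion, with stand-in numbers; anchoring it at a finite horizon is my simplification:

```python
# Sketch of the recursive welfare definition above:
#   W[t] = 0.02 * w[t] + 0.98 * W[t+1]
# where w[t] is whatever welfare measure year t's people choose, and
# the 2% plays the role of a discount rate. With a finite horizon we
# anchor the recursion at the final year's measure.

def official_welfare(yearly_measures, weight=0.02):
    """Fold per-year welfare measures (each chosen by that year's
    people) into one official number, working back from the horizon."""
    w = yearly_measures[-1]                  # horizon anchor
    for m in reversed(yearly_measures[:-1]):
        w = weight * m + (1 - weight) * w
    return w

# Stand-in numbers: later years redefine welfare however they like.
print(official_welfare([1.0, 1.1, 0.9, 1.3, 1.2]))
```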

We could also give votes to people in the past. While one can’t change the experiences of past folks, one can still satisfy their preferences. If past folks expressed particular preferences regarding future outcomes, those preferences could also be given weight in an overall welfare definition.

We could even give votes to animals. One way is to make some assumptions about what outcomes animals seem to care about, pick ways to measure such outcomes, and then include weights on those measures in the welfare definition. Another way is to assume that eventually we’ll “uplift” such animals so that they can talk to us, and put weights on what those uplifted animals will eventually say about the outcomes their ancestors cared about.

We might even put weights on aliens, or on angels. We might just put a weight on what they say about what they want, if they ever show up to tell us. If they never show up, those weights stay set at zero.

Of course just because we could give votes to future folks, past folks, animals, aliens, and angels doesn’t mean we will ever want to do so.


Moral Legacy Myths

Imagine that you decide that this week you’ll go to a different doctor from your usual one. Or that you’ll get a haircut from a different hairdresser. Ask yourself: by how much do you expect such actions to influence the distant future of all our descendants? Probably not much. As I argued recently, we should expect most random actions to have very little long term influence.

Now imagine that you visibly take a stand on a big moral question involving a recognizable large group. Like arguing against race-based slavery. Or defending the Muslim concept of marriage. Or refusing to eat animals. Imagine yourself taking a personal action to demonstrate your commitment to this moral stand. Now ask yourself: by how much do you expect these actions to influence distant descendants?

I’d guess that even if you think such moral actions will have only a small fractional influence on the future world, you expect them to have a much larger long term influence than doctor or haircut actions. Furthermore, I’d guess that you are much more willing to credit the big-group moral actions of folks centuries ago for influencing our world today, than you are willing to credit people who made different choices of doctors or hairdressers centuries ago.

But is this correct? When I put my social-science thinking cap on, I can’t find good reasons to expect big-group moral actions to have much stronger long term influence. For example, you might posit that moral opinions are more stable than other opinions and hence last longer. But more stable things should be harder to change by any one action, leaving the average influence about the same.
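A toy model (my illustration, not in the original argument) makes the offsetting effects explicit:

```python
# Toy model of the stability argument above: one action flips an
# opinion with probability p, and a flipped opinion persists for an
# expected duration T. Suppose stability s makes opinions s times
# harder to flip but makes flips last s times longer. Expected
# long-run influence p * T is then unchanged by s.

def expected_influence(p, T, stability):
    p_eff = p / stability   # stabler opinions are harder to budge...
    T_eff = T * stability   # ...but changes to them last longer
    return p_eff * T_eff

for s in (1, 2, 10):
    print(s, expected_influence(0.01, 50, s))   # same product each time
```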

I can, however, think of a good reason to expect people to expect this difference: near-far (a.k.a. construal level) theory. Acts based on basic principles seem more far than acts based on practical considerations. Acts identified with big groups seem more far than acts identified with small groups. And longer-term influence is also more strongly associated with a far view.

So I tentatively lean toward concluding that this expectation of long term influence from big-group moral actions is mostly wishful thinking. Today's distribution of moral actions and the relations between large groups mostly result from a complex equilibrium of people today, where random disturbances away from that equilibrium are usually quickly washed away. Yes, sometimes there will be tipping points, but those should be rare, as usual, and each of us can expect to have only a small fractional influence on such things.


Rejection Via Advice

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. So we try to hide this motive, and to pretend that other motives dominate our choices of associates.

This would be easier to do if status were very stable. Then we could take our time setting up plausible excuses for wanting to associate with particular high status folks, and for rejecting association bids by particular low status folks. But in fact status fluctuates, which can force us to act quickly. We want to quickly associate more with folks who rise in status, and to quickly associate less with those who fall in status. But the coincidence in time between their status change and our association change may make our status motives obvious.

Since association seems a good thing in general, trying to associate with anyone seems a “nice” act, requiring fewer excuses. In contrast, weakening an existing association seems less nice. So we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

If different associates offer different advice, then this person with fallen status simply must fail to follow most of that advice. Which then gives all those folks whose advice was not followed an excuse to distance themselves from this failure. And those whose advice was followed, well at least they get the status mark of power – a credible claim that they have influence over others. Either way, the falling status person loses even more status.

Unless of course the advice followed is actually useful. But what are the chances of that?

Added 27Dec: A similar strategy would be useful if your status were to rise, and you wanted to drop associates in order to make room for more higher status associates.


The ‘What If Failure?’ Taboo

Last night I heard a group of smart pundits and wonks discuss Tyler Cowen's new book Average Is Over. This book is a sequel to his last, The Great Stagnation, where he argued that wage inequality has greatly increased in rich nations over the last forty years, and especially in the last fifteen years. In this new book, Tyler says this trend will continue for the next twenty years, and offers practical advice on how to personally navigate this new world.

Now while I've criticized Tyler for overemphasizing automation as a cause of this increased wage inequality, I agree that most of the trends he discusses are real, and most of his practical advice is sound. But I can also see reasonable grounds to dispute this, and I expected the pundits/wonks to join me in debating that. So I was surprised to see the discussion focus overwhelmingly on whether this increased inequality was acceptable. Didn't Tyler understand that losers might be unhappy, and push the political system toward redistribution and instability?

Tyler quite reasonably said yes this change might not be good overall, and yes there might well be more redistribution, but it wouldn't change the overall inequality much. He pointed out that most losers might be pretty happy with new ways to enjoy more free time, that our last peak of instability was in the '60s, when inequality was at a minimum, that since we have mostly accepted increased inequality for forty years it is reasonable to expect that to continue for another twenty, and that over history inequality has had only a weak correlation with redistribution and instability.

None of which seemed to dent the pundit/wonk mood. They seemed to hold fast to a simple moral principle: when a future change is framed as a problem we might hope our political system will solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I think we see something similar with other trends framed as negatives, like global warming, bigger orgs, or increased regulation. Once such a trend is framed as an official bad thing which public policy might conceivably reduce, it becomes (mildly) taboo to seem to just accept the change and analyze how to deal with its consequences.

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.


Are War Critics Selfish?

The Americanization of Emily (1964) starred James Garner (as Charlie) and Julie Andrews (as Emily), both of whom called it their favorite movie. Be warned; I give spoilers in this post.


Imagine Farmer Rights

Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.

One possible counterargument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come by limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?

To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:

  1. Work in the open with fresh air and sun.
  2. See how all food is grown and prepared.
  3. Nights outside are usually quiet and dark.
  4. Quickly get to a mile-long all-nature walk.
  5. All one meets are folks one knows, or folks known by them.
  6. Easily take apart devices, to see materials, mechanisms.
  7. Authorities with clear answers on cosmology, morality.
  8. Severe punishment of heretics who contradict authorities.
  9. Prior generations quickly make room for new generations.
  10. Rule by a king of our ethnicity, with clear inheritance.
  11. Visible deference from nearby authority-declared inferiors.
  12. More?

Would our lives today be better or worse because of such rights?

Added: I expect to hear this response:

Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.


Civilization Vs. Human Desire

A few years ago I posted on Kevin Kelly on the Unabomber:

The Unabomber's manifesto … succinctly states … the view … that the greatest problems in the world are due not to individual inventions but to the entire self-supporting system of technology itself. … The technium also contains power to harm itself; because it is no longer regulated by either nature or humans, it could accelerate so fast as to extinguish itself. …

But … the Unabomber is wrong to want to exterminate it … [because] the machine of civilization offers us more actual freedoms than the alternative. … We willingly choose technology with its great defects and obvious detriments, because we unconsciously calculate its virtues. … After we've weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. (more)

Lately I've been reading Against Civilization, on "the dehumanizing core of modern civilization," and have been struck by the strength and universality of its passions; I agree with much of what its authors say. Yes, we humans pay huge costs because we were built for a different world than this one. Yes, we see gains, but mostly because we are culturally plastic – we let our culture tell us what we want and like, and thus what to do.

And yes, contrary to Kelly, we mostly do not choose how civilization changes, nor would we pick the changes that do happen if we could. As I reported a week ago, our usual main criterion in verbal evaluations of distant futures is whether future folks will be caring and moral, and since moral standards change, most people would rate future morals as low. Also, high interest rates show that we try hard to transfer resources from the future to ourselves. And if we could, we'd also probably make future folks remember and honor us more, and not forget our favorite art, music, stories, etc.

So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.

And yet, it is our unique willingness and ability to let our civilization change and be selected by forces out of our control, and then to let it tell us that we like it, that has let our species dominate the Earth, and gives us a good chance to dominate the galaxy and more. While our descendants may be somewhat less happy than us, or than our distant ancestors, there may be trillions of trillions or more of them. I more fear a serious attempt by overall humanity to coordinate to dictate its future than I fear this out-of-control process.

By my lights, things would probably have gone badly had our ancestors chosen their collective futures, and I doubt things have changed much lately. Yes, our descendants may not share today’s moral sense, or remember us and our art as much as most of us might like. But they will want something, often get it, and there may be so so many of them. And that could be so very good, by my lights.

So I say let us venture on, out of control, into the great and perhaps terrible civilization that we may become. Yes, it might be even better if a few forward looking elites could at least steer civilization modestly away from total destruction. But I fear that once substantial steering-abilities exist, they may not stay modest.


What About The Future Matters?

The future of 2050 might be different in many ways if, for example, climate change were mitigated, abortion laws relaxed, marijuana legalized, or the power of different religious groups changed. Which of the following types of differences matter most to you? To most people?

  • Dysfunction: murder, serious assault, disease, poverty, gender inequality, rape, homelessness, suicide, prostitution, corruption, burglary, fear of crime, forced immigration, gangs, terrorism, global warming.
  • Development: technological innovation, scientific progress, major scientific discoveries, volunteering, social welfare organizations, community groups, education standards, science education.
  • Warmth: warm, caring, considerate, insensitive, unfriendly, unsympathetic.
  • Morality: honest, trustworthy, sincere, immoral, deceitful, unfaithful.
  • Competence: capable, assertive, competent, independent, disorganized, lazy, unskilled.
  • Conservation: respect for tradition, self-discipline, obedience, social order, being moderate, national security, family security, being humble.
  • Self-transcendence: honesty, social justice, equality, helpful, protect environment, meaning in life.
  • Openness to change: independence, exciting life, enjoying life, freedom, a varied life, being daring, creativity.
  • Self-enhancement: social power, being successful, ambition, pleasure, wealth, social recognition.

In fact, most people can hardly be bothered to care about the distant future world as a whole, and to the extent they do care, a recent study suggests that the main thing they care about from the above list is how warm and moral future folks will be. That is, people hardly care at all about future poverty, freedom, suicide, terrorism, crime, homelessness, disease, skills, laziness, or sci/tech progress. They care a bit more about self-enhancement (e.g., success, pleasure, wealth). But mostly they care about benevolence (warmth & morality, e.g., honesty, sincerity, caring, and friendliness).

Now this study only looked at eight future changes, half of them religious, and I’m not that happy with the way they did their statistics. So there’s a slim hope better studies will get different results. But overall this is pretty sad; like us, future folks will actually care about many more things than their benevolence, and so they may well lament our priorities in helping them.

This result is what one should expect if people think about the far future in a very far mode, and if the main distinct function of far views is to make good social impressions. To the extent they have any opinions about the distant future, people focus overwhelmingly on showing their support for standard social norms of good behavior. They reassure their associates of their support for good norms by showing them that making people nicer according to such norms is the main thing they care about regarding the distant future.

