Who/What Should Get Votes?

Alex T. asks, “Should the Future Get a Vote?” He dislikes suggestions to give more votes to “civic organizations” that claim to represent future folks, since prediction markets could be more trustworthy:

Through a suitable choice of what is to be traded, prediction markets can be designed to be credibly motivated by a variety of goals including the interests of future generations. … If all we cared about was future GDP, a good rule would be to pass a policy if prediction markets estimate that future GDP will be higher with the policy than without the policy. Of course, we care about more than future GDP; perhaps we also care about environmental quality, risk, inequality, liberty and so forth. What Hanson’s futarchy proposes is to incorporate all these ideas into a weighted measure of welfare. … Note, however, that even this assumes that we know what people in the future will care about. Here then is the final meta-twist. We can also incorporate into our measure of welfare predictions of how future generations will define welfare. (more)

For example, we could implement a 2% discount rate by having official welfare be 2% times welfare this next year plus 98% times welfare however it will be defined a year from now. Applied recursively, this can let future folks keep changing their minds about what they care about, even future discount rates.
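To make this recursion concrete, here is a minimal sketch in Python. All numbers and names are invented for illustration (they are not from the post), and “welfare” is reduced to a single yearly score:

```python
# A toy version of the recursive welfare measure and the futarchy-style
# decision rule. All numbers here are made up for illustration.

def official_welfare(yearly_welfare, this_year_weight=0.02):
    """Official welfare = 2% times welfare under this year's definition,
    plus 98% times official welfare as it will be defined a year from
    now, applied recursively over a sequence of yearly scores."""
    if len(yearly_welfare) == 1:
        return yearly_welfare[0]
    return (this_year_weight * yearly_welfare[0]
            + (1 - this_year_weight) * official_welfare(yearly_welfare[1:]))

# Decision rule: adopt the policy if markets estimate higher official
# welfare with the policy than without it.
with_policy = [100, 103, 107, 114]     # market-estimated yearly welfare
without_policy = [100, 102, 105, 109]

print(official_welfare(with_policy) > official_welfare(without_policy))  # True
```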

We could also give votes to people in the past. While one can’t change the experiences of past folks, one can still satisfy their preferences. If past folks expressed particular preferences regarding future outcomes, those preferences could also be given weight in an overall welfare definition.

We could even give votes to animals. One way is to make some assumptions about what outcomes animals seem to care about, pick ways to measure such outcomes, and then include weights on those measures in the welfare definition. Another way is to assume that eventually we’ll “uplift” such animals so that they can talk to us, and put weights on what those uplifted animals will eventually say about the outcomes their ancestors cared about.

We might even put weights on aliens, or on angels. We might just put a weight on what they say about what they want, if they ever show up to tell us. If they never show up, those weights stay set at zero.
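One way to picture the overall welfare definition these paragraphs describe is as a weighted sum over constituencies, with some weights held at zero unless certain parties ever show up. A hypothetical sketch, with all weights and scores made up:

```python
# Hypothetical weighted-welfare aggregation across constituencies.
# All weights and scores below are invented for illustration.

scores = {
    "future_folks": 113.4,  # e.g., from a recursive measure like the one above
    "past_folks":    90.0,  # how well stated past preferences are satisfied
    "animals":       75.0,  # proxy measures, or later testimony of uplifted animals
    "aliens":         0.0,  # unmeasured unless they ever show up to tell us
}

weights = {
    "future_folks": 0.80,
    "past_folks":   0.15,
    "animals":      0.05,
    "aliens":       0.00,   # stays zero until aliens (or angels) appear
}

overall_welfare = sum(weights[c] * scores[c] for c in scores)
print(overall_welfare)  # 107.97 under these made-up numbers
```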

Of course just because we could give votes to future folks, past folks, animals, aliens, and angels doesn’t mean we will ever want to do so.


Moral Legacy Myths

Imagine that you decide that this week you’ll go to a different doctor from your usual one. Or that you’ll get a haircut from a different hairdresser. Ask yourself: by how much do you expect such actions to influence the distant future of all our descendants? Probably not much. As I argued recently, we should expect most random actions to have very little long term influence.

Now imagine that you visibly take a stand on a big moral question involving a recognizable large group. Like arguing against race-based slavery. Or defending the Muslim concept of marriage. Or refusing to eat animals. Imagine yourself taking a personal action to demonstrate your commitment to this moral stand. Now ask yourself: by how much do you expect these actions to influence distant descendants?

I’d guess that even if you think such moral actions will have only a small fractional influence on the future world, you expect them to have a much larger long term influence than doctor or haircut actions. Furthermore, I’d guess that you are much more willing to credit the big-group moral actions of folks centuries ago for influencing our world today, than you are willing to credit people who made different choices of doctors or hairdressers centuries ago.

But is this correct? When I put my social-science thinking cap on, I can’t find good reasons to expect big-group moral actions to have much stronger long term influence. For example, you might posit that moral opinions are more stable than other opinions and hence last longer. But more stable things should be harder to change by any one action, leaving the average influence about the same.
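A toy expected-value model (my construction, not from the post) makes the point: if an opinion is ten times more stable, a successful push lasts ten times longer, but any one push is ten times less likely to succeed, so expected influence stays the same:

```python
# Toy model of the stability argument (illustrative numbers only):
# greater stability scales persistence up and shift probability down
# by the same factor, leaving expected influence unchanged.

def expected_influence(shift_prob, persistence_years, shift_size=1.0):
    return shift_prob * persistence_years * shift_size

ordinary = expected_influence(shift_prob=0.010, persistence_years=10)
stable   = expected_influence(shift_prob=0.001, persistence_years=100)
print(ordinary, stable)  # 0.1 0.1 -- the same average influence
```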

I can, however, think of a good reason to expect people to expect this difference: near-far (a.k.a. construal level) theory. Acts based on basic principles seem more far than acts based on practical considerations. Acts identified with big groups seem more far than acts identified with small groups. And longer-term influence is also more strongly associated with a far view.

So I tentatively lean toward concluding that this expectation of long term influence from big-group moral actions is mostly wishful thinking. Today’s distribution of moral actions and the relations between large groups mostly result from a complex equilibrium of people today, where random disturbances away from that equilibrium are usually quickly washed away. Yes, sometimes there will be tipping points, but those should be rare, as usual, and each of us can only expect to have a small fractional influence on such things.


Rejection Via Advice

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. So we try to hide this motive, and to pretend that other motives dominate our choices of associates.

This would be easier to do if status were very stable. Then we could take our time setting up plausible excuses for wanting to associate with particular high status folks, and for rejecting association bids by particular low status folks. But in fact status fluctuates, which can force us to act quickly. We want to quickly associate more with folks who rise in status, and to quickly associate less with those who fall in status. But the coincidence in time between their status change and our association change may make our status motives obvious.

Since association seems a good thing in general, trying to associate with anyone seems a “nice” act, requiring fewer excuses. In contrast, weakening an existing association seems less nice. So we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

If different associates offer different advice, then this person with fallen status simply must fail to follow most of that advice. Which then gives all those folks whose advice was not followed an excuse to distance themselves from this failure. And those whose advice was followed, well, at least they get the status mark of power – a credible claim that they have influence over others. Either way, the falling status person loses even more status.

Unless of course the advice followed is actually useful. But what are the chances of that?

Added 27Dec: A similar strategy would be useful if your status were to rise, and you wanted to drop associates in order to make room for more higher status associates.


The ‘What If Failure?’ Taboo

Last night I heard a group of smart pundits and wonks discuss Tyler Cowen’s new book Average Is Over. This book is a sequel to his last, The Great Stagnation, where he argued that wage inequality has greatly increased in rich nations over the last forty years, and especially in the last fifteen years. In this new book, Tyler says this trend will continue for the next twenty years, and offers practical advice on how to personally navigate this new world.

Now while I’ve criticized Tyler for overemphasizing automation as a cause of this increased wage inequality, I agree that most of the trends he discusses are real, and most of his practical advice is sound. But I can also see reasonable grounds to dispute this, and I expected the pundits/wonks to join me in debating that. So I was surprised to see the discussion focus overwhelmingly on whether this increased inequality was acceptable. Didn’t Tyler understand that losers might be unhappy, and push the political system toward redistribution and instability?

Tyler quite reasonably said yes this change might not be good overall, and yes there might well be more redistribution, but it wouldn’t change the overall inequality much. He pointed out that most losers might be pretty happy with new ways to enjoy more free time, that our last peak of instability was in the ’60s when inequality was at a minimum, that since we have mostly accepted increased inequality for forty years it is reasonable to expect that to continue for another twenty, and that over history inequality has had only a weak correlation with redistribution and instability.

None of which seemed to dent the pundit/wonk mood. They seemed to hold fast to a simple moral principle: when a future change is framed as a problem that we might hope our political system will solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I think we see something similar with other trends framed as negatives, like global warming, bigger orgs, or increased regulation. Once such a trend is framed as an official bad thing which public policy might conceivably reduce, it becomes (mildly) taboo to seem to just accept the change and analyze how to deal with its consequences.

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.


Are War Critics Selfish?

The Americanization of Emily (1964) starred James Garner (as Charlie) and Julie Andrews (as Emily), both of whom call it their favorite movie. Be warned; I give spoilers in this post.


Imagine Farmer Rights

Yesterday I criticized proposals by George Dvorsky and Anders Sandberg to give rights to ems by saying that random rights are bad. That is, rights limit options, which is usually bad, so those who argue for specific rights should offer specific reasons why the rights they propose are exceptional cases where limiting options helps strategically. I illustrated this principle with the example of a diner’s bill of rights.

One possible counterargument is that these proposed em rights are not random; they tend to ensure ems can keep having stuff most of us now have and like. I agree that their proposals do fit this pattern. But the issue is whether rights are random with respect to the set of cases where strategic gains come from limiting options. Do we have reasons to think that strategic benefits tend to come from giving ems the right to preserve industry era lifestyle features?

To help us think about this, I suggest we consider whether we industry era folks would benefit had farmer era folks imposed farmer rights, i.e., rights to ensure that industry era folks could keep things most farmers had and liked. For example, imagine we today had “farmer rights” to:

  1. Work in the open with fresh air and sun.
  2. See how all food is grown and prepared.
  3. Nights outside are usually quiet and dark.
  4. Quickly get to a mile-long all-nature walk.
  5. All one meets are folks one knows, or folks known by them.
  6. Easily take apart devices, to see materials, mechanisms.
  7. Authorities with clear answers on cosmology, morality.
  8. Severe punishment of heretics who contradict authorities.
  9. Prior generations quickly make room for new generations.
  10. Rule by a king of our ethnicity, with clear inheritance.
  11. Visible deference from nearby authority-declared inferiors.
  12. More?

Would our lives today be better or worse because of such rights?

Added: I expect to hear this response:

Farmer era folks were wrong about what lifestyles help humans flourish, while we industry era folks are right. This is why their rights would have been bad for us, but our rights would be good for ems.


Civilization Vs. Human Desire

A few years ago I posted on Kevin Kelly on the Unabomber:

The Unabomber’s manifesto … succinctly states … the view … that the greatest problems in the world are due not to individual inventions but to the entire self-supporting system of technology itself. … The technium also contains power to harm itself; because it is no longer regulated by either nature or humans, it could accelerate so fast as to extinguish itself. …

But … the Unabomber is wrong to want to exterminate it … [because] the machine of civilization offers us more actual freedoms than the alternative. … We willingly choose technology with its great defects and obvious detriments, because we unconsciously calculate its virtues. … After we’ve weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. (more)

Lately I’ve been reading Against Civilization, on “the dehumanizing core of modern civilization,” and have been struck by the strength and universality of its authors’ passions; I agree with much of what they say. Yes, we humans pay huge costs because we were built for a different world than this one. Yes, we see gains, but mostly because we are culturally plastic – we let our culture tell us what we want and like, and thus what to do.

And yes, contrary to Kelly, we mostly do not choose how civilization changes, nor would we pick the changes that do happen if we could. As I reported a week ago, our usual main criterion in verbal evaluations of distant futures is whether future folks will be caring and moral, and since moral standards change, most would usually rate future morals as low. Also, high interest rates show that we try hard to transfer resources from the future to ourselves. And if we could, we’d also probably make future folks remember and honor us more, and not forget our favorite art, music, stories, etc.

So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.

And yet, it is our unique willingness and ability to let our civilization change and be selected by forces out of our control, and then tell us that we like it, that has let our species dominate the Earth, and gives us a good chance to dominate the galaxy and more. While our descendants may be somewhat less happy than us, or than our distant ancestors, there may be trillions of trillions or more of them. I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.

By my lights, things would probably have gone badly had our ancestors chosen their collective futures, and I doubt things have changed much lately. Yes, our descendants may not share today’s moral sense, or remember us and our art as much as most of us might like. But they will want something, often get it, and there may be so so many of them. And that could be so very good, by my lights.

So I say let us venture on, out of control, into the great and perhaps terrible civilization that we may become. Yes, it might be even better if a few forward looking elites could at least steer civilization modestly away from total destruction. But I fear that once substantial steering abilities exist, they may not stay modest.


What About The Future Matters?

The future of 2050 might be different in many ways if, for example, climate change were mitigated, abortion laws relaxed, marijuana legalized, or the power of different religious groups changed. Which of the following types of differences matter most to you? To most people?

  • Dysfunction: murder, serious assault, disease, poverty, gender inequality, rape, homelessness, suicide, prostitution, corruption, burglary, fear of crime, forced immigration, gangs, terrorism, global warming.
  • Development: technological innovation, scientific progress, major scientific discoveries, volunteering, social welfare organizations, community groups, education standards, science education.
  • Warmth: warm, caring, considerate, insensitive, unfriendly, unsympathetic.
  • Morality: honest, trustworthy, sincere, immoral, deceitful, unfaithful.
  • Competence: capable, assertive, competent, independent, disorganized, lazy, unskilled.
  • Conservation: respect for tradition, self-discipline, obedience, social order, being moderate, national security, family security, being humble.
  • Self-transcendence: honesty, social justice, equality, helpful, protect environment, meaning in life.
  • Openness to change: independence, exciting life, enjoying life, freedom, a varied life, being daring, creativity.
  • Self-enhancement: social power, being successful, ambition, pleasure, wealth, social recognition.

In fact, most people can hardly be bothered to care about the distant future world as a whole, and to the extent they do care, a recent study suggests that the main thing they care about from the above list is how warm and moral future folks will be. That is, people hardly care at all about future poverty, freedom, suicide, terrorism, crime, homelessness, disease, skills, laziness, or sci/tech progress. They care a bit more about self-enhancement (e.g., success, pleasure, wealth). But mostly they care about benevolence (warmth & morality, e.g., honesty, sincerity, caring, and friendliness).

Now this study only looked at eight future changes, half of them religious, and I’m not that happy with the way they did their statistics. So there’s a slim hope better studies will get different results. But overall this is pretty sad; like us, future folks will actually care about many more things than their benevolence, and so they may well lament our priorities in helping them.

This result is what one should expect if people think about the far future in a very far mode, and if the main distinct function of far views is to make good social impressions. To the extent they have any opinions about the distant future, people focus overwhelmingly on showing their support for standard social norms of good behavior. They reassure their associates of their support for good norms by showing them that making people nicer according to such norms is the main thing they care about regarding the distant future.



Why Good Is Crazy

My last post reminded me that the craziest beliefs ordinary folks endorse with a straight face are religious dogmas. And that seems an important clue to what situations break our minds. But to interpret this clue well, we need a sense for what is the key thing that “religions” have in common. My last post suggested a hypothesis to me: compared to beliefs on who is dominant, impressive, or conformist, beliefs on who is “good” are the least connected to a constant reality. They and associated beliefs can thus be the most crazy.

Dominance is mostly about power via raw physical force and physical or legal resources. So it is relatively easy to discern, and we have strong incentives to avoid mistakes about it. And while prestige varies greatly by culture, the elements of prestige tend to be commonly impressive features. For example, the most popular sports vary by culture, but most sports show off a similar set of physical abilities. The most popular music genre varies by culture, but most music draws on a common set of musical abilities.

So while beliefs about the best sport or music may vary by culture, for the purpose of picking good mates or allies you can’t go too wrong by being impressed by whomever impresses folks from other cultures, and you have incentives not to make mistakes. For example, if you are mistakenly impressed by and mate with someone without real sport or music abilities, you may end up with kids who lack those abilities, and fail to impress the next generation.

To discern who is a good conformist you do have to know something about the standards to which they conform. But if you want to associate with a conformist person, you can’t go too wrong by selecting people who are seen as conformist by their local culture. And if you mistakenly associate with someone who is less conformist than you thought, you may well suffer by being seen as non-conformist via your association with them.

Thus cultural variations in beliefs on dominance, prestige, or conformity are not huge obstacles to selecting and associating with people with desirable characteristics. That is to say, beliefs on such things tend to remain tied with strong personal incentives to important objective functional features of the world, ensuring they do not usually get very crazy.

Beliefs on goodness, however, are less tied to objective reality. Yes, beliefs on goodness can serve important functions for societies, encouraging people to do what benefits the society overall. The problem is that this isn’t functional in the same way for individuals. Each individual wants to seem to be good to others, to seem to praise others for being what is seen to be good, and to seem to approve when others praise others who seem to be good. But these are mostly pressures to go along with whatever the local cultures says is good, not to push for a concept of good that will in fact benefit society.

Thus concepts of what makes someone good are less tied to a constant reality than are concepts of what makes someone dominant, conformist, or prestigious. There may be weak slow group selection pressures that encourage cultures to see people as good who help that culture overall, but those pressures are much weaker than the pressures that encourage accurate assessment of who is dominant, conformist, or prestigious.

I suspect that our minds are built to notice that our concepts of goodness are less tied to reality, and so give such concepts more slack on that account. I also suspect that our minds notice when other concepts are mainly tied to our concepts of goodness, and similarly give them more slack.

For example, if you notice that your culture thinks people who act like Jesus are good, you will pay close attention to how Jesus was said to act, so you can act like that. But once you notice that the concept of Jesus mainly shows up connected to concepts of goodness, and is not much connected to more practical concepts like how to not crash your car, you will not think as critically about claims on the life or times of Jesus. After all, it doesn’t really matter to you if those are or could be true; what matters are the “morals” of the story of Jesus.

Today, a similar lack of attention to consistency or detail is probably associated with many aspects of things that are seen as good somewhat separately from whether they are impressive or powerful. These may include what sorts of recycling or energy use are good for the planet, what sorts of policies are good for the nation, what sorts of music or art are good for your soul, and so on.

Since this analysis justifies a lot of skepticism about concepts of and related to goodness, I am drawn toward a very cautious skeptical attitude in constructing and using such concepts. I want to start with the concepts where there is the least reason to doubt calling them good and well connected to reality, and want to try to go as far as I can with such concepts before adding in other less reliable concepts of good. It seems to me that giving people what they want is just about the least controversial element of good I can find, and thankfully economic analysis goes a remarkably long way with just that concept.

This analysis also suggests that, when doing policy analysis, one should spend as much time as possible doing neutral positive analysis of what is likely to happen if one does nothing, before proceeding to normative analysis of what actions would be best. This should help minimize the biases from our tendency toward wishful and good-based crazy thinking.


Is Social Science Extremist?

I recently did two interviews with Nikola Danaylov, aka “Socrates”, who has so far done ~90 Singularity 1 on 1 video podcast interviews. Danaylov says he disagreed with me the most:

My second interview with economist Robin Hanson was by far the most vigorous debate ever on Singularity 1 on 1. I have to say that I have rarely disagreed more with any of my podcast guests before. … I believe that it is ideas like Robin’s that may, and often do, have a direct impact on our future. … On the one hand, I really like Robin a lot: He is that most likeable fellow … who like me, would like to live forever and is in support of cryonics. In addition, Hanson is also clearly a very intelligent person with a diverse background and education in physics, philosophy, computer programming, artificial intelligence and economics. He’s got a great smile and, as you will see throughout the interview, is apparently very gracious to my verbal attacks on his ideas.

On the other hand, after reading his book draft on the [future] Em Economy I believe that some of his suggestions have much less to do with social science and much more with his libertarian bias and what I will call “an extremist politics in disguise.”

So, here is the gist of our disagreement:

I say that there is no social science that, in between the lines of its economic reasoning, can logically or reasonably suggest details such as: policies of social discrimination and collective punishment; the complete privatization of law, detection of crime, punishment and adjudication; that some should be run 1,000 times faster than others, while at the same time giving them 1,000 times more voting power; that emulations who can’t pay for their storage fees should be either restored from previous back-ups or be outright deleted (isn’t this like saying that if you fail to pay your rent you should be shot dead?!)…

Suggestions like the above are no mere details: they are extremist bias for Laissez-faire ideology while dangerously masquerading as (impartial) social science. … Because not only that he doesn’t give any justification for the above suggestions of his, but also because, in principle, no social science could ever give justification for issues which are profoundly ethical and political in nature. (Thus you can say that I am in a way arguing about the proper limits, scope and sphere of economics, where using its tools can give us any worthy and useful insights we can use for the benefit of our whole society.) (more)

You might think that Danaylov’s complaint is that I use the wrong social science, one biased too far toward libertarian conclusions. But in fact his complaint seems to be mainly against the very idea of social science: an ability to predict social outcomes. He apparently argues that since 1) future social outcomes depend on many billions of individual choices, 2) ethical and political considerations are relevant to such choices, and 3) humans have free will to be influenced by such considerations in making their choices, therefore 4) it should be impossible to predict future social outcomes at a rate better than random chance.

For example, if allowing some ems to run faster than others might offend common ethical ideals of equality, it must be impossible to predict that this will actually happen. While one might be able to use physics to predict the future paths of bouncing billiard balls, as soon as a human with free will enters the picture, making a choice where ethics is relevant, all must fade into an opaque cloud of possibilities; no predictions are possible.

Now I haven’t viewed them, but I find it extremely hard to believe that out of 90 interviews on the future, Danaylov has always vigorously complained whenever anyone even implicitly suggested that they could do any better than random chance in guessing future outcomes in any context influenced by a human choice where ethics or politics might have been relevant. I’m in fact pretty sure he must have nodded in agreement with many explicit forecasts. So why complain more about me then?

It seems to me that the real complaint here is that I forecast that human choices will in fact result in outcomes that violate the ethical principles Danaylov holds dear. He objects much more to my predicting a future of more inequality than if I had predicted a future of more equality. That is, I’m guessing he mostly approves of idealistic, and disapproves of cynical, predictions. Social science must be impossible if it would predict non-idealistic outcomes, because, well, just because.

FYI, I also did this BBC interview a few months back.
