Tag Archives: Morality

My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern workers would not earn their owners much more as slaves than they do as free workers.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and will thus treat ems about as harshly as they now treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance.  That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves.  Sci-fi is full of stories about humans genetically engineered to be model slaves.  Whole brain emulation is a quicker route to the same destination.  What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t seem to describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, whether via frequent costly rebellions or via extreme docility, slaves would suffer big disadvantages relative to free workers, which argues against most ems being slaves.

Alexander on Age of Em

If I ever have an executioner, I want him to be Scott Alexander. Alexander has such a winning way with words that I and his many fans enjoy him even when we disagree. I’d hardly notice my destination as his pleasing patter entranced me while we took the long way around to the gallows.

So I am honored that Alexander wrote a long review of Age of Em (9K words, 6% as long as the book), wherein he not only likes and recommends it, he also accepts pretty much all its claims within its main focus. That is, I present my book as being expert on the topic of what would actually happen if cheap ems were our next huge social change. Where Alexander disagrees is on two auxiliary topics, which I mention but on which I claim less expertise, namely how likely is this key scenario assumption, and how valuable is the resulting civilization I describe.

On the subject of value, Alexander leans forager (i.e., liberal) on the forager vs. farmer scale. He dislikes civilization evolving away from the behaviors and values of our forager ancestors, and today he partly blames this on capitalism. He doesn’t see our increase in numbers, comfort, and lifespan as sufficient compensation. (I think he’d like the book Against Civilization.) He says:

[Nick Land’s Ascended Economy] seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere. .. The Age of Em is an economy in the early stages of such a transformation. Instead of being able to replace everything with literal robots, it replaces them with humans who have had some aspects of their humanity stripped away. Biological bodies. The desire and ability to have children normally. ..

I envision a spectrum between the current world of humans and Nick Land’s Ascended Economy. Somewhere on the spectrum we have ems who get leisure time. A little further on the spectrum we have ems who don’t get leisure time. But we can go further. .. I expect [greatly reduced sex desire] would happen about ten minutes after the advent of the Age of Em .. Combine that with the stimulant use mentioned above, and you can have people who will never have nor want to have any thought about anything other than working on the precise task at which they are supposed to be working at any given time. ..

I see almost no interesting difference between an em world with full use of these tweaks and an Ascended Economy world. Yes, there are things that look vaguely human in outline laboring in the one and not the other, but it’s not like there will be different thought processes or different results. I’m not even sure what it would mean for the ems to be conscious in a world like this – they’re not doing anything interesting with the consciousness. .. If we get ems after all, I expect them to be lobotomized and drugged until they become effectively inhuman, cogs in the Ascended Economy that would no more fall in love than an automobile would eat hay and whinny.

Alexander seems to strongly endorse the usual forager value of leisure over work, so much so that he can’t see people focused on their work as human, conscious, or of any moral value. Creatures only seem valuable to him to the extent that they have sex, leisure time, minds wandering away from work, and desires to do things other than work.

This seems ironic because Scott Alexander is one of the most human and productive workers I know. He has a full time job as a psychiatrist, an especially demanding job, and in addition finds time to write frequent long careful analyses of many topics. I find it hard to see where he has that much time for leisure, and doubt he would in fact be substantially more productive overall if he took drugs to make him forget sex, mentally wander less, and focus more on his immediate tasks. He is exactly the sort of person an em economy would want many copies of, pretty much just as he is. Yet if we are to believe him, he only sees value in his brief leisure hours.

I see Alexander as having too little respect for the functionality of human behaviors and mind design. Yes, maximally competitive em-era behaviors and minds won’t be exactly like current ones. But that doesn’t necessarily mean one wants to throw out most existing behaviors and brain modules wholesale and start over from scratch. As these behaviors and modules all arose because they helped our ancestors be more competitive in some prior context, it makes more sense to try to repair, reform, and repurpose them.

For example, the robust productivity gains observed from workers who take breaks don’t seem to depend much on worker motivation. Breaks aren’t just about motivation; they are a deeply entrenched part of being productive. Similarly, wandering minds may take away from the current immediate task, but they help one to search for hidden problems and opportunities. Also, workers today who focus on just doing immediate tasks often lose out to others who attend more to building and managing social relations, as well as office politics. Love and sex can be very helpful in forming and maintaining relations.

Of course I’m not trying to offer any long term assurances, and it is quite reasonable to worry about what we will lose along with what we will gain. But since today most of the people we most respect and celebrate tend to be workaholics, I just can’t buy the claim that most of us today can’t find value in similarly productive and work-focused ems. And I just can’t see thoughtless workers being the most productive in the early em era of my book.

Problem, No Solution Taboo?

Three years ago I described the “What if Failure Taboo”:

A simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I suggested this could be an issue with my book Age of Em:

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.

This week I talked about my book to a sharp lively group organized by Azeem Azhar (author of the futurist newsletter Exponential View), and learned that this taboo may be worse than I thought. I tried to present the situation as something you might consider a problem, while explaining that, although my analysis should enable better problem solving, I’ve personally focused on just describing the situation. Mixing normative and positive discussions risks the positive being overshadowed by the normative, and makes positive claims seem less reliable when they are mixed with more disputable normative claims.

Even with this reframing, several people saw me as still violating the key taboo. Apparently it isn’t just taboo to assume that we’ll fail to solve a problem; it can also be taboo to merely describe a problem without recommending a solution. At least when the problem intersects with many strong feelings and moral norms. To many, neutral analysis just seems cold and uncaring, and suspiciously like evil.

Testing Moral Progress

Mike Huemer just published his version of the familiar argument that changing moral views is evidence for moral realism. Here is the progress datum he seeks to explain:

Mainstream illiberal views of earlier centuries are shocking and absurd to modern readers. The trend is consistent across many issues: war, murder, slavery, democracy, women’s suffrage, racial segregation, torture, execution, colonization. It is difficult to think of any issue on which attitudes have moved in the other direction. This trend has been ongoing for millennia, accelerating in the last two centuries, and even the last 50 years, and it affects virtually every country on Earth. … All the changes are consistent with a certain coherent ethical standpoint. Furthermore, the change has been proceeding in the same direction for centuries, and the changes have affected nearly all societies across the globe. This is not a random walk.

Huemer’s favored explanation:

If there are objective ethical truths to which human beings have some epistemic access, then we should expect moral beliefs across societies to converge over time, if only very slowly.

But note three other implications of this moral-learning process, at least if we assume the usual (e.g., Bayesian) rational belief framework:

  1. The rate at which moral beliefs have been changing should track the rate at which we get relevant info, such as via life experience or careful thought. If we’ve seen a lot more change recently than thousands of years ago, we need a reason to think we’ve had a lot more thinking or experience lately.
  2. If people are at least crudely aware of the moral beliefs of others in the world, then they should be learning from each other much more than from their personal thoughts and experience. Thus moral learning should be a worldwide phenomenon; it might explain average world moral beliefs, but it can’t explain much of the belief differences at a time.
  3. Rational learning of any expected value via a stream of info should produce a random walk in those expectations, not a steady trend. But as Huemer notes, what we mostly see lately are steady trends. (A small simulation sketch after this list illustrates the random-walk point.)
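
As an aside, here is a minimal simulation sketch of the random-walk point in item 3 (my own illustration, not from Huemer; the Beta-Bernoulli model and all the numbers are arbitrary assumptions). Under standard Bayesian updating, the agent’s own prior implies that the expected future posterior mean equals the current posterior mean, so while any single belief path wanders, the average over possible evidence histories shows no steady trend:

    # Minimal sketch (illustration only, not from Huemer): a Bayesian's posterior
    # mean is a martingale under the agent's own prior, so averaged over possible
    # histories of evidence it shows no steady trend.
    import random

    def posterior_mean_path(n_flips, a0=1.0, b0=1.0, rng=random):
        """Beta-Bernoulli updating: draw a 'true' rate from the Beta(a0, b0) prior,
        then observe coin flips and record the posterior mean after each flip."""
        theta = rng.betavariate(a0, b0)   # nature draws from the agent's own prior
        a, b = a0, b0
        path = []
        for _ in range(n_flips):
            flip = 1 if rng.random() < theta else 0
            a, b = a + flip, b + (1 - flip)
            path.append(a / (a + b))      # posterior mean of the rate, given data so far
        return path

    # Each single path is a random walk, but the average over many paths stays
    # near the prior mean of 0.5: rational learning alone gives no built-in trend.
    paths = [posterior_mean_path(100) for _ in range(2000)]
    avg_final = sum(p[-1] for p in paths) / len(paths)
    print("prior mean: 0.5  average posterior mean after 100 flips:", round(avg_final, 3))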

For Age of Em, I read a lot about cultural value variation, and related factor analyses. One of the two main factors by which national values vary correlates strongly with average national wealth. At each point in time, richer nations have more of this factor; over time, nations get more of it as they get richer; and when a nation has an unusual jump in wealth, it gets an unusual jump in this factor. And this factor explains an awful lot of the value choices Huemer seeks to explain. All this even though people within a nation who hold these values more strongly are not richer on average.

The usual view in this field is that the direction of causation here is mostly from wealth to this value factor. This makes sense because this is the usual situation for variables that correlate with wealth. For example, if length of roads or number of TVs correlate with wealth, that is much more because wealth causes roads and TVs, and much less because roads and TVs cause wealth. Since wealth is the main “power” factor of a society, this main factor tends to cause other small things more than they cause it.

This is as close as Huemer gets to addressing this usual view:

Perhaps there is a gene that inclines one toward illiberal beliefs if one’s society as a whole is primitive and poor, but inclines one toward liberal beliefs if one’s society is advanced and prosperous. Again, it is unclear why such a gene would be especially advantageous, as compared with a gene that causes one to be liberal in all conditions, or illiberal in all conditions. Even if such a gene would be advantageous, there has not been sufficient opportunity for it to be selected, since for almost all of the history of the species, human beings have lived in poor, primitive societies.

Well if you insist on explaining things in terms of genes, everything is “unclear”; we just don’t have good full explanations to take us all the way from genes to how values vary with cultural context. I’ve suggested that we industry folks are reverting to forager values in many ways with increasing wealth, because wealth cuts the fear that made foragers into farmers. But you don’t have to buy my story to find it plausible that humans are just built so that their values vary as their society gets rich. (This change need not at all be adaptive in today’s environment.)

Note that we already see many variables that change between rich vs. poor societies, but which don’t change the same way between rich and poor people within a society. For example, rich people in a society save more, but rich societies don’t save more. Richer societies spend a larger fraction of income on medicine, but richer people spend a smaller fraction. And rich societies have much lower fertility, even though rich and poor people within a society have about the same fertility.

Also note that “convergence” is about variance of opinion; it isn’t obvious to me that variance is lower now than it was thousands of years ago. What we’ve seen is change, not convergence.

Bottom line: the usual social science story that increasing wealth causes certain predictable value changes fits the value variation data a lot better than the theory that the world is slowly learning moral truth. Even if we accepted moral learning as explaining some of the variation, we would still need wealth-causes-values to explain a lot of the rest of the variation. So why not let it explain all of it? Maybe someone can come up with variations on the moral learning theory that fit the data better. But at the moment, the choice isn’t even close.

Am I A Moralist?

Imagine that a “musicalist” is someone who makes good and persuasive musical arguments. One might define this broadly, by saying that any act is musical if it influences the physical world so as to change the distribution of sound, as most sound has musical elements. Here anyone who makes good and persuasive arguments that influence physical acts is a good “musicalist.”

Or one might try to define “musicalist” more narrowly, by requiring that the acts argued for have an especially strong effect on the especially musical aspects of the physical world, and that musical concepts and premises often be central to the arguments. Far fewer people would be seen as good “musicalists” here.

The concept of “moralist” can also be defined broadly or narrowly. Defined broadly, a “moralist” might be anyone who makes good and persuasive arguments about acts for which anyone thinks moral considerations to be relevant. This could be because the acts influence morally-relevant outcomes, or because the acts are encouraged or discouraged by some moral rules.

Defining narrowly, however, one might require that the acts influenced have especially strong moral impacts, and that moral concepts and premises often be central to the arguments. Far fewer people are good “moralists” by this definition.

Bryan Caplan recently praised me as a “moralist”:

Robin … excels as a moralist – in three distinct ways.

Robin often constructs sound original moral arguments.  His arguments against cuckoldry and for cryonics are just two that come to mind.  Yes, part of his project is to understand why most people are forgiving of cuckoldry and hostile to cryonics.  But the punchline is that the standard moral position on these issues is indefensible.

Second, Robin’s moral arguments actually persuade people.  I’ve met many of his acolytes in person, and see vastly more online.  This doesn’t mean, of course, that Robin’s moral arguments persuade most readers.  Any moral philosopher will tell you that changing minds is like pulling teeth.  My point is that Robin has probably changed the moral convictions of hundreds.  And that’s hundreds more than most moralists have changed.

Third, Robin takes some classical virtues far beyond the point of prudence.  Consider his legendary candor.

I accept (and am grateful for) Bryan’s praise relative to a broad interpretation of “moralist.” Yes, I try to create good and persuasive arguments on many topics relevant to actions, and according to many concepts of morality most acts have substantial moral impact. Since moral considerations are so ubiquitous, most anyone who is a good arguer must also be a good moralist.

But what if we define “moralist” narrowly, so that the acts must be unusually potent morally, and the concepts and premises invoked must be explicitly moral ones? In this case, I don’t see that I qualify, since I don’t focus much on especially moral concepts, premises, rules, or consequences.

Bryan gave two examples, and his readers gave two more. Here are quick summaries:

  • I argue that cryonics might work, that it only needs a >~5% chance of working to make sense, and that your wanting to do it triggers abandonment feelings in others exactly because they think you think it might work.
  • I argue that with simple precautions betting on terror acts won’t cause terror acts, but could help to predict and prevent such attacks.
  • I argue that the kinds of inequality we talk most about are only a small fraction of all inequality, but we talk about them most because they can justify us grabbing stuff that is more easily grabbed.
  • I argue that cuckoldry (which results in kids) causes many men great emotional and preference harm, plausibly comparable to the harm women get from being raped.

I agree that these arguments address actions about which many people have moral feelings. But I don’t see myself as focused on moral concepts or premises; I see my discussions as focused on other issues.

Yes, most people have moral wants. These aren’t all or even most of what people want, but moral considerations do influence what people (including me) want. Yes, these moral wants are relevant for many acts. But people disagree about the weight and even direction that moral considerations push on many of these acts, and I don’t see myself as especially good at or interested in taking sides in arguments about such weights and directions. I instead mostly seek other simple robust considerations to influence beliefs and wants about acts.

Bryan seems to think that my being a good moralist by his lights argues against my “dealism” focus on identifying social policies that can get most everyone more of what they want, instead of taking sides in defined moral battles, wherein opposing sides make conflicting and often uncompromising demands. It seems to me that I in fact do work better by not aligning myself clearly with particular sides of established tug-o-wars, but instead seeking considerations that can appeal broadly to people on both sides of existing conflicts.

Who/What Should Get Votes?

Alex T. asks Should the Future Get a Vote? He dislikes suggestions to give more votes to “civic organizations” who claim to represent future folks, since prediction markets could be more trustworthy:

Through a suitable choice of what is to be traded, prediction markets can be designed to be credibly motivated by a variety of goals including the interests of future generations. … If all we cared about was future GDP, a good rule would be to pass a policy if prediction markets estimate that future GDP will be higher with the policy than without the policy. Of course, we care about more than future GDP; perhaps we also care about environmental quality, risk, inequality, liberty and so forth. What Hanson’s futarchy proposes is to incorporate all these ideas into a weighted measure of welfare. … Note, however, that even this assumes that we know what people in the future will care about. Here then is the final meta-twist. We can also incorporate into our measure of welfare predictions of how future generations will define welfare. (more)

For example, we could implement a 2% discount rate by having official welfare be 2% times welfare this next year plus 98% times welfare however it will be defined a year from now. Applied recursively, this can let future folks keep changing their minds about what they care about, even future discount rates.
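
Here is a minimal sketch of that recursion (my own illustration; the function name and the sample numbers are hypothetical, and the 2% weight is just the figure from the example above). Each year’s official welfare puts a small weight on welfare as defined that year and the remaining weight on next year’s official welfare, however next year’s folks choose to define it:

    # Minimal sketch (illustration only) of the recursive welfare definition:
    #   W_t = weight * w_t + (1 - weight) * W_{t+1}
    # where w_t is year t's welfare under that year's own definition and W_{t+1}
    # is next year's official welfare, however it gets defined then.
    def official_welfare(yearly_welfare, weight=0.02, terminal=0.0):
        """yearly_welfare: hypothetical per-year welfare numbers for the years we
        can list; terminal: stand-in for official welfare beyond that horizon.
        Returns the year-0 official welfare implied by the recursion above."""
        w_next = terminal
        for w in reversed(yearly_welfare):   # fold from the last listed year back to year 0
            w_next = weight * w + (1 - weight) * w_next
        return w_next

    # Made-up numbers; note how little weight the near years get. With a 2%
    # per-year weight, the first 10 years together get only ~18% of the total
    # weight, so future folks' evolving definitions dominate the measure.
    print(official_welfare([1.0, 1.0, 2.0, 2.0, 2.0], terminal=2.0))
    print("weight on first 10 years:", round(1 - 0.98 ** 10, 3))
    print("weight on first 50 years:", round(1 - 0.98 ** 50, 3))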

We could also give votes to people in the past. While one can’t change the experiences of past folks, one can still satisfy their preferences. If past folks expressed particular preferences regarding future outcomes, those preferences could also be given weight in an overall welfare definition.

We could even give votes to animals. One way is to make some assumptions about what outcomes animals seem to care about, pick ways to measure such outcomes, and then include weights on those measures in the welfare definition. Another way is to assume that eventually we’ll “uplift” such animals so that they can talk to us, and put weights on what those uplifted animals will eventually say about the outcomes their ancestors cared about.

We might even put weights on aliens, or on angels. We might just put a weight on what they say about what they want, if they ever show up to tell us. If they never show up, those weights stay set at zero.

Of course just because we could give votes to future folks, past folks, animals, aliens, and angels doesn’t mean we will ever want to do so.

Moral Legacy Myths

Imagine that you decide that this week you’ll go to a different doctor from your usual one. Or that you’ll get a haircut from a different hairdresser. Ask yourself: by how much do you expect such actions to influence the distant future of all our descendants? Probably not much. As I argued recently, we should expect most random actions to have very little long term influence.

Now imagine that you visibly take a stand on a big moral question involving a recognizable large group. Like arguing against race-based slavery. Or defending the Muslim concept of marriage. Or refusing to eat animals. Imagine yourself taking a personal action to demonstrate your commitment to this moral stand. Now ask yourself: by how much do you expect these actions to influence distant descendants?

I’d guess that even if you think such moral actions will have only a small fractional influence on the future world, you expect them to have a much larger long term influence than doctor or haircut actions. Furthermore, I’d guess that you are much more willing to credit the big-group moral actions of folks centuries ago for influencing our world today, than you are willing to credit people who made different choices of doctors or hairdressers centuries ago.

But is this correct? When I put my social-science thinking cap on, I can’t find good reasons to expect big-group moral actions to have much stronger long term influence. For example, you might posit that moral opinions are more stable than other opinions and hence last longer. But more stable things should be harder to change by any one action, leaving the average influence about the same.

I can, however, think of a good reason to expect people to expect this difference: near-far (a.k.a. construal level) theory. Acts based on basic principles seem more far than acts based on practical considerations. Acts identified with big groups seem more far than acts identified with small groups. And longer-term influence is also more strongly associated with a far view.

So I tentatively lean toward concluding that this expectation of long term influence from big-group moral actions is mostly wishful thinking. Today’s distribution of moral actions and the relations between large groups mostly result from a complex equilibrium of people today, where random disturbances away from that equilibrium are usually quickly washed away. Yes, sometimes there will be tipping points, but those should be rare, as usual, and each of us can expect to have only a small fractional influence on such things.

Rejection Via Advice

We get status in part from the status of our associates, which is a credible signal of how others see us. Because of this, we prefer to associate with high status folks. But it looks bad to be overt about this. So we try to hide this motive, and to pretend that other motives dominate our choices of associates.

This would be easier to do if status were very stable. Then we could take our time setting up plausible excuses for wanting to associate with particular high status folks, and for rejecting association bids by particular low status folks. But in fact status fluctuates, which can force us to act quickly. We want to quickly associate more with folks who rise in status, and to quickly associate less with those who fall in status. But the coincidence in time between their status change and our association change may make our status motives obvious.

Since association seems a good thing in general, trying to associate with anyone seems a “nice” act, requiring fewer excuses. In contrast, weakening an existing association seems less nice. So we mainly need good excuses for pushing away those whose status has recently fallen. Such opportunistic rejection, just when our associates most need us, seems especially wrong and mean. So how do we manage it?

One robust strategy is to offer random specific advice. You acknowledge their problems, express sympathy, and then take extra time to “help” them by offering random specific advice about how to prevent or reverse their status fall. Especially advice that will sound good if quoted to others, but is hard for them to actually follow, and is unlikely to be the same as what other associates advise.

If different associates offer different advice, then this person with fallen status simply must fail to follow most of that advice. Which then gives all those folks whose advice was not followed an excuse to distance themselves from this failure. And those whose advice was followed, well at least they get the status mark of power – a credible claim that they have influence over others. Either way, the falling status person loses even more status.

Unless of course the advice followed is actually useful. But what are the chances of that?

Added 27Dec: A similar strategy would be useful if your status were to rise, and you wanted to drop associates in order to make room for more higher status associates.

The ‘What If Failure?’ Taboo

Last night I heard a group of smart pundits and wonks discuss Tyler Cowen’s new book Average Is Over. This book is a sequel to his last, The Great Stagnation, where he argued that wage inequality has greatly increased in rich nations over the last forty years, and especially in the last fifteen years. In this new book, Tyler says this trend will continue for the next twenty years, and offers practical advice on how to personally navigate this new world.

Now while I’ve criticized Tyler for overemphasizing automation as a cause of this increased wage inequality, I agree that most of the trends he discusses are real, and most of his practical advice is sound. But I can also see reasonable grounds to dispute this, and I expected the pundits/wonks to join me in debating that. So I was surprised to see the discussion focus overwhelmingly on if this increased inequality was acceptable. Didn’t Tyler understand that losers might be unhappy, and push the political system toward redistribution and instability?

Tyler quite reasonably said yes this change might not be good overall, and yes there might well be more redistribution, but it wouldn’t change the overall inequality much. He pointed out that most losers might be pretty happy with new ways to enjoy more free time, that our last peak of instability was in the 60’s when inequality was at a minimum, that since we have mostly accepted increased inequality for forty years it is reasonable to expect that to continue for another twenty, and that over history inequality has had only a weak correlation with redistribution and instability.

None of which seemed to dent the pundit/wonk mood. They seemed to hold fast to a simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I think we see something similar with other trends framed as negatives, like global warming, bigger orgs, or increased regulation. Once such a trend is framed as an official bad thing which public policy might conceivably reduce, it becomes (mildly) taboo to seem to just accept the change and analyze how to deal with its consequences.

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.

Are War Critics Selfish?

The Americanization of Emily (1964) starred James Garner (as Charlie) and Julie Andrews (as Emily), both of whom call it their favorite movie. Be warned; I give spoilers in this post.
