Tag Archives: Morality

Spaceship Earth Explores Culture Space

Space: the final frontier. These are the voyages of the starship Enterprise. Its five-year mission: to explore strange new worlds, to seek out new life and new civilizations, to boldly go where no man has gone before. (more)

Many love science fiction stories of brave crews risking their lives to explore strange new spaces, stories much like the older adventure stories about European explorers risking their lives centuries ago to explore new places on Earth. (Yes, often to conquer and enslave the locals.) Many lament that we don’t have as many real such explorer stories today, and they say that we should support more human space exploration now in order to create such real heroic exploration stories. Even though human space exploration is crazy expensive now, and offers few scientific, economic, or humanity-survival gains anytime soon. They say the good stories will be worth all that cost.

Since Henry George first invoked it in 1879, many have used the metaphor of Spaceship Earth to call attention to our common vulnerability and limited resources:

Spaceship Earth … is a world view encouraging everyone on Earth to act as a harmonious crew working toward the greater good. … “we must all cooperate and see to it that everyone does his fair share of the work and gets his fair share of the provisions” … “We travel together, passengers on a little space ship, dependent on its vulnerable reserves of air and soil.” (more)

In this post, I want to suggest that Spaceship Earth is in fact a story of a brave crew risking much to explore a strange new territory. But the space we explore is more cultural than physical.

During the industrial era, the world economy has doubled roughly every fifteen years. Each such doubling of output has moved us into new uncharted cultural territory. This growth has put new pressures on our environment, and has resulted in large and rapid changes to our culture and social organization.

This growth results mostly from innovation, and most innovations are small and well tested against local conditions, giving us little reason to doubt their local value. But all these small changes add up to big overall moves that are often entangled with externalities, coordination failures, and other reasons to doubt their net value.

So humanity continues to venture out into new untried and risky cultural spaces, via changes to cultural conditions with which we don’t have much experience, and which thus risk disaster and destruction. The good crew of Spaceship Earth should carefully weigh these risks when considering where and how fast to venture.

Consider seven examples:

  1. While humans seem to be adapting reasonably well to global warming, we risk big lumpy disruptive changes to Atlantic currents and Antarctic ice. Ecosystems also seem to be adapting okay, but we are risking big collapses to them as well.
  2. While ancient societies gave plenty of status and rewards to fertility, today high fertility behaviors are mostly seen as low status. This change is entwined with complex changes in gender norms and roles, but one result is that human fertility is falling toward below replacement in much of the world, and may fall much further. Over centuries this might produce a drastic decrease in world population, and productivity-threatening decreases in the scale of world production.
  3. While the world has become much more peaceful over the last century, this has been accompanied by big declines in cultural support for military action and tolerance for military losses. Is the world now more vulnerable to conquest by a new military power with more local cultural support and tolerance for losses?
  4. Farmer era self-control and self-discipline have weakened over time, in part via weaker religion. This has weakened cultural support for work and cultural suspicion of self-indulgence in sex, drugs, and media. So we now see less work and more drug addiction. How far will we slide?
  5. Via new media, we are exploring brave new worlds of how to make friends, form identities, achieve status, and learn about the world. As many have noted, these new ways risk many harms to happiness and social capital.
  6. Innovation was once greatly aided by tinkering, i.e., the ability to take apart and change familiar devices. Such tinkering is much less feasible in modern devices. Increasing regulation and risk aversion is also interfering with innovation. Are we as a result risking cultural support for innovation?
  7. Competition between firms has powered rapid growth, but winning bets on intangible capital is allowing leading firms to increasingly dominate industries. Does this undermine the competition that we’ve relied on so far to power growth?

The most common framing today for such issues is one of cultural war. You ask yourself which side feels right to you, commiserate with your moral allies, then puff yourself up with righteous indignation against those who see things differently, and go to war with them. But we might do better to frame these as reasonable debates on how much to risk as we explore culture space.

In a common scene from exploration stories, a crew must decide whether to take a big risk, or must choose among several risks. Some in the crew see a risk as worth the potential reward, while others want to search longer for better options, or retreat to try again another day. They may disagree on the tradeoff, but they all agree that both the risks and the rewards are real. It is just a matter of tradeoff details.

We might similarly frame key “value” debates as reasonable differing judgements on what chances to take as spaceship Earth explores culture space. Those who love new changes could admit that we are taking some chances in adopting them so quickly, with so little data to go on, while those who are suspicious of recent changes could admit that many seem to like their early effects. Rather than focus on directly evaluating changes, we might focus more on setting up tracking systems to watch for potential problems, and arranging for repositories of old culture practices that might help us to reverse changes if things go badly. And we might all see ourselves as part of a grand heroic adventure story, wherein a mostly harmonious crew explores a great strange cosmos of possible cultures.

Sanctimonious Econ Critics

The New Yorker review of Elephant in the Brain raved about Cents and Sensibility, by Gary Morson and Morton Schapiro, a book said to confirm that “intellectual overextension is often found in economics.” Others have similarly raved. But I don’t care much for this book, so let me explain why. (Be warned: this post is LONG.)

In its first sentence, the book declares its aim:

This book creates a dialogue between two fields that rarely have anything to say to each other: economics and the humanities. We mean to show how that dialogue could be conducted and why it has a great deal to contribute. (p.1)

Morson and Schapiro seem to want the sort of “dialogue” where one side talks and the other just listens. All but one chapter elaborates how economists should listen to the humanities, and the one remaining chapter is on how some parts of the humanities should listen to another part, not to economists. There’s only a two-page section near the end on “What Humanists Can Learn From Economists,” which even then can’t resist talking more about what economists can learn:

Economists could learn from humanists the complexity of ethical issues, the need for stories, the importance of empathy, and the value of unformalizable good judgement. But humanists could also learn from economists how to think about scarce resources, about the nature of efficiency, and the importance of rational decision making. (p.261)

So what exactly can we economists learn? Continue reading "Sanctimonious Econ Critics" »

Today, Ems Seem Unnatural

The main objections to “test tube babies” weren’t about the consequences for mothers or babies, they were about doing something “unnatural”:

Given the number of babies that have now been conceived through IVF — more than 4 million of them at last count — it’s easy to forget how controversial the procedure was during the time when, medically and culturally, it was new. … They weren’t entirely sure how IVF was different from cloning, or from the “ethereal conception” that was artificial insemination. They balked at the notion of “assembly-line fetuses grown in test tubes.” … For many, IVF smacked of a moral overstep — or at least of a potential one. … James Watson publicly decried the procedure, telling a Congressional committee in 1974 that … “All hell will break loose, politically and morally, all over the world.” (more)

Similarly, for most ordinary people, the problem with ems isn’t that the scanning process might kill the original human, or that the em might be an unconscious zombie due to their new hardware not supporting consciousness. In fact, people more averse to death have fewer objections to ems, as they see ems as a way to avoid death. The main objections to ems are just that ems seem “unnatural”:

In four studies (including pilot) with a total of 952 participants, it was shown that biological and cultural cognitive factors help to determine how strongly people condemn mind upload. … Participants read a story about a scientist who successfully transfers his consciousness (uploads his mind) onto a computer. … In the story, the scientist injects himself with nano-machines that enter his brain and substitute his neurons one-by-one. After a neuron has been substituted, the functioning of that neuron is copied (uploaded) on a computer; and after each neuron has been copied/uploaded the nano-machines shut down, and the scientist’s body falls on the ground completely limp. Finally, the scientist wakes up inside the computer.

The following variations made NO difference:

[In Study 1] we modified our original vignette by changing the target of mind upload to be either (1) a computer, (2) an android body, (3) a chimpanzee, or (4) an artificial brain. …

[In Study 2] we changed the story in a manner that the scientist merely ingests the nano-machines in a capsule form. Furthermore, we used a 2 × 2 experimental set-up to investigate whether the body dying on a physical level [heart stops or the brain stops] impacts the condemnation of the scientist’s actions. We also investigated whether giving participants information on how the transformation feels for the scientist once he is in the new platform has an impact on the results.

What did matter:

People who value purity norms and have higher sexual disgust sensitivity are more inclined to condemn mind upload. Furthermore, people who are anxious about death and condemn suicidal acts were more accepting of mind upload. Finally, higher science fiction literacy and/or hobbyism strongly predicted approval of mind upload. Several possible confounding factors were ruled out, including personality, values, individual tendencies towards rationality, and theory of mind capacities. (paper; summary; HT Stefan Schubert)

As with IVF, once ems are commonplace they will probably also come to seem less unnatural; strange never-before-seen possibilities evoke more fear and disgust than common things, unless those common things seem directly problematic.

Automatic Norm Lessons

Pity the modern human who wants to be seen as a consistently good person who almost never breaks the rules. For our distant ancestors, this was a feasible goal. Today, not so much. To paraphrase my recent post:

Our norm-inference process is noisy, and gossip-based convergence isn’t remotely up to the task given our huge diverse population and vast space of possible behaviors. Setting aside our closest associates and gossip partners, if we consider the details of most people’s behavior, we will find rule-breaking fault with a lot of it. As they would if they considered the details of our behavior. We seem to live in a Sodom and Gomorrah of sin, with most people getting away unscathed with most of it. At the same time, we also suffer so many overeager busybodies applying what they see as norms to what we see as our own private business where their social norms shouldn’t apply.

Norm application isn’t remotely as obvious today as our evolved habit of automatic norms assumes. But we can’t simply take more time to think and discuss on the fly, as others will then see us as violating the meta-norm, and infer that we are unprincipled blow-with-the-wind types. The obvious solution: more systematic preparation.

People tend to presume that the point of studying ethics and norms is to follow them more closely. Which is why most people are not interested for themselves, but think it is good for other people. But in fact such study doesn’t have that effect. Instead, there should be big gains to distinguishing which norms to follow more versus less closely. Whether for purely selfish purposes, or for grand purposes of helping the world, study and preparation can help one to better identify the norms that really matter, from the ones that don’t.

In each area of life, you could try to list many possibly relevant norms. For each one, you can try to estimate how expensive it is to follow, how much the world benefits from such following, and how likely others are to notice and punish violations. Studying norms together with others is especially useful for figuring out how many people are aware of each norm, or consider it important. All this can help you to prioritize norms, and make a plan for which ones to follow how eagerly. And then practice your plan until your new habits become automatic.
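To make that bookkeeping concrete, here is a minimal sketch in Python of this kind of norm triage. The norms, numbers, and the altruism weight are all hypothetical placeholders for illustration, not data or recommendations from this post:

```python
# Toy prioritization of norms by the expected net value of following them.
# All norms, numbers, and weights below are hypothetical illustrations.

norms = [
    # (name, cost_to_follow, benefit_to_world, prob_violation_noticed, punishment_if_caught)
    ("return shopping carts",        1.0, 0.5, 0.05,  2.0),
    ("never exaggerate on resumes",  4.0, 1.0, 0.20, 30.0),
    ("always answer email same day", 6.0, 0.8, 0.10,  3.0),
]

ALTRUISM_WEIGHT = 0.5  # how much you weigh benefits to the world vs. yourself

def net_value_of_following(cost, benefit, p_noticed, punishment):
    """Expected value of following the norm, relative to quietly violating it."""
    expected_penalty_avoided = p_noticed * punishment
    return ALTRUISM_WEIGHT * benefit + expected_penalty_avoided - cost

ranked = sorted(norms, key=lambda n: net_value_of_following(*n[1:]), reverse=True)

for name, *params in ranked:
    print(f"{name:32s} net value: {net_value_of_following(*params):+.2f}")
```

Even with rough guesses for the inputs, a ranking like this is the sort of systematic preparation described above: decide in advance which norms are worth the bother, then practice until that plan is automatic.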

As a result, instead of just obeying each random rule that pops into your head in each random situation that you encounter, you can follow only the norms that you’ve decided are worth the bother. And if variation in norm following is a big part of variation in success, you may succeed substantially more.

“Human” Seems Low Dimensional

Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in such task ability, and no other factors explain much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, usually you’d want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.

Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance. Making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.
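A small simulation may make that prior concrete. The Python sketch below generates abilities from a one-factor model with arbitrary loadings and noise (all parameters are made-up illustrations, not estimates from any data set); under such a model, the average of many core tasks predicts a focus task better than any single “diagnostic” task does:

```python
# Minimal simulation of a one-factor ("IQ-like") model of core task abilities.
# Each person's ability on task i is loading_i * g + noise; all numbers are
# arbitrary choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 5000, 20
g = rng.normal(size=n_people)                    # the single common factor
loadings = rng.uniform(0.5, 0.9, size=n_tasks)   # every core task loads on g
noise = rng.normal(size=(n_people, n_tasks))
abilities = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

focus = abilities[:, 0]                     # task A, which we want to predict
single_other = abilities[:, 1]              # task B alone as a predictor
g_estimate = abilities[:, 1:].mean(axis=1)  # average over the other 19 tasks

print("corr(A, B alone):        ", round(np.corrcoef(focus, single_other)[0, 1], 2))
print("corr(A, average of rest):", round(np.corrcoef(focus, g_estimate)[0, 1], 2))
```

In this toy world, a claim that one particular pair of tasks is especially diagnostic of each other, beyond what the shared factor implies, would require an extra factor, and so an extra dimension, which is exactly what the one-factor setup says is rare.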

Now consider the question of how “human-like” something is. Many indicators may be relevant to judging this, and one may draw many implications from such a judgment. In principle this concept of “human-like” could be high dimensional, so that there are many separate packages of indicators relevant for judging matching packages of implications. But anecdotally, humans seem to have a tendency to “anthropomorphize,” that is, to treat non-humans as if they were somewhat human in a simple low-dimensional way that doesn’t recognize many dimensions of difference. That is, things just seem more or less human. So the more ways in which something is human-like, the more you can reasonably guess that it will be human like in other ways. This tendency appears in a wide range of ordinary environments, and its targets include plants, animals, weather, planets, luck, sculptures, machines, and software. Continue reading "“Human” Seems Low Dimensional" »

On Homo Deus

Historian Yuval Harari’s best-selling book Sapiens mostly talked about history. His new book, Homo Deus, won’t be released in the US until February 21, but I managed to find a copy at the Istanbul airport – it came out in Europe last fall. This post is about the book, and it is long and full of quotes; you are warned. Continue reading "On Homo Deus" »

My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves would also not earn much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance.  That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves.  Sci-fi is full of stories about humans genetically engineered to be model slaves.  Whole brain emulation is a quicker route to the same destination.  What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t seem to describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, whether via frequent costly rebellions or via extreme docility, slaves face big disadvantages relative to free workers, which argues against most ems being slaves.

Alexander on Age of Em

If I ever have an executioner, I want him to be Scott Alexander. Alexander has such a winning way with words that I and his many fans enjoy him even when we disagree. I’d hardly notice my destination as his pleasing patter entranced me while we took the long way around to the gallows.

So I am honored that Alexander wrote a long review of Age of Em (9K words, 6% as long as the book), wherein he not only likes and recommends it, he also accepts pretty much all its claims within its main focus. That is, I present my book as being expert on the topic of what would actually happen if cheap ems were our next huge social change. Where Alexander disagrees is on two auxiliary topics, which I mention but on which I claim less expertise, namely how likely is this key scenario assumption, and how valuable is the resulting civilization I describe.

On the subject of value, Alexander leans forager (i.e., liberal) on the forager vs. farmer scale. He dislikes civilization evolving away from the behaviors and values of our forager ancestors, and today he partly blames this on capitalism. He doesn’t see our increase in numbers, comfort, and lifespan as sufficient compensation. (I think he’d like the book Against Civilization.) He says:

[Nick Land’s Ascended Economy] seems to me the natural end of the economic system. Right now it needs humans only as laborers, investors, and consumers. But robot laborers are potentially more efficient, companies based around algorithmic trading are already pushing out human investors, and most consumers already aren’t individuals – they’re companies and governments and organizations. At each step you can gain efficiency by eliminating humans, until finally humans aren’t involved anywhere. .. The Age of Em is an economy in the early stages of such a transformation. Instead of being able to replace everything with literal robots, it replaces them with humans who have had some aspects of their humanity stripped away. Biological bodies. The desire and ability to have children normally. ..

I envision a spectrum between the current world of humans and Nick Land’s Ascended Economy. Somewhere on the spectrum we have ems who get leisure time. A little further on the spectrum we have ems who don’t get leisure time. But we can go further. .. I expect [greatly reduced sex desire] would happen about ten minutes after the advent of the Age of Em .. Combine that with the stimulant use mentioned above, and you can have people who will never have nor want to have any thought about anything other than working on the precise task at which they are supposed to be working at any given time. ..

I see almost no interesting difference between an em world with full use of these tweaks and an Ascended Economy world. Yes, there are things that look vaguely human in outline laboring in the one and not the other, but it’s not like there will be different thought processes or different results. I’m not even sure what it would mean for the ems to be conscious in a world like this – they’re not doing anything interesting with the consciousness. .. If we get ems after all, I expect them to be lobotomized and drugged until they become effectively inhuman, cogs in the Ascended Economy that would no more fall in love than an automobile would eat hay and whinny.

Alexander seems to strongly endorse the usual forager value of leisure over work, so much so that he can’t see people focused on their work as human, conscious, or of any moral value. Creatures only seem valuable to him to the extent that they have sex, leisure time, minds wandering away from work, and desires to do things other than work.

This seems ironic because Scott Alexander is one of the most human and productive workers I know. He has a full time job as a psychiatrist, an especially demanding job, and in addition finds time to write frequent long careful analyses of many topics. I find it hard to see where he has that much time for leisure, and doubt he would in fact be substantially more productive overall if he took drugs to make him forget sex, mentally wander less, and focus more on his immediate tasks. He is exactly the sort of person an em economy would want many copies of, pretty much just as he is. Yet if we are to believe him, he only sees value in his brief leisure hours.

I see Alexander as having too little respect for the functionality of human behaviors and mind design. Yes, maximally competitive em-era behaviors and minds won’t be exactly like current ones. But that doesn’t necessarily mean one wants to throw out most existing behaviors and brain modules wholesale and start over from scratch. As these behaviors and modules all arose because they helped our ancestors be more competitive in some prior context, it makes more sense to try to repair, reform, and repurpose them.

For example, the robust productivity gains observed from workers who take breaks don’t seem to depend much on worker motivation. Breaks aren’t just about motivation; they are a deeply entrenched part of being productive. Similarly, wandering minds may take away from the current immediate task, but they help one to search for hidden problems and opportunities. Also, workers today who focus on just doing immediate tasks often lose out to others who attend more to building and managing social relations, as well as office politics. Love and sex can be very helpful in forming and maintaining relations.

Of course I’m not trying to offer any long term assurances, and it is quite reasonable to worry about what we will lose along with what we will gain. But since today most of the people we most respect and celebrate tend to be workaholics, I just can’t buy the claim that most of us today can’t find value in similarly productive and work-focused ems. And I just can’t see thoughtless workers being the most productive in the early em era of my book.

Problem, No Solution Taboo?

Three years ago I described the “What if Failure Taboo”:

A simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I suggested this could be an issue with my book Age of Em:

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.

This week I talked on my book to a sharp lively group organized by Azeem Azhar (author of the futurist newsletter Exponential View), and learned that this taboo may be worse than I thought. I tried to present the situation as something that you might consider to be a problem, and to explain that, while my analysis should enable better problem solving, I’ve personally focused on just describing this situation. Mixing up normative and positive discussions risks the positive being overshadowed by the normative, and positive claims seeming less reliable when mixed up with more disputable normative claims.

Even with this reframing, several people saw me as still violating the key taboo. Apparently it isn’t just taboo to assume that we’ll fail to solve a problem; it can also be taboo to merely describe a problem without recommending a solution. At least when the problem intersects with many strong feelings and moral norms. To many, neutral analysis just seems cold and uncaring, and suspiciously like evil.

Testing Moral Progress

Mike Huemer just published his version of the familiar argument that changing moral views is evidence for moral realism. Here is the progress datum he seeks to explain:

Mainstream illiberal views of earlier centuries are shocking and absurd to modern readers. The trend is consistent across many issues: war, murder, slavery, democracy, women’s suffrage, racial segregation, torture, execution, colonization. It is difficult to think of any issue on which attitudes have moved in the other direction. This trend has been ongoing for millennia, accelerating in the last two centuries, and even the last 50 years, and it affects virtually every country on Earth. … All the changes are consistent with a certain coherent ethical standpoint. Furthermore, the change has been proceeding in the same direction for centuries, and the changes have affected nearly all societies across the globe. This is not a random walk.

Huemer’s favored explanation:

If there are objective ethical truths to which human beings have some epistemic access, then we should expect moral beliefs across societies to converge over time, if only very slowly.

But note three other implications of this moral-learning process, at least if we assume the usual (e.g., Bayesian) rational belief framework:

  1. The rate at which moral beliefs have been changing should track the rate at which we get relevant info, such as via life experience or careful thought. If we’ve seen a lot more change recently than thousands of years ago, we need a reason to think we’ve had a lot more thinking or experience lately.
  2. If people are at least crudely aware of the moral beliefs of others in the world, then they should be learning from each other much more than from their personal thoughts and experience. Thus moral learning should be a worldwide phenomenon; it might explain average world moral beliefs, but it can’t explain much of belief differences at a time.
  3. Rational learning of any expected value via a stream of info should produce a random walk in those expectations, not a steady trend. But as Huemer notes, what we mostly see lately are steady trends. (A toy simulation of this point follows the list.)
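Here is a minimal simulation sketch of point 3, using a toy beta-binomial model; all parameters are arbitrary choices for illustration. Because the truth is drawn from the prior and updating is Bayesian, the posterior expectation keeps moving but has no expected drift in either direction: a random walk (formally, a martingale), not a steady trend:

```python
# Under Bayesian updating, the sequence of posterior expectations is a
# martingale: its expected change at every step is zero, so rational learning
# produces a random walk, not a steady trend. Toy beta-binomial illustration.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 100
a0, b0 = 2.0, 2.0                          # Beta prior over an unknown proportion

theta = rng.beta(a0, b0, size=n_paths)     # truths drawn from the prior
obs = rng.random((n_paths, n_steps)) < theta[:, None]   # Bernoulli observations

successes = np.cumsum(obs, axis=1)
trials = np.arange(1, n_steps + 1)
posterior_mean = (a0 + successes) / (a0 + b0 + trials)  # E[theta | data so far]

steps = np.diff(posterior_mean, axis=1)
print("average per-step change in belief:  ", round(steps.mean(), 4))          # ~0: no drift
print("average per-step |change| in belief:", round(np.abs(steps).mean(), 4))  # > 0: beliefs still move
```

A steady multi-century trend in expectations is what you would expect if something other than accumulating evidence about a fixed moral truth, such as rising wealth, were driving the changes.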

For Age of Em, I read a lot about cultural value variation, and related factor analyses. One of the two main factors by which national values vary correlates strongly with average national wealth. At each point in time, richer nations have more of this factor, over time nations get more of it as they get richer, and when a nation has an unusual jump in wealth it gets an unusual jump in this factor. And this factor explains an awful lot of the value choices Huemer seeks to explain. All this even though people within a nation who have these values more are not richer on average.

The usual view in this field is that the direction of causation here is mostly from wealth to this value factor. This makes sense because this is the usual situation for variables that correlate with wealth. For example, if length of roads or number of TVs correlate with wealth, that is much more because wealth causes roads and TVs, and much less because roads and TV cause wealth. Since wealth is the main “power” factor of a society, this main factor tends to cause other small things more than they cause it.

This is as close as Huemer gets to addressing this usual view:

Perhaps there is a gene that inclines one toward illiberal beliefs if one’s society as a whole is primitive and poor, but inclines one toward liberal beliefs if one’s society is advanced and prosperous. Again, it is unclear why such a gene would be especially advantageous, as compared with a gene that causes one to be liberal in all conditions, or illiberal in all conditions. Even if such a gene would be advantageous, there has not been sufficient opportunity for it to be selected, since for almost all of the history of the species, human beings have lived in poor, primitive societies.

Well if you insist on explaining things in terms of genes, everything is “unclear”; we just don’t have good full explanations to take us all the way from genes to how values vary with cultural context. I’ve suggested that we industry folks are reverting to forager values in many ways with increasing wealth, because wealth cuts the fear that made foragers into farmers. But you don’t have to buy my story to find it plausible that humans are just built so that their values vary as their society gets rich. (This change need not at all be adaptive in today’s environment.)

Note that we already see many variables that change between rich vs. poor societies, but which don’t change the same way between rich and poor people within a society. For example, rich people in a society save more, but rich societies don’t save more. Richer societies spend a larger fraction of income on medicine, but richer people spend a smaller fraction. And rich societies have much lower fertility, even though richer people within a society have about the same fertility as poorer people.

Also note that “convergence” is about variance of opinion; it isn’t obvious to me that variance is lower now than it was thousands of years ago. What we’ve seen is change, not convergence.

Bottom line: the usual social science story that increasing wealth causes certain predictable value changes fits the value variation data a lot better than the theory that the world is slowly learning moral truth. Even if we accepted moral learning as explaining some of the variation, we would still need wealth-causes-values to explain a lot of the rest of the variation. So why not let it explain all? Maybe someone can come up with variations on the moral learning theory that fit the data better. But at the moment, the choice isn’t even close.
