Civilization Vs. Human Desire

A few years ago I posted on Kevin Kelly on the Unabomber:

The Unabomber’s manifesto … succinctly states … the view … that the greatest problems in the world are due not to individual inventions but to the entire self-supporting system of technology itself. … The technium also contains power to harm itself; because it is no longer regulated by either nature or humans, it could accelerate so fast as to extinguish itself. …

But … the Unabomber is wrong to want to exterminate it … [because] the machine of civilization offers us more actual freedoms than the alternative. … We willingly choose technology with its great defects and obvious detriments, because we unconsciously calculate its virtues. … After we’ve weighed downsides and upsides in the balance of our experience, we find that technology offers a greater benefit, but not by much. (more)

Lately I’ve been reading Against Civilization, on “the dehumanizing core of modern civilization,” and have been struck by the strength and universality of its passions; I agree with much of what they say. Yes, we humans pay huge costs because we were built for a different world than this one. Yes, we see gains, but mostly because we are culturally plastic – we let our culture tell us what we want and like, and thus what to do.

And yes, contrary to Kelly, we mostly do not choose how civilization changes, nor would we pick the changes that do happen if we could. As I reported a week ago, our usual main criterion in verbal evaluations of distant futures is whether future folks will be caring and moral, and since moral standards change, most would usually rate future morals as low. Also, high interest rates show that we try hard to transfer resources from the future to ourselves. And if we could, we’d also probably make future folks remember and honor us more, and not forget our favorite art, music, stories, etc.

So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.

And yet, it is our unique willingness and ability to let our civilization change and be selected by forces out of our control, and then to tell us that we like it, that has let our species dominate the Earth, and gives us a good chance to dominate the galaxy and more. While our descendants may be somewhat less happy than us, or than our distant ancestors, there may be trillions of trillions or more of them. I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.

By my lights, things would probably have gone badly had our ancestors chosen their collective futures, and I doubt things have changed much lately. Yes, our descendants may not share today’s moral sense, or remember us and our art as much as most of us might like. But they will want something, often get it, and there may be so so many of them. And that could be so very good, by my lights.

So I say let us venture on, out of control, into the great and perhaps terrible civilization that we may become. Yes, it might be even better if a few forward-looking elites could at least steer civilization modestly away from total destruction. But I fear that once substantial steering abilities exist, they may not stay modest.

  • IMASBA

    “Even I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.”

It’s never out of everyone’s control. Yes, there are group processes at work here, but there are always people who have much more influence on the outcome than most of us. They could be kings, emperors, religious leaders or billionaires, but one thing they all have in common is power, and power is the ability to shape the future.

Even so, “the people” have had an influence on development through democracy and laws: we chose not to have a lottery to select people to be sacrificed for medical experiments, we chose to stop eugenics, and we chose not to use nuclear weapons, to stop their spread and to reduce their numbers.

On a side note, I believe morals will continue to change (200 years from now no one will be upset by gay marriage), but that does not mean ethics will change as much (we’re very unlikely to become a Randian society, for example; people would violently resist that and regions would secede), and I believe that in a sufficiently democratic and educated world, economic dogmas will be sacrificed before ethical dogmas: ethics won’t be changed/eliminated to suit capitalism; capitalism will be changed/eliminated to suit ethics.

    • Stephen Diamond

      Robin is correct to emphasize that you can’t assume the coordination problem will be solved to exercise social control over the far future. We mostly haven’t solved that problem. Elites may be very powerful in the present yet conflict in such a manner that they’re powerless to control the future (as, for example, their inability, due to international competitive capitalism, to make headway against global warming or, universally, the fact of ultimately self-defeating wars between powerful elites).

      But, I suspect, Robin has invented a technology—where the genie need merely get out of the box—that puts the greatest possible strain on the human capacity to coordinate if it is to be prevented. I suspect he invented it because it mocks our ability to coordinate.

      • IMASBA

        “as, for example, their inability, due to international competitive capitalism, to make headway against global warming”

This implies they’re actually trying. Treaties can ensure a level playing field for all, capitalism or not, but it’s these same elites that are the driving forces behind opposition to such treaties, and it works, so they ARE successfully determining our future here!

Of course group processes where conflict erases the will of all the participating elites do occur, but they certainly have the power to decide a lot about the future.

“But, I suspect, Robin has invented a technology—where the genie need merely get out of the box—that puts the greatest possible strain on the human capacity to coordinate if it is to be prevented.”

Only given certain legal and economic systems. There are ways of neutralizing the threat without having to resort to AI genocide.

        “I suspect he invented it because it mocks our ability to coordinate.”

        Some men just want to watch the world burn I guess.

      • Alexander White

        “Elites may be very powerful in the present yet conflict in such a manner that they’re powerless to control the future”

        Quite.

        “Robin has invented a technology”

        I’m a newb here, so what is this technology? Do you mean brain emulations? I’m curious why you think this puts the greatest possible strain on our ability to coordinate versus other things.

  • Stephen Diamond

    The fact that people use a one-sided far-mode to evaluate a future they can’t control doesn’t mean they would do the same if they could control the future. The ability to manipulate engages near-mode.

    • IMASBA

      Yes, that’s actually a brilliant argument!

      • http://overcomingbias.com RobinHanson

        It might be a bit more near when choosing than when talking, but there are plenty of near-far experiments that show choices being made in far mode.

      • Stephen Diamond

        A bit more is just right. Not everyone thinks the nearer the better.

  • Jim Stone

Yes. And, because many features of the modern world undermine our needs for autonomy, competence and relatedness, Self-Determination Theory (Deci and Ryan) would also predict modern unhappiness.

    But it might be a coordination problem we can largely solve. If we come to understand our core psychological needs better, we might be able to tweak the modern world (or at least tweak our place in it) to better match our stone-age minds.

    A good deal of thought went into this essay that takes up the point: http://www.workwithflow.com/blog/stop-setting-goals-that-dont-make-you-happy/

    • IMASBA

      “But it might be a coordination problem we can largely solve. If we come to understand our core psychological needs better, we might be able to tweak the modern world (or at least tweak our place in it) to better match our stone-age minds.”

Yes, that’s key. Unlike Robin, I believe we’re still stuck in the second age of mankind. We’re still farmers: we still think we shouldn’t marry until we have our own farm (first real job, or business, even if we have to wait until we’re 35), we’re still bound by religious morals (separate from ethics), and we still have some form of capitalism where the tiniest variation or even sheer chance decides whether you’ll be a billionaire or a bum. However, we might be on the threshold of a third age that will combine the mindset and freedoms of the first age with the technology of the second age.

      In the third age poverty and large economic differences won’t exist anymore, people will have a lot of personal freedom (organized religion will fade), a lot of leisure time, economics won’t be based on transferable currency anymore and there will be plenty of room for personal development. This coincides with technological breakthroughs in nanotech, gentech and spaceflight.

      Well that or we get some sad dystopia of environmental degradation and extreme inequalities.

      • Doug

        “In the third age poverty and large economic differences won’t exist anymore, people will have a lot of personal freedom (organized religion will fade), a lot of leisure time, economics won’t be based on transferable currency anymore and there will be plenty of room for personal development. This coincides with technological breakthroughs in nanotech, gentech and spaceflight.”

        This is a Star Trek fantasy. Current trends indicate moving in the opposite of this direction. Groups with strong organized religion are very quickly outbreeding those without. Economic regions of the world with inequality and low leisure are fast outgrowing their counterparts. For example East Asia vis-a-vis Western Europe. Economic inequality is rattling back up from its mid-20th century historical nadir. Economic resources are becoming increasingly concentrated among a class that relative to the general population is far more disciplined, hard-working, and competitive.

On your final pillar, 20th-century history ran an experiment pitting economic systems that were highly based on transferable currency against economic systems that were significantly less so. The latter economies all collapsed and were absorbed by the former in half a century or less. Going further back, one of the largest themes of history over the past three millennia has been the victory and hegemony of transferable-currency economies and states over their rivals.

Furthermore, nanotech, genetic engineering and spaceflight would, if anything, push away from your desired outcomes. These are expensive technologies that require large capital commitments but offer substantial returns. Cultures that value hard work, saving, future time orientation and sacrifice will reap the outsized gains from early adoption, and marginalize those that don’t.

The only way your dream could come to fruition is by some sort of global coordination to halt or limit competition. Without global coordination, evolutionary pressure will always exist on at least some level, be it individuals, groups, cultures, firms or nations. You may get temporary punctuations of equilibrium that provide heavy abundance and ameliorate competitive pressures. But everything we know about the mechanics and history of evolution tells us that periods like this are lulls, not asymptotes.

      • IMASBA

        Those trends are untenable in the long run (how long will populations put up with increasing inequality? how long can any country remain a low wage country when it’s growing rapidly?), but I did say “Well that or we get some sad dystopia of environmental degradation and extreme inequalities”, didn’t I?

        “Economic resources are becoming increasingly concentrated among a class that relative to the general population is far more disciplined, hard-working, and competitive.”

You mean born with a silver spoon and more shrewd? It’s like I said, current economics are like chaos theory: the tiniest initial difference or even sheer luck means the difference between becoming a billionaire or a bum. Or do you believe that someone who makes 100x more money than you works 100x harder (maybe they work 600 hours per day)?

      • http://twitter.com/AlexeiSadeski Alexei Sadeski

        “how long will populations put up with increasing inequality?”

        Slavery and feudalism lasted a long, long time.

      • IMASBA

Let’s hope the genie is out of the bottle (people being used to not living in slavery, and the wrongness of slavery now being an established global meme). In any case, the slave revolts (from Spartacus to Haiti) show us slaves never really took their fate lying down.

      • Ryan W.

They also show us that oppression alone wasn’t what caused revolts. Revolts didn’t happen when things were worst for the oppressed, but when things started to get better.

      • IMASBA

I believe that’s a fallacy: what historians mean by “better” is usually higher AVERAGE wealth, but in a society with a legal underclass that usually means greater economic inequality. You don’t revolt if the king is just as poor as you are, because if even the king is poor then there’s simply very little to go around (this is far more likely than the entire elite abstaining from wealth voluntarily), which means even poor people are getting their fair share in return for their contribution to society. Only when the elites start becoming richer do the poor revolt, because then they can make the case that they are not getting their fair share, and only then is there something to be gained from a revolt. Rome as a whole became fabulously wealthy through plunder, but that wealth didn’t trickle down to the slaves.

      • VV

        This is a Star Trek fantasy.

        Actually, that’s Communism, although Star Trek TNG does arguably portray a communist utopia.

        Anyway, I don’t think that totalitarian world government with planned economy and Malthusian cut-throat competition are the only two possible outcomes: people above some level of wealth tend to voluntarily limit their fertility, which means that the world population may stabilize at a non-Malthusian level without the need for global coordination and coercion.

      • IMASBA

        Communism? Planned economy? Totalitarian world government?

        Are you sure it was my comment you were reading? It mentioned none of those things, but then again people see what they want to see.

        You are right though that there is a third possible future. A cyclical system, a bit like we have now: inherently unstable, but it can be prevented from self-destructing if someone (like an elected government that keeps changing and/or a central bank) keeps adjusting the dials all the time. Still, future technologies would eventually force such a system to one side or another, it definitely can’t stay like it is now when stuff like EMs gets invented.

      • VV

        How do you run an economy without transferable currency? From each according to his ability, to each according to his need? Where did I hear that?

        Anyway, don’t hold your breath for sci-fi things like brain emulation or magical nanotech. Regardless what Hanson and the other wide-eyed futurists say, they are not inevitable and quite possibly not even likely.

      • IMASBA

        “How do you run an economy without transferable currency?”

You still exchange goods and services; you just can’t save currency, own it in large amounts or pass it on to heirs. There’s no private ownership of land, water bodies or airspace (but these things can certainly be leased).

The government creates new currency every cycle (which could be a year), probably proportional to the total energy production. There is still a derivative of a free market, where you transfer currency to a business if you make use of its goods or services, but there is a limit to how much of this currency can end up in any one person’s pockets (“maximum wage”). The remainder has to be invested into goods and services by the business before the end of the current cycle, or transferred to a fixed number of competing state-owned (but allowed to fail) banks which fulfill the role of shareholders. In addition there’s a “guaranteed minimum income” that replaces things like social security, food stamps, scholarships, etc., and is available to everyone on top of the salary they receive for work in the semi-free market economy.

This way businesses can still compete and pay their personnel and management well (providing an incentive for work, but never making it a necessity, so this society can handle advanced automation), but at the same time wealth doesn’t build up in the hands of a small elite and poverty won’t exist. It has more traits of technocracy and capitalism than communism, to be honest. Anyway, that’s just one possibility. Just because capitalism and communism are so well known doesn’t mean there’s nothing else out there, perhaps waiting to be invented a century from now.
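
        As a toy sketch only – every constant and rule below is a hypothetical illustration of the scheme just described, not a worked-out economic model – one cycle of such a system might look like this:

        ```python
        # Toy, single-cycle sketch of the currency scheme described above.
        # All constants are hypothetical illustrations, not proposals.

        MAX_WAGE = 100_000       # "maximum wage": cap on personal income per cycle
        MIN_INCOME = 20_000      # guaranteed minimum income paid to everyone
        CURRENCY_PER_MWH = 50    # new currency minted per unit of energy produced

        def run_cycle(energy_mwh, wages_paid, business_revenue):
            """Run one cycle (e.g. a year) of the scheme for a single business."""
            minted = energy_mwh * CURRENCY_PER_MWH          # government creates new currency
            kept = [min(w, MAX_WAGE) for w in wages_paid]   # maximum wage enforced
            clawed_back = sum(wages_paid) - sum(kept)       # excess cannot be pocketed
            surplus = business_revenue - sum(wages_paid)    # what is left at cycle's end
            reinvested = 0.6 * surplus                      # arbitrary split, for illustration
            to_banks = surplus - reinvested + clawed_back   # rest goes to state-owned banks
            incomes = [w + MIN_INCOME for w in kept]        # everyone also receives the GMI
            return {"minted": minted, "incomes": incomes,
                    "reinvested": reinvested, "to_banks": to_banks}

        print(run_cycle(energy_mwh=1_000, wages_paid=[40_000, 150_000], business_revenue=400_000))
        ```

        The point is just the flow: minting tied to energy output, a hard cap on what any one person keeps, surpluses forced back into investment or the banks, and a guaranteed minimum income on top.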

      • IMASBA

        “Anyway, don’t hold your breath for sci-fi things like brain emulation or magical nanotech. Regardless what Hanson and the other wide-eyed futurists say, they are not inevitable and quite possibly not even likely.”

Nature locked human intelligence up in a computer the size of a large grapefruit (less if there are no muscles to control), so we definitely know small-sized AIs, including EMs, are possible. As for nanotech, we don’t necessarily require molecular assemblers or anything as fancy as that to greatly improve manufacturing speed and efficiency, which is what it’s all about in any society. Just fully utilizing the genes of bacteria can get us pretty far.

      • Ryan W.

        “In the third age poverty and large economic differences won’t exist anymore”

Why? It seems like the contrary will happen; a few bright individuals will be capable of creating an enormous amount of value while manual labor will continue to decrease in value and perhaps fade entirely in the face of automation. Likely starvation, disease and material deprivation will disappear. But that can still happen in the face of increasing economic inequality. That’s not automatically a dystopia.

        People have been predicting more leisure time since the industrial revolution. It hasn’t panned out yet.

        One of the nice things about the modern age is that income seems at least a little more associated with ability. The rise of venture capital, IPOs, etc. has meant that one didn’t need an aristocratic background to leverage a large business to success.

        I could see religion changing radically since much religion was crafted in response to extreme political repression. But psychologists have predicted the extinction of religion for nearly a century, based on the atheism of their colleagues. So far, if it’s happened it’s happened very slowly. It’s precarious to assume that everyone needs what our circle of friends need to be psychologically healthy.

      • IMASBA

        “Why? It seems like the contrary will happen; a few bright individuals will be capable of creating an enormous amount of value”

Will they, or will this line of thinking be unmasked as a series of fallacies? When even Forbes Magazine admits tournament theory plays a role in executive compensation, and mathematicians write papers claiming management successes are usually down to luck, you know something is wrong. And what’s our idea of labor value based on, anyway? If it takes three times as much work to attach a lead component to a machine as it does to attach a gold component, does that mean that the guy who attaches the gold components is more valuable? The guy attaching the lead components works just as hard and smart, and you need both to make the machine work.

        “People have been predicting more leisure time since the industrial revolution. It hasn’t panned out yet.”

It has in many countries, but it took government regulations and unions to get there. The point is that the sentiments that gave rise to those regulations and unions will not go away in the future: when people notice their share of the wealth going down faster than their share of the work, they will get angry.

        “One of the nice things about the modern age is that income seems at least a little more associated with ability. The rise of venture capital, IPOs, etc. has meant that one didn’t need an aristocratic background to leverage a large business to success.”

All aristocracies started out with commoner “entrepreneurs”. Trust-fund kids are just aristocracy under a different name. Even socialized education can’t stop aristocracies from forming; only a limit to the amount of wealth families can own and inherit could do that.

        “I could see religion changing radically since much religion was crafted in response to extreme political repression. But psychologists have predicted the extinction of religion for nearly a century, based on the atheism of their colleagues. So far, if it’s happened it’s happened very slowly.”

It seems you do not live in Europe (or China). Organized religion is almost gone over here; the United States is actually the exception.

  • Brian Matthews

A really thoughtful piece, Robin. Fantastic post!

    • Doug

      I agree, one of your best points.

  • michael vassar

This isn’t a crazy perspective, but it’s one I disagree with, largely, I think, because I am focused on larger scale changes. I might take 10X the current population of Mongols or Aztecs over the current population, for aggregation utilitarian reasons. Hell, I would probably take the current population of Periclean Athenians, which indicates that it’s not just time or distance of divergence, but I wouldn’t take 10,000X the current population of monkeys, 2X of Mongols or Aztecs, or 1,000,000,000,000,000,000,000,000X zooplankton.

    • komponisto

      Are you taking into account future-directing ability (among other things)? If you really think 60 billion people living in Aztec or Mongol conditions would be a net improvement over the status quo, that would seem to suggest very different (even diametrically opposite) policies from the ones you seem to be pursuing. (It seems like the kind of thing that could actually be achieved by imposing extreme regulation of technology, highly conservative morality, large-scale wealth redistribution and possibly other unpleasant-sounding things that already have substantial support bases in current society.)

      Aggregate utilitarianism seems utterly insane to me, and essentially nobody acts as if they believe it. It’s one of those ethical theories that seem to imply that no one should ever be allowed to have any fun, ever.

      • michael vassar

        Aggregate utilitarianism is insane, but so is the assertion that numbers shouldn’t aggregate at all. You and I seem to have very different intuitions about what can be achieved and how. My intuition is much more friendly, BTW, to the Mongols than to the Aztecs.

      • VV

        Does it involve subjugating other people and grabbing their wealth without producing anything significant on your own?

      • michael vassar

        No, both groups do that, which is why they are used as examples of value drift, but Mongols seem close to default human values, while we and the Aztecs look like orthogonal divergences from default human values.

      • VV

I’d say the contrary: Aztecs had an advanced civilization for their time, with an agricultural economy. Yes, they subjugated other people and demanded tribute, including human sacrifices, but overall they were net producers.

Genghis Khan-era Mongols, on the other hand, had an economy that consisted of stealing from subjugated people and little else.

      • komponisto

        Utility should depend on number, but in a manner so complex that discussing number per se is misleading. It’s really dependent on other things that are correlated with number but typically absent in large-number thought experiment scenarios. (BTW I’m starting to think of “utilitarianism” as a catch-all term for simplicity-of-value theories — all of which are subject to Pascal’s mugging, “torture over dust specks” being a special case of the latter, if you think about it.)

        I would expect the ideal to be something like “foragers with technology”. It seems much easier to get there from “farmers with technology” (now) than from “more farmers without technology” (Aztecs) or “more foragers without technology” (Mongols, though not literally), and that surely counts for a lot in the utility calculation, doesn’t it?

      • michael vassar

        Case largely made, not to the point of my being confident in it, but to the point of my ceding the point.

    • Alexander White

      “Hell, I would probably take the current population of Periclean Athenians”

      Wait, what?

  • Stephen Diamond

    By my lights, things would probably have gone badly had our ancestors chosen their collective futures, and I doubt things have changed much lately.

    Perhaps a religious remnant. “Blind” causality, utterly indifferent to human welfare, produces a better outcome than one where conscious, planning beings exercise a measure of control??? Seems highly implausible, unless cosmic causality isn’t blind after all.

    • Ryan W.

      I don’t think it was ‘blind’ causality which produced a better outcome than conscious planning. It was adherence to fairly simple rules which allowed individuals to act autonomously, to be ‘responsible’ for their actions, to see that exchanges benefited both individuals, etc. This may not create an optimal situation, but it does create a situation which is remarkably dynamic, resilient, robust and capable of generating wealth rather than being incented to spend all advantages immediately.

At the end of the day, this is the root of the power of good religion: it encourages people to abide by such rules.

  • Lord

Morals change, but usually improve; still, we may not wish to face our own behavior under future standards, as we may feel unable to live up to them.

I agree we are more controlled by the natural world and technology than controlling them. If we disperse through the universe in a way that fragments civilization without close contact and exchange, this could end up very badly: rather than being out of control, we would be under the control of civilization, and if civilization fragments, divergence and conflict will rule.

    • Doug

“Morals change, but usually improve; still, we may not wish to face our own behavior under future standards, as we may feel unable to live up to them.”

      The great myth of Whig history is that morals improve throughout time. Of course from your current vantage point as someone with the dominant culture and morals of 2013, it looks to you like the morals of today are better than the morals of 1900 and the morals of 1900 are better than 1750, and 1750 better than 400 B.C.

The simple reason is that culturally, and hence morally, we are closer to the people of 1900 than we are to the people of 400 B.C. So if our moral judgement is a simple distance function from a person’s morals to ours or our culture’s morals, then the past does look like a monotonic increase in morality.

If you believe in some sort of actual process, a historical zeitgeist that’s slowly increasing the morals of the human population on some continued upward trajectory toward an objective morality, that’s not entirely impossible; it’s certainly the case that it’s happening with technology. But then again technology is almost certainly objective, whereas you’d have a hard time convincing me that morality is.

Of course if morals are subjective and relative and it’s simply a distance function, then the future will be a monotonic decrease in morals. The people of 2050 will be more like us than the people of 2200, and them more so than the people of 3000. The simple suggestion is that the people of 2050 will look less moral than the people of today, and the people of 3000 even less so.
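
      To spell the distance picture out (a minimal formalization, with d any metric on “moral positions”): if morals drift steadily along a path m(t) and we judge the culture at time t by

      $$J(t) = -\,d\big(m(t),\, m(t_{\text{now}})\big),$$

      then d shrinks as t approaches the present and grows again afterward, so J peaks at “now” regardless of where the path is actually headed.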

As you allude to in your post, one way to decide is to think about what the people of the past would think about our moral standards. If the former is the case, then the revelation of our moral standards to the past should cause them to change their ways. If the latter is the case, then such a revelation would be met with disgust and rejection.

      We can’t actually carry out this experiment. But say we had a time machine to send back copies of Uncle Tom’s Cabin, Avatar, The God Delusion, The Awakening and a back catalog of Mother Jones to Imperial Spain circa 1500. I’m pretty sure the only thing that would be Enlightened would be the stake the time traveler gets burned on.

      • Lord

If you don’t believe they improve, then being burned at the stake isn’t worse, just different. To be fair, one can’t simply confront those of the past with modern morals and ask them to judge, but must educate them, and, being of that time, they may refuse to learn, just as we might if thrust into the future, though that is not anything I would hope anyone would aspire to. It may be they would still prefer their own times as we may prefer ours, but they and we shouldn’t believe that is due to our moral superiority.

      • Stephen Diamond

If you don’t believe they improve, then being burned at the stake isn’t worse, just different.

Couldn’t you see the case being made that condemning a person to an entire lifetime in prison is worse than burning him at the stake? What’s to say? If, with clairvoyance, you gave a random sample a choice between a life in prison and a life free until it comes time to die, at which point they are burned at the stake, do you have complete confidence about how any person would choose?

      • dmytryl

But that’s not how being burned works; you either get burned, starting now, or you get life-imprisoned, starting now. Getting a second lifetime of imprisonment after your natural lifespan would still be preferred to being burned at the stake.

(A case can certainly be made that burning people is better if you subscribe to one of the vile doctrines where some sort of summation done by one person is supposed to represent the interests of another person more accurately than their own preferences do, but that’s not quite it.)

      • Stephen Diamond

        Make the “vile case” properly, if you would.

        But that’s not how being burned works; you either get burned, starting now, or you get life-imprisoned, starting now.

        What I think my hypothetical shows is that if you reverse the time discounting, you could (arguably) get the opposite result. (Your counter-example augments the time discounting effect in favor of imprisonment.)

        One should prefer, of course, to equate the time discounting effects, but I don’t see how that can be done. But if the preference for imprisonment does depend on time discounting, the ethical case favoring imprisonment is not on the order of a whole different kind of thing.

        Perhaps part of the “vile doctrine” is disregarding time discounting, which is a central part of one’s preferences. Perhaps you consider time discounting a “preference” whereas I consider it a bias. (An interesting difference I’m not sure how to resolve.) But this, it seems to me, lends greater support to the contention that what we think of as moral progress is (largely) subjective. Could you not imagine a relatively rational culture concluding that life imprisonment is far, far worse than people today recognize, worse than burning at the stake, and we just don’t realize it, even victims don’t realize it, because it is so spread out in time?

      • dmytryl

        > One should prefer, of course, to equate the time-discounting effects,

Suppose you are granted 100x life span, then 1 normal life span of life imprisonment, or you are granted 100x life span, followed by being burned at the stake. Here the effect of discounting should be nearly eliminated; the moments in prison are discounted about as much as the moments of being burned.
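
        One way to make this explicit – assuming simple exponential discounting at rate r and a lifespan of length L, which is just one way to formalize the setup – is that both options share the same delay factor:

        $$V_{\text{prison}} - V_{\text{burn}} = e^{-100rL}\left(\int_0^L u_p(s)\,e^{-rs}\,ds - U_b\right),$$

        where u_p is the (negative) utility flow of prison and U_b the (negative) lump disutility of burning. The common factor e^{-100rL} scales both options equally, so the 100x delay cannot flip the preference; only the bracketed comparison remains.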

        > But this, it seems to me, lends greater support to the contention that what we think of as moral progress is (largely) subjective.

Well, what we can see is that it is moving in the direction of satisfaction of preferences.

        > Could you not imagine a relatively rational culture concluding that life imprisonment is far, far worse than people today recognize, worse than burning at the stake, and we just don’t realize it, even victims don’t realize it, because it is so spread out in time?

        This would have to be based on the notion that some sort of calculation of value is objectively correct, where the value is subjective, the choice of how to calculate is subjective and arbitrary, and so on. This society would be wrong, not necessarily about the subject matter but about how it sees objectivity where there’s none.

        There’s another interesting comparison – some people would probably prefer being transported 100,000 years into the past over being burned at the stake, but would prefer to spend a lifetime in prison over being transported 100,000 years into the past or similar “hardships”.

      • Stephen Diamond

        The price of controlling for time discounting in your 100x hypothetical is that the signals are diminished so much that it’s impossible (at least for me) to obtain a clear intuition of which is preferred.

Well, what we can see is that it is moving in the direction of satisfaction of preferences.

I accept your measure, satisfaction of preferences (but only within the tolerances of “thin utilitarianism”: http://tinyurl.com/bfcm89e ), but what reason is there to think subsequent moralities are more adapted to satisfying preferences than prior moralities? There are two questions here that might be important to distinguish. The less relevant one is the anthropological point that, in fact, moralities haven’t gotten steadily more permissive but their development is U-shaped (as per Robin Hanson, rediscovering Morgan and Engels on “foragers” and “farmers”). The second is the philosophical question of whether, during any historical segment, morality can clearly be said to have gotten better at satisfying preferences.

The main problem with maintaining that there is a direction to moral change is the absence of a mechanism that might explain it. While I think we may hope for better in the human future, it remains true that in the past, man has not chosen his future. Only to the extent that better scientific knowledge leads directly to better morals has progress occurred. (One reason for not burning people at the stake is that its rationale regarding effects on the afterlife is no longer accepted, for reasons that are partly scientific. But that effect is limited, and it isn’t clearly directional overall.)

        This would have to be based on the notion that some sort of calculation of value is objectively correct, where the value is subjective, the choice of how to calculate is subjective and arbitrary, and so on. This society would be wrong, not necessarily about the subject matter but about how it sees objectivity where there’s none.

The future society (by hypothesis, as “relatively rational”) wouldn’t see morality as objective: its citizens would actually choose burning over imprisonment. Its people would be sensitized to what’s bad about prolonged confinement and sacrifice of personal autonomy. These things would seem to them horrendous, whether inflicted on others or experienced themselves. At the same time, they might be inured to the experience of brief intense pain and come to consider it trivial by comparison.

There’s another interesting comparison – some people would probably prefer being transported 100,000 years into the past over being burned at the stake, but would prefer to spend a lifetime in prison over being transported 100,000 years into the past or similar “hardships”.

Yes, but then how do you measure “moral progress”? There’s no consistent standard.

      • dmytryl

        > The future society (by hypothesis, as “relatively rational”) wouldn’t see morality as objective: its citizens would actually choose burning over imprisonment.

But what’s the problem then? This society could have moral progress from imprisonment to burning (provided that they started with imprisonment due to how horrid they’d seen it to be).

Given that we prefer imprisonment to being burned at the stake (and did prefer that back in the day), isn’t it a clear-cut case of progress – in terms of existing preferences getting satisfied – that the punishment has shifted from burning to imprisonment?

When there are two moral systems, over time mankind tends to end up choosing one over the other; then another variation happens, and so on. Something we consider devoid of morality would eventually develop greater and greater hypocrisy, as one could get considerable pleasure from that; a highly hypocritical moral system will value consistency and disvalue hypocrisy; a somewhat less hypocritical system will win over the hypocritical one according to the hypocritical system’s own values; and so on and so forth (it’ll also be assisted by clashes where one culture loses to the other). One could build a preference measure out of this, assigning the moralities some value, on the basis of preferences, that ends up generally increasing as preferences are actualized.

      • Stephen Diamond

        But what’s the problem then?

        That there’s no reason to think morality progresses if those moral preferences by which we judge our morality superior to those of the past are really the dependent variable. (Burning had been offered by a poster as a particularly clear example of how morality has advanced; the example depends on our unequivocal agreement that burning was worse than modern kinds of punishments.)

Given that we prefer imprisonment to being burned at the stake (and did prefer that back in the day),…

        I don’t know that they did back in the day. Do you? Life imprisonment hadn’t been implemented, certainly not in “modern” prisons.

        It’s moral progress only if humanity abandoned burnings because they’re morally inferior. If they didn’t clearly think they were worse in their day–or even if people’s dread of burnings wasn’t the cause of their abolition–you can’t speak of morality inherently progressing.

Something we consider devoid of morality would eventually develop greater and greater hypocrisy, as one could get considerable pleasure from that; a highly hypocritical moral system will value consistency and disvalue hypocrisy; a somewhat less hypocritical system will win over the hypocritical one according to the hypocritical system’s own values; and so on and so forth.

        What makes you think moral hypocrisy interferes with effective functioning in cultural competition? Hypocritical societies (say Victorian England) have performed well.

        Why should societies that are less ideal ethically be more hypocritical? A less “moral” society will, at least by some measures, be less hypocritical: the greater the moral pretensions (and the greater the actual sway of morality) the more benefit will accrue to hypocrisy and the more hypocrisy will exist. (For example, today’s antiracist norms occasion more rather than less hypocrisy because people conceal racist views they otherwise would have expressed.) The willingness of many people to be hypocritical about race represents moral improvement.

      • dmytryl

The point is that there’s a certain sequence of changes. Non-moral societies have no qualms about false claims or pretences, inclusive of hypocrisy, and so they naturally get replaced by hypocritical societies – it’s totally free to invent morals, not follow them, and thumb your nose at everyone else; people love doing that. Hypocritical societies, among other things, end up valuing consistency and valuing non-hypocrisy, and small decreases in hypocrisy win over – a little non-hypocrisy ends up having very small cost but a big signalling advantage in a sea of complete hypocrites. And so on.

It is no different from any other progress, really. Progress in stone tools, for example – tools that are less preferred by humans end up being replaced by tools that are more preferred by humans, in a gradual fashion of course (because nobody’s going to replace their stone axe with an iPhone, but they would replace an iPhone 1 with an iPhone 2).

        Picture a ball rolling downhill. The ball doesn’t know where it’s rolling, it acts strictly based on the local conditions. We can look at it from the god’s eye view, and say, ohh, this ball is rolling downhill (except occasionally it bounces uphill and so on). When we are that ball, we can’t do this, we only know local conditions, we don’t know where downhill is, but there’s still a metric you can build out of state transitions, where we are rolling in some direction, we just don’t know it.

      • Stephen Diamond

It is no different from any other progress, really. Progress in stone tools, for example – tools that are less preferred by humans end up being replaced by tools that are more preferred by humans…

        I think I understand our difference. You see human societies as expressing a collective interest in developing their moralities. I see individuals adopting moralities that help them survive at a given level of technological development. These moralities have nothing to do with societal optimality: the sum of the most advantageous moralities for individuals will as often be a societally disadvantageous morality as an advantageous morality. There is no mechanism for the existence of an automatic harmony between the moral requirements for individual survival and those for societal flourishing.

Put another way, in morality the externalities completely dominate. Whereas the same tool developments are individually and socially advantageous (to a very rough first approximation).

Technological progress is constant; moral progress–so far in history–has been sporadic, historically accidental, and as likely to be reversed as to continue (Western civilization being a tiny slice of the human picture). Societies built on slavery are a recent human development.

      • dmytryl

But the same could be said about technology – technologies are adopted by individuals, progress of any note is very sporadic, and as for the coincidence between the socially and the individually advantageous, a society is made of individuals who gain advantage over individuals from other societies.

I think the difference is that you are speaking of progress in the direction which you deem moral. I’m speaking of progression where the next step is determined by human nature and the like; insofar as it is not completely random, there is progress along a path.

edit: also, there’s definitely some progress in how well moral principles can be processed. We look at witch burning, and we think, ohh, how immoral that was. Or even earlier, look at human sacrifices for the sake of good fortune. Look, these folks had some excuses for what they were doing, all the way back. Is it a stretch that as language and reasoning improve, the underlying moral principles, which stay largely the same, are followed through better?

      • Stephen Diamond

But the same could be said about technology…

        “Could be said” in the sense of logically possible; not in the sense of empirically adequate. Technology is discrete; morality is diffuse. This isn’t a logical truth; perhaps it’s a biological truth. Technology is a near-mode product; morality a far-mode product.

I think the difference is that you are speaking of progress in the direction which you deem moral. I’m speaking of progression where the next step is determined by human nature and the like; insofar as it is not completely random, there is progress along a path.

        But note that this ambiguity doesn’t pertain to technology, which distinction shows how it differs from morality.

      • Lord

I do think morals are something we afford, so as we have become more prosperous we have afforded more, but that is not a given. Should we become poorer in the future, we could see fewer of them.

      • Stephen Diamond

        The “moral decay” of the Roman Empire occurred when it was prosperous, and it occurred among the rich.

        Higher morality, it is true, is possible only with wealth. But wealth doesn’t necessarily produce morality, and it hasn’t so far in history.

      • Lord

        I agree. It offers the possibility rather than the inevitability. It is more likely if wealth is increasing, and less if decreasing even if the level is high.

      • Muga Sofer

“The great myth of Whig history is that morals improve throughout time. Of course from your current vantage point as someone with the dominant culture and morals of 2013, it looks to you like the morals of today are better than the morals of 1900 and the morals of 1900 are better than 1750, and 1750 better than 400 B.C.”

        That cultures nearer to us look better is in fact evidence *for* the claim that morals improve over time. If history were a random walk, we would expect cultures to be as likely to move away from us as toward us – there would be no consistent direction. You could argue that the direction we’re moving in has passed some ideal point and is now moving away from moral perfection, but that would be attributed to nostalgia.

      • Tony

        This presupposes there is only one dimension.

      • Muga Sofer

        I’m pretty sure multiple dimensions don’t make random walks any less random.

      • http://overcomingbias.com RobinHanson

        There are other theories besides random walk. My view is that we have different inbuilt moral tendencies for different levels of wealth. A constant wealth trend then makes for a constant morals trend.

      • Muga Sofer

        Well sure, there has to be some underlying cause – I was responding specifically to Doug.

      • Ryan W.

        “But then again technology is almost certainly objective, whereas you’d have a hard time convincing me that morality is.”

If morality is in any way related to truth, and we’ve demonstrated some ideas to be false which weren’t previously known to be false, have we become more moral?

    • Alexander White

      “If we disperse through the universe in a way that civilization fragments without close contact and exchange, this could end up very badly”

      This is something I’ve been wondering about. Would a singleton be possible in a world where humanity inhabits several star systems?

  • robertwiblin

A competitive future likely means high population but low incomes, such that quality of life could plausibly be below zero. Or outright collapse due to the inability to universally regulate dangerous technologies (especially ones which offer the local area some competitive advantage). A non-competitive/singleton future risks a sub-optimally small and self-serving population, or outright stagnation. Count me worried either way.

    • Stephen Diamond

      Count me worried either way.

You should know that (granting the technological assumptions) this is all a projective test regarding near-mode and far-mode preferences, starting with a preference for the averaging or adding version of utilitarianism.

As Robin pointed out at least once, near-mode adds and far-mode averages. (See my Avoiding irrelevance and dilution: Construal-level theory, the endowment effect, and the art of omission — http://tinyurl.com/9sw54v8 — for elaboration.)

      Assuming your major field of study reflects your preferred construal level, you’re intermediate.

      • VV

        If population size is equal to one, then there is no difference between total and average utilitarianism.
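
        Spelled out, with u_i the utility of person i among n people:

        $$U_{\text{total}} = \sum_{i=1}^{n} u_i, \qquad U_{\text{average}} = \frac{1}{n}\sum_{i=1}^{n} u_i,$$

        and at n = 1 both reduce to u_1, so the adding-vs-averaging dispute dissolves for a singleton.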

    • Alexander White

Great comment. I agree that there are risks involved with each type of future. The ideal case seems to me a situation where the major global powers voluntarily cede control on *certain* issues to a singleton — but not *all* issues. The nearest precedents that come to my mind are the European Union and the Catholic Church in medieval Europe. (Does anyone with more grasp of history want to add to this?) What we should want is the minimally necessary control by a singleton for a decent future, with decisions on irrelevant issues remaining competitive. Obviously “decent” is subjective but I would certainly include the survival of homo sapiens as a necessary condition!

How likely is reaching such a minimal singleton? If the singleton had to be formed by one government conquering the others, then obviously it’s not going to happen; at this point one government doing that seems pretty unlikely. If the singleton were formed voluntarily, then I don’t see why it couldn’t be kept sufficiently loose, if that was considered a priority.

  • http://www.facebook.com/marc.geddes.108 Marc Geddes

The ‘game of thrones’ does not stop because you turn the other way. The possibility of a Singleton future exists. Simply pretending it doesn’t exist just cedes the throne to others. There have always been those born to rule. I would have no qualms about seizing power (i.e. AGI) and simply steering the future where I believe it should go. Slytherin house for the win!

  • Wei Dai

    >But they will want something, often get it, and there may be so so many of them. And that could be so very good, by my lights.

    I’m very uncertain that this is sufficient for the future to be very good, by my lights. If it is, then what about a future filled with very simple programs that all want and repeatedly get something also very simple, like the next bit of their input being 1 instead of 0? It seems that merely wanting and getting is not sufficient for something to be of moral value. Something else is needed and although I don’t know what that is, it seems reasonable to doubt if it is present in the future you expect, absent substantial steering abilities.

    • http://overcomingbias.com RobinHanson

      Much more reasonable to fear its absence than to expect it. Humans today are extremely far from being such simple programs, so there would have to be an enormous change that we have no particular reason to expect.

      • VV

        Heroin addicts are pretty much as close as unengineered biological humans can get to wireheads.

      • Wei Dai

        If you don’t think that “they will want something, often get it” is enough to make a future very good, then what is enough? Until I understand that and can expect an out-of-control future to have that property, I wouldn’t push the future towards being more out-of-control, like you’re advocating in this post. It seems a better bet to work out the answer to this question first, or failing that, try to make sure the future will be controlled by some entity both capable of answering such questions (i.e., capable of moral philosophy) and motivated by the answers.

    • Ryan W.

That’s a good question. I realize that there are parts of this question that I can’t answer because I’m too close to the issue. But I think some portion of the response includes things like power, survival and those things which are associated with these traits. We may not care if the future likes our music or honors us, but some might care if our family tree died out. This is not the ONLY thing that is important. I do think it’s part of the equation. Many people are willing to defer gratification for the sake of their offspring (or maybe even strangers) having these things.

  • http://www.facebook.com/beachhouseguy Alexander Gabriel

“But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.”

  • Alexander White

    Great post.

    This I agree with:

    “So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want…This is mostly out of anyone’s control, and yes it could end very badly.”

    This I do not:

    “I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.”

I think that without the ability to strongly coordinate our future – and in particular, through the singleton structure that Bostrom has described – homo sapiens and the world we know will likely be ruined during the 21st century.

    There’s no reason to expect that more capable intelligences will share the fundamental values of homo sapiens or even grant us moral standing. Everything we know about the development and history of complex organisms and humanity militates against them doing that. I think Hugo de Garis makes a short and sweet case here, common sense an even shorter one and Nicholas Agar a somewhat longer one.

    So the most likely scenario in which homo sapiens continues to prosper, I think, is one in which we make sure that posthumans do not come into existence.

    Of course, one might actually not care about homo sapiens. Perhaps we fancy our preferences more cosmopolitan. Maybe humans go extinct and that’s an acceptable price for more capable conscious minds dominating the galaxy. I certainly do not agree with that but I think many transhumanists might. What we define as “badly” depends entirely on our frame of reference.

    I actually find it curious that nobody prominent has stepped into the void and strongly called for the kind of global coordination that would be necessary for humanity to meaningfully choose its future. I mean among the people who are aware of the threat of unfriendly AI and so forth, there seems to be this virtually unanimous and lockstep assumption that humanity planning its future would be either bad or impossible. It seems like a tremendous unfilled intellectual niche exists! Why has no one seized it?

    • Alexander White

      Looking over what I wrote last night, I should be clearer. I don’t think everybody thinks planning our future is bad or impossible. Bostrom takes at least a neutral and maybe mildly positive stance on this. What I mean is that everybody discussing these topics thinks that global coordination *for abstinence from broad areas of scientific research* is impossible or very unlikely. With that said, it’s less obviously a weird situation. But it seems to me that if you believe global coordination or steering is possible at all, then it’s not such a leap to believe that broad abstinence is possible too.

      • Alexander White

        Well, I thought this comment was going to be deleted. Anyway, I edited the post.

  • Gene

    “Forward Looking Elites”

Even if such Elites existed and were capable of acting “for the greater good,” how can we encourage them not to assume that their good is the greater good? What is to prevent them from even selling “regulation” in the guise of the “greater good” when it is really for their own interests?

I.e. – I’ve never met a person who wanted “Population Reduction” who was willing to include themselves in the group “to be culled”. Quite a few such Elites also have large families. Sounds a bit Eugenic to me.

I feel that we need more self-regulation, with downward distribution of the knowledge needed to make decisions, and less regulation by persons who ASSUME that they know what is best for all based upon incomplete knowledge, distortion or even… narratives and ideology.