Meh Transhumanism

I was going to title this “Against Transhumanism,” but then realized I’m more indifferent than against; it distracts some folks from what I think important, but probably attracts others; I can’t really tell if there is much overall effect.

The history of the future is the rise and fall of groups claiming to advocate for the future, with advice that just happens to also raise the social status of their affiliate groups.

When I was very young there was still lots of cold war futurism, about how the future would be ours because our side was more industrious, smart, moral, etc.  Each of us could help make a bright future for the world by helping our side win, via full support of our heroic governments and big corporations.  Unity yeah, divisiveness boo.

Then there was enviro-futurism, about how the world will go to hell in a handbasket unless we followed their lead and cared less about greedy materialistic dominance, and more about peaceful submissive less-kids-n-stuff harmony with nature. Flowers n drugs yes, SUVs no. There is still lots of this around.

Then arose techno-optimism, about how everything depends on our brilliant engineers, especially heroic ones who risked personal failure to fight incomprehension and entrenched interests to bring us innovative new techs.  We should get government off their backs, celebrate their genius, and maybe sleep with them once in a while.  Those old pesky problems of war, environment, jealousy, etc. will be swept away in a tsunami of new tech changing all.

Then there was transhumanism, a clever appropriation of the reigning academic storyline of defending minorities oppressed by a reigning majority.  Here the minority is not an ethnicity or sexual orientation, but imagined future tech-modified people.  Conservatives who accepted other kinds of diversity could be goaded into opposing this kind, allowing advocates to heroically defend against such prejudice, and get tenure in the process. Rah disliked future folk.

Yes I’d mostly favor letting future folks change themselves, yes tech is powerful, yes environmental problems are real, and yes it mattered who won the cold war.  I’m not so much against the main claims of these groups as I am against their concept of themselves as the main folks who care about the future.  These just won’t be the central issues when the future arrives.  Yet when the media reports on the future, reporters pretty much only ever quote these sort of futurists, who have hijacked the future to support their side of certain current disputes.

Truth be told, folks who analyze the future but don’t frame their predictions or advice in terms of standard ideological categories are largely ignored, because few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice.  It would be great if those who really cared more directly about the future could find each other and work together, but alas too many others want to pretend to be these folks to make this task anything but very hard.

FYI, I’ll be on this futurism radio show tonight at 10-11p EST, in preparation for speaking at this Foresight conference Jan. 17. I’ll also debate Mencius Moldbug there on futarchy Jan 16.

  • Nancy Lebovitz

    Truth be told, folks who analyze the future but don’t frame their predictions or advice in terms of standard ideological categories are largely ignored,

    Could you list the ones you respect?

  • Sperling

    Mencius is going to kick your autistic ass. Be prepared to be embarrassed, Hanson.

  • Eric Johnson

    You cis-humanists are deplorable bathers in stagnant mire, empty of the glorious Will to Power which is the source of all value and meaning. I certainly deserve high status for making this known.

    I’m looking forward to your dialectical collision with Mencius. Only one of these intractably contrary value systems can erupt in victory, as the merciless cosmic agon of the wills turns the page on the other for the last time.

    At least, I’m pretty sure that’s all true. There are a few points I would have to re-check in my philosophy books before I would claim to be absolutely certain.

  • Jeffrey Soreff

    There are at least three more (two with considerable overlap with cold war futurism): atomic-age and space-age futurism, and, prior to those, socialist futurism (which has roots well older than the cold war).

    I’d think that a lot of these are just “When the only tool you have is a hammer, it is tempting to treat everything as if it were a nail”: no one has the compute capacity, memory, information, or communications ability to think through a future scenario with anything like the complexity that the present has or the real future will have. They all _have_ to fixate on some subset of the possibilities and their distinguishing features.

    • TranshumanReflector

      Well said. We simply don’t have the capacity, as non-augmented humans, to consider all of the variables effectively.

      There might be individuals or groups who have chanced upon insights which explain some of this complexity. Wolfram comes to mind. But it’s impossible to find out who they are, amid the sea of other voices.

  • Ed

    It seems way less likely that people do transhumanism in order to talk about minority oppression than that they do it because mind-enhancement is scary as shit. Transhumanism is more similar to flat-earthism than to the other three examples you gave. Its emotional appeal lies in its turning something generally perceived as a threat into a good thing.

  • Bill

    Does fear, rather than hope, work better at motivating people to follow the futurists who claim to know the path for the future, and to follow that path?

    Unfortunately, I think I know.

    Futurists who claim bad things unless we act are more likely to have followers than futurists who claim we could have an even better society if we acted.

    Just ask Thaler why.

    • TranshumanReflector

      Very, very good point. Which ‘distracts’ us more from making progress – an overly optimistic view of future possibilities, or a dark, depressing attitude which perceives every person as a potential terrorist, every new development as potentially life-ending?

  • Eliezer Yudkowsky

    That’s a pretty restrictive concept of “transhumanism”. The title of this post should be “Meh Transhuman Rights”, a meh I’d be happy to agree with.

    • Carl Shulman


    • Ben Goertzel

      Eli’s comment seems right-on to me….

      The “transhumanism” you critique seems to have little to do with the transhumanism I’m involved with — and I’m pretty deeply involved with the transhumanist community…. Advocacy for transhuman rights is a small % of what transhumanism is about.

      Also, I don’t think that transhumanists are the only ones who CARE about the future, but they seem to be ALMOST the only ones who spend a lot of time and energy trying to rationally THINK about the future and the potential that it’s going to be radically dramatically different from the past…

      To me this post drives home the general bad-ness of the term “transhumanism” though, at least in English. The “-ism” seems to turn people off and lead to a lot of wrong interpretations, like the one in this post 😉

    • Robin Hanson

      Issues around whether humans should change into transhumans dominate academic transhuman publications. I’d be interested to see some other breakdown of conversations about future modified folks.

      • TranshumanReflector

        Ah, ok. So should we understand your statements about Transhumanists, to refer to ‘professionals’?

        In other words, you aren’t making general statements about any blogger who happens to enjoy sharing their optimism. You’re talking about the people who are directly involved in anti-aging research, or in designing more efficient affordable neural implants, who show up at the conferences and tell us everything is going to be ok?

        And you are making the claim that most or all of these professional Transhumanists are there primarily because of the prestige?

        If that’s your intent, Robin, perhaps that’s true. It’s not unheard of, that well-known professionals are into hero worship, directed at themselves.

    • Thom Blake

      I’d have to agree with EY.

      I’ve actually seen a lot more folks worrying about non-transhuman rights than the other way around. I usually assume if ordinary humans get in the way of posthumans they will be crushed beneath steely boots.

      • TranshumanReflector

        Respectfully disagree, TB. Posthumans won’t have steely boots; the boots will consist mainly of carbon nanotubes.

  • mitchell porter

    “transhumanism, a clever appropriation of the reigning academic storyline of defending minorities oppressed by a reigning majority”

    This is a weird statement coming from someone who was on or near the scene. North America has always been the center of transhumanism, and you live there and I don’t, but I thought the wellsprings of the movement were (1) psychological factors like not wanting to die and wanting to one day be superhuman (2) new material realities like biotechnology and computers everywhere, and that the culture war with “bioconservatives” was a late and decidedly secondary development.

    • Ben Goertzel

      Mitchell — Robin lives in the DC metro area, like I do. Trust me, this is not really the nexus of radical transhumanism ;-D

  • mjgeddes

    The conceptions of the ‘future’ promoted by ‘futurists’ are mostly laughable bullshit anyway. The idea is to raise status by *sounding* impressive (social signalling): throw in lots of jargon, impressive-sounding math, mentions of prestigious people, organizations, awards, twitter profiles etc.

    As I pointed out in another post, I believe that every single deep insight can be described in 50 words or less (seriously!) – every single new deep insight, no matter how radical or advanced, should obey the ’50-word’ rule. Skeptical? Watch:

    Transhuman kid watches apple falling, 1 millisecond later says:

    ’The speed of light is independent of the speed of the source, the laws of physics are the same in all reference frames and gravity is locally equivalent to acceleration’. (Relativity Theory in 30 words)

    Remember: to a transhuman it really is ALL that simple, it really is possible to toss off every single insight about ALL the secrets of the universe in chunks of 50 words or less, simply by picking a suitably high level of abstraction. Let’s bear that in mind every time we hear someone trying to sound impressive for the sake of social signalling.

    • clayton

      using the term social signaling is social signaling

      (unless it’s meta)

    • Tim Tyler

      That kid sure has some pretty sensitive sensory equipment! It is probably a machine – rather than any sort of human variant.

      • mjgeddes

        The transhuman kid deduced it. Now the kid will say something that will immediately induce a major ‘crisis of faith’ in Bayesians!

        Transhuman kid stares at a couple of humans playing dice. They make one roll and in another millisecond he says:

        ‘A mind’s models of reality are like a sort of space. The ‘objects’ are like predicates – symbolic representations of states, the ‘forces’ are like the strengths of the relations between predicates – probability distributions – and the ‘geometry’ is like the concepts or categories. The apparent probability distributions are actually just special cases of curvatures (categorizations) in the geometry of mind space.’

        He’s a smart kid this transhuman kid.

  • TranshumanReflector

    >>but then realized I’m more indifferent than against<>it distracts some folks from what I think important<>I can’t really tell if there is much overall effect.<>The history of the future is the rise and fall of groups claiming to advocate for the future, with advice that just happens to also raise the social status of their affiliate groups.<>It would be great if those who really cared more directly about the future could find each other and work together<<

    You mean, like donating to centers of cancer research, or attending Transhumanism conferences? Perhaps I'm misunderstanding Robin's statement here, but these things are clearly happening.

    • TranshumanReflector

      This post was missing much of the original text. It looks like the blogging software misunderstood some of the brackets as HTML, rather than quotes. Following is the original intended response to Robin’s post:

      “but then realized I’m more indifferent than against”

      9 paragraphs of indifference. If Robin were actually for or against Transhumanism, this particular article could be a short book.

      “it distracts some folks from what I think important”

      Well, we can’t have that.

      “I can’t really tell if there is much overall effect.”

      Can’t argue with that, if I understand it correctly. It’s difficult to tell if the listeners are the choir, or if formerly disinterested parties are being moved to action or interest, as a result of futuristic pronouncements. Perhaps some surveys will be implemented, some day, to try and find out.

      “The history of the future is the rise and fall of groups claiming to advocate for the future, with advice that just happens to also raise the social status of their affiliate groups.”

      Robin and proponents of the current blog entry,

      Since we can’t read the minds of the Kurzweils and other prophets of good times ahead, I can speak only from introspection, and can claim only as much honesty as self-awareness allows.

      When I speak to friends or to the web about the future, it is with an attitude of optimism. It’s a way of saying, “Cheer up. There are no guarantees, but there are plenty of reasons to hope.”

      No doubt, there’s an ego side with many of us, akin to the thrill of being able to predict the results of an election or a horse race or the status of next year’s economy.

      There’s a degree of self-delusion going on, as well; we’d all like to convince ourselves that unexpected large-impact events won’t figure into any of our equations, even though they seem to be part of the fabric of reality. We like to believe we can both control, and predict.

      However – as far as I can tell, the primary motive for speaking about the future in a Transhumanist tone, is to help instill a spirit of optimism and hope.

      I could be wrong, but that seems to be the case.

      “It would be great if those who really cared more directly about the future could find each other and work together”

      You mean, like donating to centers of cancer research, or attending Transhumanism conferences? Perhaps I’m misunderstanding Robin’s statement here, but these things are clearly happening.

    • Robin Hanson

      It seems to me you are talking more about techno-optimisms than about modifying people to be beyond human. But whatever you call it, such circles are dominated by people who work in the technology areas which are claimed to be the “reasons for hope.” I think it unlikely people chose those jobs mainly to feel such hope; far more plausible that given their jobs they want everyone to more celebrate their contribution.

      • TranshumanReflector

        “It seems to me you are talking more about techno-optimisms than about modifying people to be beyond human”

        Yes, that’s right.

        “But whatever you call it, such circles are dominated by people who work in the technology areas which are claimed to be the ‘reasons for hope.'”

        You mean, the circles of techno-optimistic pundits? And you’re saying that these T-O circles are primarily in technology fields, as opposed to retail or county government or trucking, e.g.? Sure, that makes sense.

        “I think it unlikely people chose those jobs mainly to feel such hope.”

        That’s probably correct. But the urge to share one’s optimism with the rest of the world, isn’t always about one’s job. One might hate one’s job, but still enjoy sharing one’s view of the future.

        “far more plausible that given their jobs they want everyone to more celebrate their contribution.”

        Well, that is probably one of our main disagreements. However, notice that many of us remain anonymous, or try to be. It isn’t all about social status or recognition.

        Since taking surveys would be unreliable in this case, because of the ego factor, it’s unclear how we would settle this question. I just hope that people like you are open to the possibility that, occasionally at least, Transhumanists have altruistic motives. Whether they are effective is a different question.

        Not that it matters what Transhumanist opponents think. The web and the world is too open now, to shut them all up.

        Thanks for the response, Mr. Hanson. Also, that was a terrific talk on FastForward Radio last night. Comments to follow, no doubt, after we re-listen to the podcast.

  • Will Brown


    Have you included in your considerations that the examples you cite are all successful story-telling venues? Might it be that they receive attention because they are such readily understandable story concepts, instead of being the most plausible or likely developments?

    I submit a better definition of transhuman is: that state of being which sustainably exceeds the capabilities of the unaltered human. Thus, an exobrain is transformative but a present-tech tattoo is not.

    I’ll be listening to the FFR blogcast with interest.


  • Aron

    These posts devoted to signalling are the least interesting.

    • Liron

      I think this is a great post. There’s no upvote functionality so I’m just putting it out there.

  • Tim Tyler

    Robin is a better futurist than most – because he really cares about the future, whereas others only pretend to do so? It seems like shameless self-promotion.

    If my international stereotypes are right, that is dandy in America (where Robin is), but shows a lack of modesty in England (where I am).

  • Robert Wiblin

    Apart from uploads and AI, what central point about the future are those groups missing?

  • washbash

    This entry exemplifies the typical incoherent babble I’ve been reading ever since I got Google to alert me whenever “transhumanism” is mentioned on the internet.

    As others have noted, transhuman rights is not the main definition of transhumanism, only a byproduct. You can actually make any argument about civil rights. People who think the sky is yellow might enter into a civil rights battle for their freedom if blue-skiers started to oppress them. However, one would never argue that yellow-skiers are in it to bitch about their rights. Their right to be who they are is beside their main argument, which is that the sky is yellow.

    This is the weakest argument I’ve read so far. You say you’re indifferent, but then compare transhumanism to powerful social movements that one can not afford to ignore. Did you just crank this out because you had to write something for December 22nd, 2009?

  • haig

    So Robin, if someone labeled you a transhumanist would you correct them, and if so, what would you label yourself as in regard to an overall philosophy, if any?

  • Transhumanism

    Sorry, but this article is complete drivel. Came here from a search engine about transhuman augmentation, enhancement, etc. What is this doing near the top results?

    • Marius Fusariu

      it is well known that bing is against transhumanism

  • TranshumanReflector

    …FYI, I’ll be on this futurism radio show tonight at 10-11p EST…

    The podcast, from 12/22, was content-rich. Robin started out with comments about decision and prediction markets, but it really started getting interesting at about 32:30 in the podcast. Following is a transcript of most of Robin’s comments from that point. I and hopefully other listeners will inject our opinions shortly, either here or on the Speculist blog. Robin and Brian Wang were the guests on the show:

    “I find it fascinating that people get so worked up about this ‘Is the future optimistic or pessimistic’ thing, as if it’s like taking sides on your favorite football team, or something.

    You ask yourself, why does it matter how optimistic we are about the future? I mean, say we thought the future wasn’t gonna come quite as fast as other people thought. Still, over the long run, we’ll still have this wonderful future, but it won’t get here quite as fast.

    The rate of improvement, the exact rate of improvement, if it’s changing by a factor of two – that seems a lot less important, than that we get there eventually.”

    “I think you should take seriously the various things that can go wrong, and say, ‘yeah, we probably won’t hit those things, but if we did, that would be really terrible, and what can we do about that?’ Or what’s the best thing to do about that? And then you get to the question of how is it best to avoid the various things that could go wrong? Then you get to people saying, ‘well, if we just slowed everything down, maybe we could deal with these problems better’. And then you’re getting on the other side, people saying, ‘well…’, like me really saying, ‘if you slow things down, you’re just gonna cause a whole bunch of more problems.’

    But I mean, that seems to be the place to have the conversation. Not about overall optimism or pessimism, but the wisdom or prudence of being more slower, careful – or going gung-ho all the way.”

    “…our universe is 14 billion years old. So, this era we’re in now, of rapid growth, is a very small fraction of the overall history of the past and of the future. So obviously, we’re really taken with it. And this is our world, but it just can’t last a long time, on a cosmic time scale.”

    “…and Drexler understood that. And lots of people understood that. And so it’s interesting to think about ‘well, what will things have to be like, when you reach those fundamental limits.”

    “…, and to see that growth rates would have to slow, and we’d have to be more at the limits of our capacity, and if we’re in a competitive world – which I think is likely – you know, if we’re evolving and competing, then we would be more closely adapted to our world, in the sense of finding it hard to have different behavior that would really have a competitive advantage”

    “We’re clearly not very well adapted to our current world. That is, we’re apes who suddenly are thrust into this amazing growth spurt. And, we’ve had some selection and some adaptation over that period. But, for the large part, we are not adapted to this world. We’re doing all sorts of things apes would do: looking at the world around us, and imagining this was the jungle or the forest, and dealing with things that way, but we’re not making long-term plans, we’re not trying to sort of optimize the future of the universe.”

    “…That’s not the kind of creatures we were, and still are.”

    A short exchange with Brian, and then:

    “Well, I think there are two big issues that we can think about now. One big issue, is ‘do we make it at all?’ If there’s a substantial chance of some big disaster we were talking about, that would just destroy it all, then that’s something that we could have some leverage over. To try to figure out how to avoid that. That’s the existential risk story, so even if you think the chances are only 1 percent, that could be our one leverage on the future, to make sure that one percent doesn’t happen.”

    “…One thing we could do about the long term future, is try to make sure we can make it there, by looking at whatever the risks are, and trying to minimize them.”

    “And the second big thing that we can do about the long term future, is to consider how much we want to have central coordination. It’s a sensitive, dangerous thing to consider, but it is one of the things that will have an influence over the distant future. If somehow we make a world government, and it ends up being strong, then it can end up controlling all of the colonization that goes out from here, and could have a permanent mark on that, if it was powerful enough. I’m not sure that’s a good idea, but it definitely is one of the big ways, we may have a mark on the future.”

    In response to “what if the big government does it wrong?”,

    Hanson replied,

    “Absolutely. First of all, I want to say, it’s a question that we should think long and careful about.”

    Responding to Hanson’s suggestion that we think about World Government, Brian Wang asked why would we expect the people in power to give it up?

    “Well, we’re only a couple of people here, out of billions. So we should realize that our influence may be limited. But still, if we want to think about the question, that’s the kind of question to ask. It could be, for example, that sometime in the next century, we will have a tentative world government, and then if that does badly, after that people say, ‘no more of that, never again.’ And that’s how the influence will go, via this very formative, memorable example of how it didn’t go very well.”

    The host asked Robin whether he sometimes thought of a singleton advanced AI as the world government, as opposed to human beings.
    “Well, I think that’s part of the range of options to keep in mind. But I think people vastly overestimate how easy it might be. ”

    “… but they underestimate how hard it is, to actually manage central coordination. We humans have had large amounts of experience trying to coordinate cities and states and nations, and larger things. And we’ve seen a lot about how it gets expensive, and how it gets hard. And so, you can call it an AI that’s in charge of anything, but it’s not clear that just calling it that, makes all these problems go away. I mean, it has to have some internal structure, and it has to have an internal way it’s organized. And the question is, how does it manage those problems, and how does it overcome the great costs and conflicts that we’ve had, that we’ve tried to coordinate on a large scale. ”

    “I’m not gonna say anything is entirely clear, but for example, some people say, ‘well, if you just have a bunch of clones of the same thing, and the entire government is run by clones of the same creature, then they won’t have any internal conflicts, therefore they will all have peace and coordination.’, or something like that.”

    Brian injected that the trend seemed to be that more numerous nations tend to be formed, rather than a trend toward consolidation, i.e. world government. Mr. Hanson responded.

    “Over the centuries, the trend is more toward central government. No question, over the longer time scale.”

    “Nations have had more centralized government, nations have been taxing a larger fraction of income.”

    “They’re doing more actions on a national level, rather than a regional or metropolitan area level. There just, over the last century, clearly more government. ”

    In response to Brian commenting on how much better life is with more options, and individuals have more control over their lives,

    “I would say in the past that we’ve had government that were too big and too small, and a wide range of variety. I would say that one of the things that governments that were too big did, is they got involved in too many things. And one of the lessons that people learned is to back off on certain kinds of things. On the other hand, the government got involved in certain kinds of things, and they (people) liked it. And they kept doing more of it. ”

    The hosts asked about the role of the futurist in these things, and about what they (Brian and Robin) will be doing at the ForeSight conference.

    “So the actual futurist, most business futurists, are focused on a relatively short time scale, about 3-10 years, or not much longer than that. So clearly most demand for futurism, that’s sort of practical, is in that time scale.”
    “But I’m most interested in the longer time scale, that you know after 20-100 years or something, and out there most of the people who do that kind of futurism, are basically entertainers, unfortunately. That’s the kind of mode they’re in, science fiction, inspirational speakers, whatever else it is.”

    “And, I’m an academic, I’m a professor, and I know how much people love to see sort of odd, creative, contrarian points of view, but honest, I think what the future most needs, what understanding the future most needs, is just to take the standard points of view from our various academic fields, and then combine them. Not necessarily to be creative and contrarian, but just to take what computer scientists usually think that’s sort of the most straightforward, conservative things. What computer scientists think, combine that with economists think, for example, and put those views together, to make our best estimate of what is likely to happen. And honestly, that doesn’t happen today.”

    “That doesn’t happen today, because when an economist looks at the future, when he thinks about computers, he doesn’t use what computer scientists think about computers. He uses what he has read in the newspaper, about computers. So each academic discipline takes their own expert field, and they combine that with their amateur image of other fields. And when computer scientists talk about the future of artificial intelligence, or whatever, they don’t go talk to economists about what they think. They make up their own economics, like most people do. They make up their own social science that seems intuitively right to them. And then they use that to make forecasts.”

    “…and that’s basically how futurism fails, is that we don’t combine expert (something) from multiple fields. That’s the kind of thing I want to talk about, and describe some basic insights from.”

    • TranshumanReflector

      The following responses to Robin’s comments in the podcast should show up on the Speculist blog, assuming the moderator thinks they are appropriate. In any case, they are reproduced below.

      [2200 words deleted here – we have a 500 word comment length limit. RH]

  • Richard Hollerith

    Robin writes, few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice. It would be great if those who really cared more directly about the future could find each other and work together, but alas too many others want to pretend to be these folks to make this task anything but very hard.

    I am currently working on personal goals, like my economic security, but when I decide to work on the far-future, I will be able to find those who really care, no problem. I have thought a lot about how to do that, and am willing to share my thinking with anyone who emails me.
