38 Comments

Worth noting in this context is https://secure.wikimedia.or...

So Robin: *would* you describe yourself as a transhumanist and futurologist?


The transhuman kid deduced it. Now the kid will say something that will immediately induce a major 'crisis of faith' in Bayesians!

The transhuman kid stares at a couple of humans playing dice. They make one roll, and within a millisecond he says:

'A mind's models of reality are like a sort of space. The 'objects' are like predicates - symbolic representations of states - the 'forces' are like the strengths of the relations between predicates - probability distributions - and the 'geometry' is like the concepts or categories. The apparent probability distributions are actually just special cases of curvatures (categorizations) in the geometry of mind space.'

He's a smart kid, this transhuman kid.


The responses to Robin's comments on the podcast are at

http://www.blog.speculist.c...

Refer to the 4th comment, dated December 26, at 6:42 pm. That is the final version.

It is recommended that, if possible, you also listen to the podcast, at

http://www.blogtalkradio.co...

This will help you to determine whether Mr. Hanson's comments were interpreted in an appropriate context. Especially when working off transcripts, devoid of the comments of other participants, it's possible to take things out of context.

Thanks to Mr. Hanson for the opportunity to engage in this discussion.


The following responses to Robin's comments in the podcast should show up on the Speculist blog, assuming the moderator thinks they are appropriate. In any case, they are reproduced below.

[2200 words deleted here - we have a 500 word comment length limit. RH]


Robin writes, "few folks actually care much about the future except as a place to tell morality tales about who today is naughty vs. nice. It would be great if those who really cared more directly about the future could find each other and work together, but alas too many others want to pretend to be these folks to make this task anything but very hard."

I am currently working on personal goals, like my economic security, but when I decide to work on the far-future, I will be able to find those who really care, no problem. I have thought a lot about how to do that, and am willing to share my thinking with anyone who emails me.


...FYI, I’ll be on this futurism radio show tonight at 10-11p EST...

The podcast, from 12/22, was content-rich. Robin started out with comments about decision and prediction markets, but it really started getting interesting at about 32:30 in the podcast. Following is a transcript of most of Robin's comments from that point. I, and hopefully other listeners, will inject our opinions shortly, either here or on the Speculist blog. Robin and Brian Wang were the guests on the show:

"I find it fascinating that people get so worked up about this 'Is the future optimistic or pessimistic' thing, as if it's like taking sides on your favorite football team, or something.

You ask yourself, why does it matter how optimistic we are about the future? I mean, say we thought the future wasn't gonna come quite as fast as other people thought. Still, over the long run, we'll still have this wonderful future, but it won't get here quite as fast.

The rate of improvement, the exact rate of improvement, if it's changing by a factor of two - that seems a lot less important, than that we get there eventually."

"I think you should take seriously the various things that can go wrong, and say, 'yeah, we probably won't hit those things, but if we did, that would be really terrible, and what can we do about that?' Or what's the best thing to do about that? And then you get to the question of how is it best to avoid the various things that could go wrong? Then you get to people saying, 'well, if we just slowed everything down, maybe we could deal with these problems better'. And then you're getting on the other side, people saying, 'well...', like me really saying, 'if you slow things down, you're just gonna cause a whole bunch of more problems.'

But I mean, that seems to be the place to have the conversation. Not about overall optimism or pessimism, but the wisdom or prudence of being slower and more careful - or going gung-ho all the way."

"...our universe is 14 billion years old. So, this era we're in now, of rapid growth, is a very small fraction of the overall history of the past and of the future. So obviously, we're really taken with it. And this is our world, but it just can't last a long time, on a cosmic time scale."

"...and Drexler understood that. And lots of people understood that. And so it's interesting to think about 'well, what will things have to be like, when you reach those fundamental limits.'"

"..., and to see that growth rates would have to slow, and we'd have to be more at the limits of our capacity, and if we're in a competitive world - which I think is likely - you know, if we're evolving and competing, then we would be more closely adapted to our world, in the sense of finding it hard to have different behavior that would really have a competitive advantage."

"We're clearly not very well adapted to our current world. That is, we're apes who suddenly are thrust into this amazing growth spurt. And, we've had some selection and some adaptation over that period. But, for the large part, we are not adapted to this world. We're doing all sorts of things apes would do: looking at the world around us, and imagining this was the jungle or the forest, and dealing with things that way, but we're not making long-term plans, we're not trying to sort of optimize the future of the universe."

"...That's not the kind of creatures we were, and still are."

A short exchange with Brian, and then:

"Well, I think there are two big issues that we can think about now. One big issue, is 'do we make it at all?' If there's a substantial chance of some big disaster we were talking about, that would just destroy it all, then that's something that we could have some leverage over. To try to figure out how to avoid that. That's the existential risk story, so even if you think the chances are only 1 percent, that could be our one leverage on the future, to make sure that one percent doesn't happen."

"...One thing we could do about the long term future, is try to make sure we can make it there, by looking at whatever the risks are, and trying to minimize them."

"And the second big thing that we can do about the long term future, is to consider how much we want to have central coordination. It's a sensitive, dangerous thing to consider, but it is one of the things that will have an influence over the distant future. If somehow we make a world government, and it ends up being strong, then it can end up controlling all of the colonization that goes out from here, and could have a permanent mark on that, if it was powerful enough. I'm not sure that's a good idea, but it definitely is one of the big ways, we may have a mark on the future."

In response to "what if the big government does it wrong?",

Hanson replied,

"Absolutely. First of all, I want to say, it's a question that we should think long and careful about."

Responding to Hanson's suggestion that we think about World Government, Brian Wang asked why would we expect the people in power to give it up?

"Well, we're only a couple of people here, out of billions. So we should realize that our influence may be limited. But still, if we want to think about the question, that's the kind of question to ask. It could be, for example, that sometime in the next century, we will have a tentative world government, and then if that does badly, after that people say, 'no more of that, never again.' And that's how the influence will go, via this very formative, memorable example of how it didn't go very well."

The host asked Robin whether he sometimes thought of a singleton advanced AI as the world government, as opposed to human beings.

"Well, I think that's part of the range of options to keep in mind. But I think people vastly overestimate how easy it might be."

"... but they underestimate how hard it is, to actually manage central coordination. We humans have had large amounts of experience trying to coordinate cities and states and nations, and larger things. And we've seen a lot about how it gets expensive, and how it gets hard. And so, you can call it an AI that's in charge of anything, but it's not clear that just calling it that makes all these problems go away. I mean, it has to have some internal structure, and it has to have an internal way it's organized. And the question is, how does it manage those problems, and how does it overcome the great costs and conflicts that we've had, when we've tried to coordinate on a large scale."

"I'm not gonna say anything is entirely clear, but for example, some people say, 'well, if you just have a bunch of clones of the same thing, and the entire government is run by clones of the same creature, then they won't have any internal conflicts, therefore they will all have peace and coordination.', or something like that."

Brian interjected that the trend seemed to be toward more numerous nations being formed, rather than toward consolidation, i.e. world government. Mr. Hanson responded:

"Over the centuries, the trend is more toward central government. No question, over the longer time scale."

"Nations have had more centralized government, nations have been taxing a larger fraction of income."

"They're doing more actions on a national level, rather than a regional or metropolitan area level. There's just, over the last century, clearly more government."

In response to Brian commenting on how much better life is with more options and individuals having more control over their lives:

"I would say in the past that we've had governments that were too big and too small, and a wide range of variety. I would say that one of the things that governments that were too big did, is they got involved in too many things. And one of the lessons that people learned is to back off on certain kinds of things. On the other hand, the government got involved in certain kinds of things, and they (people) liked it. And they kept doing more of it."

The hosts asked about the role of the futurist in these things, and about what they (Brian and Robin) will be doing at the ForeSight conference.

"So the actual futurist, most business futurists, are focused on a relatively short time scale, about 3-10 years, or not much longer than that. So clearly most demand for futurism, that's sort of practical, is in that time scale."

"But I'm most interested in the longer time scale, that you know after 20-100 years or something, and out there most of the people who do that kind of futurism, are basically entertainers, unfortunately. That's the kind of mode they're in, science fiction, inspirational speakers, whatever else it is."

"And, I'm an academic, I'm a professor, and I know how much people love to see sort of odd, creative, contrarian points of view, but honestly, I think what the future most needs, what understanding the future most needs, is just to take the standard points of view from our various academic fields, and then combine them. Not necessarily to be creative and contrarian, but just to take what computer scientists usually think - sort of the most straightforward, conservative things. What computer scientists think, combine that with what economists think, for example, and put those views together, to make our best estimate of what is likely to happen. And honestly, that doesn't happen today."

"That doesn't happen today, because when an economist looks at the future, when he thinks about computers, he doesn't use what computer scientists think about computers. He uses what he has read in the newspaper, about computers. So each academic discipline takes their own expert field, and they combine that with their amateur image of other fields. And when computer scientists talk about the future of artificial intelligence, or whatever, they don't go talk to economists about what they think. They make up their own economics, like most people do. They make up their own social science that seems intuitively right to them. And then they use that to make forecasts."

"...and that's basically how futurism fails, is that we don't combine expert (something) from multiple fields. That's the kind of thing I want to talk about, and describe some basic insights from."


Sorry, but this article is complete drivel. Came here from a search engine about transhuman augmentation, enhancement, etc. What is this doing near the top results?


So Robin, if someone labeled you a transhumanist would you correct them, and if so, what would you label yourself as in regard to an overall philosophy, if any?


This entry exemplifies the typical incoherent babble I've been reading ever since I got Google to alert me whenever "transhumanism" is mentioned on the internet.

As others have noted, transhuman rights are not the main definition of transhumanism, only a byproduct. You can actually make any argument about civil rights. People who think the sky is yellow might enter into a civil rights battle for their freedom if blue-skiers started to oppress them. However, one would never argue that yellow-skiers are in it to bitch about their rights. Their right to be who they are is beside their main argument, which is that the sky is yellow.

This is the weakest argument I've read so far. You say you're indifferent, but then compare transhumanism to powerful social movements that one cannot afford to ignore. Did you just crank this out because you had to write something for December 22nd, 2009?


Ah, ok. So should we understand your statements about Transhumanists, to refer to 'professionals'?

In other words, you aren't making general statements about any blogger who happens to enjoy sharing their optimism. You're talking about the people who are directly involved in anti-aging research, or in designing more efficient affordable neural implants, who show up at the conferences and tell us everything is going to be ok?

And you are making the claim that most or all of these professional Transhumanists are there primarily because of the prestige?

If that's your intent, Robin, perhaps that's true. It's not unheard of, that well-known professionals are into hero worship, directed at themselves.


Very, very good point. Which 'distracts' us more from making progress - an overly optimistic view of future possibilities, or a dark, depressing attitude which perceives every person as a potential terrorist, every new development as potentially life-ending?


Well said. We simply don't have the capacity, as non-augmented humans, to consider all of the variables effectively.

There might be individuals or groups, who have chanced upon insights which explain some of this complexity. Wolfram comes to mind. But it's impossible to find out who they are, amid the sea of other voices.


"It seems to me you are talking more about techno-optimisms than about modifying people to be beyond human"

Yes, that's right.

"But whatever you call it, such circles are dominated by people who work in the technology areas which are claimed to be the 'reasons for hope.'"

You mean, the circles of techno-optimistic pundits? And you're saying that these T-O circles are primarily in technology fields, as opposed to retail or county government or trucking, e.g.? Sure, that makes sense.

"I think it unlikely people chose those jobs mainly to feel such hope."

That's probably correct. But the urge to share one's optimism with the rest of the world, isn't always about one's job. One might hate one's job, but still enjoy sharing one's view of the future.

"far more plausible that given their jobs they want everyone to more celebrate their contribution."

Well, that is probably one of our main disagreements. However, notice that many of us remain anonymous, or try to be. It isn't all about social status or recognition.

Since taking surveys would be unreliable in this case, because of the ego factor, it's unclear how we would settle this question. I just hope that people like you are open to the possibility that, occasionally at least, Transhumanists have altruistic motives. Whether they are effective is a different question.

Not that it matters what Transhumanist opponents think. The web and the world are too open now to shut them all up.

Thanks for the response, Mr. Hanson. Also, that was a terrific talk on FastForward Radio last night. Comments to follow, no doubt, after we re-listen to the podcast.


Respectfully disagree, TB. Posthumans won't have steely boots; the boots will consist mainly of carbon nanotubes.


Apart from uploads and AI, what central point about the future are those groups missing?


I'd have to agree with EY.

I've actually seen a lot more folks worrying about non-transhuman rights than the other way around. I usually assume if ordinary humans get in the way of posthumans they will be crushed beneath steely boots.
