
Hi, "we now differ greatly not only from our ancestors of ten million years ago, but even from our ancestors of a thousand years ago". In what way do we differ greatly from our ancestors of a thousand years ago? (I just don't see it.)


The problem with letting populations leave Earth and settle the galaxy would be a real one, and not a question of "earth partiality" in the sense of narrow self-attachment or nationalism, as Robin describes it here. That framing misses the actual risk.

The real problem would be that they grow distant from us culturally and/or biologically (with communication limited by the speed of light). After a few thousand years they would effectively be aliens, and may well come back and kill, conquer, or enslave the population of Earth.
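To put rough numbers on that communication limit, here is a minimal sketch (the destination list is illustrative and the distances are approximate published figures): even a single question-and-answer exchange takes nearly a decade for the closest star, and tens of millennia across the galaxy.

```python
# One-way and round-trip light delay to a few destinations, in years.
# Distances in light-years are approximate published figures.
destinations_ly = {
    "Proxima Centauri": 4.2,
    "Sirius": 8.6,
    "Galactic center": 26000.0,
}

for place, ly in destinations_ly.items():
    one_way = ly          # a light-speed signal takes one year per light-year
    round_trip = 2 * ly   # ask a question, then wait for the answer
    print(f"{place}: one-way {one_way:,.1f} yr, round trip {round_trip:,.1f} yr")
```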

This is the same problem with AI.

The fear is not that they outcompete humans and peaceably replace us with a new and more effective species. I agree that would only be sad to the degree that you are emotionally attached to humans. The fear is that we (or our children), or humankind as a whole, are slaughtered, or thrown into desperate misery and hunger, mostly as collateral damage of another species' expansion.


Given that people fear AIs will compete against their own children, I think you need to overcome that worry first. Unless you can convince people to see AIs as *their* children (hard if you aren't an AI researcher), I think you are fighting against evolution.


I consider all the hand-wringing about AI safety to be pointless grandstanding. We would much sooner get carbon emissions down to zero than we would ever stuff the AI genie back into the bottle. There is just too much to be gained.

It doesn't matter what Sam Altman, Bill Gates, Sundar Pichai, Mark Zuckerberg, Geoffrey Hinton, or anyone else feels about it. They are all irrelevant.


In our current overlapping generations, are the later generations richer and nearer to the center of control? That doesn't seem obvious to me.


We *could* think of AI as our descendants; that sorta erases the existential risk problem.


Some imaginable AIs would be fine to have as descendants. I'm not too worried about different "styles of thinking" so much as the AI (or AIs collectively) turning the Earth and all other accessible resources into something as worthless as giant piles of paperclips. :/


In a previous post you wrote:

"Yes, in the default unaligned future, one can hope more that as improved descendants displace older minds, the older minds might at least be allowed to retire peacefully to some margin to live off their investments. But note that such hopes are far from assured if this new faster world experiences as much cultural change in a decade as we’d see in a thousand years at current rates of change."

So it sounds like here you are protective of, and loyal toward, people you expect to kill you (assuming you can be uploaded, or preserved and later uploaded). If I believed that, I'd say: yes, develop the mind control.


Like others here, I take exception to the claim that we are very different from people in medieval times, say. Most, if not all, historians insist on exactly the opposite.


I think this is a valuable and largely correct perspective, but only if you also stress that it's very important that the AI is conscious!


To use a term you often talk about: most of us consider the human race to be sacred.


Does that mean your concern is to make sure AIs get the ability to do their own lithography and otherwise produce computer chips without our help as soon as possible?


Our basic utility function is to continue our DNA lines. Human descendants, or even non-human biological descendants (i.e. future evolved species), fill that function for us, whether they live on Earth or anywhere else. AI entities do not. To the extent that our prejudices cause us to favour future human/biological descendant lines over future AI lines, the instincts behind them seem to be working exactly as they evolved to.


Humans, among our own kind, do in fact kill or otherwise restrain those whose "styles of thinking" are sufficiently incompatible with the functioning of human civilization, through institutions like courts and prisons and militaries. In rare cases (thankfully getting rarer) we also institutionalize people against their will who have not committed serious crimes, e.g. for treatment of mental illness. We typically don't do this pre-emptively; we wait until people act, because among humans 1) there's enough commonality in human "styles of thinking" that the risk of tyranny from giving anyone that power is definitely greater than the benefit gained, and 2) no individual has the power to cause so much damage so quickly that no one else has a chance to react to it. Even in the context of nuclear war, that was the point of developing second-strike capability.

I know you don't think AI is likely to change that constraint, but the possibility that it *could*, or that the impact of not implementing prior restraint through what you're calling mind control could be measured in millions to billions of lives, covers exactly the set of scenarios where your argument fails.

So yes, the future will be scary and in many ways likely incomprehensible to current-me no matter what, and it will likely change faster than I can react to it, and that's fine. It's also fine if that future contains nothing that current-me would recognize as "humans." But the process for getting there matters. Whether the operation of our AI descendants even "counts" as a "style of thinking," matters. Whether the process of value drift *starts from* and *proceeds in a stepwise-reasonable manner from* what current-humans actually value, or some CEV extension of that, vs whether it begins from a much more random set of goals, matters.

I also think you're greatly overestimating the level of value variation and thinking style variation that exists among humans, neurotypical or not. In one sense if you look at history since the invention of writing there's huge variation in societal structure, law systems, economic systems, and ethical systems, but from another POV there's really only a handful of ways humans reason about what makes things good or valuable, and those are fairly stable modulo choice of metaphors and analogies for at least the past 5000 years. Probably a lot longer modulo differences in ability to value things at all, given how easy it is to make other mammals jealous, nervous, angry, content, playful, etc. and perceive them as such.


It seems to me a leap of faith to expect mankind to evolve new characteristics when lacking those characteristics makes a person neither more likely to die nor less likely to reproduce successfully. I would not expect that to happen at all.

Of course, looking at today's birth rates by country I could quite well believe that evolution is going to destroy all wealthy civilizations and bring us a "third world" world. But in that case we won't be settling any other planets, will we?


AI can mimic other human styles of thinking. A page I ran into suggests this could impact the media in the near term, by using AI to nudge the news towards neutrality:

It could even save the news industry. AI can be used as a writing partner to detect bias and help writers overcome their natural bias. The news media is in decline due to loss of public trust, and yet it has avoided fixing what most people view as a poor-quality product. How many industries get away with that? AI can help. The page, https://FixJournalism.com, has a good image illustrating "AI Nudged To Neutral", explores the details, and notes the absurdity of the news industry's position:

'A study by Gallup and the Knight Foundation found that in 2020 only 26% of Americans reported a favorable opinion of the news media, and that they were very concerned about the rising level of political bias. In the 1970s around 70% of Americans trusted the news media “a great deal” or a “fair amount”, which dropped to 34% this year, with one study reporting US trust in news media was at the bottom of the 46 countries studied. The U.S. Census Bureau estimated that newspaper publishers’ revenue fell 52% from 2002 to 2020 due to factors like the internet and dissatisfaction with the product.

A journalist explained in a Washington Post column that she stopped reading news, noting that research shows she was not alone in her choice. News media in this country is widely viewed as providing a flawed product in general. Reuters Institute reported that 42% of Americans either sometimes or often actively avoid the news, higher than 30 other countries with media that manage to better attract customers. In most industries poor consumer satisfaction leads companies to improve their products to avoid losing market share. When they do not do so quickly enough, new competitors arise to seize the market opening with better products.

An entrepreneur who was a pioneer of the early commercial internet and is now a venture capitalist, Marc Andreessen, observed that the news industry has not behaved like most rational industries: “This is precisely what the existing media industry is not doing; the product is now virtually indistinguishable by publisher, and most media companies are suffering financially in exactly the way you’d expect.” The news industry collectively has not figured out how to respond to obvious incentives to improve their products.'
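As a rough illustration of that "writing partner to detect bias" idea, here is a minimal sketch, assuming an OpenAI-style chat API; the model name, prompt, and sample draft are my own illustrative assumptions, not anything taken from the page:

```python
# Hypothetical "AI nudged to neutral" helper: ask a language model to flag
# loaded phrasing in a draft and suggest neutral rewordings.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_bias(draft: str) -> str:
    """Return the model's list of potentially loaded phrases with neutral rewrites."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You are a neutrality editor. List loaded or one-sided "
                        "phrases in the user's draft and suggest neutral rewordings."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(flag_bias("The senator's disastrous scheme predictably enraged hardworking taxpayers."))
```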
