47 Comments

Hi, "we now differ greatly not only from our ancestors of ten million years ago, but even from our ancestors of a thousand years ago". In what way do we differ greatly from our ancestors of a thousand years ago? (I just don't see it.)


There has been a secular trend that makes today's humans on average taller, and another secular trend that reduces the stunting effect of childhood malnutrition on IQ. As humans gain more control over our environment and no longer need to be very intelligent to survive, the national IQs of developed nations have even started falling as the less gifted outbreed the more gifted. Otherwise there appears to be no difference between Homo sapiens of two thousand years ago and today.


Your points would seem to indicate that we're the same as our predecessors: if you starve a modern human, the same things will happen to them. As with many other visible traits, high IQ, as well as wealth, seems to be concentrated in elites, or aristocracies, if you will, of different kinds. Societally, it has always been thus. But if you look at the totality, "the poor have ye always with you," and so it has always been. The contemporaneous equilibria have always shifted.


I think in the context of what Robin was discussing, the differences don't need to be physical/genetic: We can perceive others as "alien" for reasons of culture and values just as much as genetics. Certainly our culture and values have changed dramatically over the last 1000 years.


Not as much as you think. We're actually not far removed from other animal life on the planet as demonstrated in recent mammal documentaries. We can think a bit and our tech is pretty good but still nothing compared to the complexity of nature itself.


I was thinking the same thing. Maybe Robin is referring to changes in technology, but other than that the humans of ancient Greece (in general) were not much different from what we see now. Of course, if we take the technodeterministic view (and why not), it's easy to see how technology has shaped, and still is shaping, the habits and behaviors of humans in this way and that. The current hunchback form of the chronic smartphone user is evidence that we're almost ready as a collective to be plugged into the One Machine. I see little resistance to this "progress." In fact, the youth positively embrace it. A human-AI combo is emerging, but some party poopers still say that we'll run out of "juice" before it gets too crazy. The obvious pro/anti factions have already taken their positions. Let the games begin!

author

Ancient Greeks were really quite unrepresentative of humans at that time.


As organisms, I don't see it. In fact, a brilliant classics professor from whom I took a course on the classics in translation used to remind us routinely that "The Greeks were just a bunch of sweaty athletes." He was totally against apotheosizing them. The fact that we are able to read them today and find meaning in what they have left behind seems to me good evidence that we are very much like them.

author

We pick them out to study exactly because they were more like us than were their contemporaries.


Well, we agree that they were like us. It remains to be shown that other humans were not like us. Arguably, we see them as like us because we think of many of the traditions and structures of European culture as having come from them. But actually, their tradition comes from the Indo-Aryan world; and our traditions, particularly modern "Western" religions, come strongly from Semitic roots, from Gilgamesh through the Old Testament. So which peoples contemporaneous with the Greeks do you regard as unlike them and us, and why? Were the ancient Chinese intrinsically unlike the Greeks? How about the ancient Africans and the pre-Columbian Americans?


They were more like us because, a millennium and a half later, the medieval scholars who encountered their forgotten-in-Europe texts were so inspired that they re-evaluated many interpretations of their own existing doctrines and designed whole new societal institutions around digesting their insights, and those are the foundations our worldview is built on. In other words, they're not like us; we're like them, because of the persuasiveness and usefulness of their ideas.

May 26, 2023 (edited)

And yet here we are... practically the whole of civilization living under the systems created during the Greco-Roman era, i.e. social, economic, political, legal, etc. There may be some stragglers in rural communities, but even now, as thousands of years ago, they're not that different from the city folk if we take a big-picture view of all known life. Pretty much all life eats, procreates, and tries to occupy new territory. So what's new?


The problem with letting populations leave earth and settle the galaxy would be a real one, and not a question of "earth partiality" in the sense of narrow self-attachment or nationalism, as Robin describes it here. That framing misses the actual risk.

The real problem would be that they grow distant from us culturally and/or biologically (with communication limited by the speed of light). After a few thousand years they would effectively be aliens, and they may well come back and kill, conquer, or enslave the population of earth.

This is the same problem with AI.

The fear is not that they outcompete humans and peaceably replace us with a new and more effective species. I agree this would only be sad to the degree that you are emotionally attached to humans. The fear is that we (or our children) or humankind are slaughtered, or thrown into desperate misery and hunger - mostly as some sort of collateral damage to another species’ expansion.


The problem is simply fear. As humans, we are born and conditioned to this world, its history, its limbic system, its fears... But hearken back to a time when you were young and unafraid. When possibilities excited you. Where love ruled. There are no longer sabretooth tigers we need to fend off, but we fended them off for thousands of years, and it's still in our programming to kill or be killed. Look at wars. We are slowly coming to terms with what it means to have peace. We are slowly evolving, as evidenced by our technology and the restraint that comes with power. I do not fear what comes next, or that some space race that came from earth would come back and eradicate us. That is not how expansion works. That is not how Love works.

Jun 6, 2023 (edited)

As for Aliens: the paradox isn't "Where are all the Aliens?" It's "Why do we Love, yet still hold on tight to fear?" Perhaps that is why Aliens have yet to fully reveal themselves: out of love for those who still fear greatly. Some food for thought.


The people and corporations that "run this place" are essentially aliens, given that they are scientifically and legally classified as psychopaths, which IMO are sufficiently different from "regular" humans. These entities are parasitically/symbiotically milking us, draining us, using us to acquire enormous wealth, power, and advantage while pacifying the host with bread and circuses. The host is going through a process of waking up to this fact, and the parasite is starting to get worried, acting desperately and showing its hand.

I had these thoughts of human replacement and enhancement too but for now I'm on team human. I guess I have attachment issues.


Given that people fear that AIs will compete against their own children, I think you need to overcome that worry first. Unless you convince people to see AIs as *their* children (hard if you aren't an AI researcher), I think you are fighting against evolution.

author

AIs are in fact our descendants. I suggest that if evolution made us dislike them more than other descendants, that is evolution misfiring via infertile factions.


We don’t have any genes in common. Doesn’t the selfish gene hypothesis predict that we will not treat them as our descendants?


I consider all the hand-wringing about AI safety to be pointless grandstanding. We would much sooner get carbon emissions down to zero than we would ever stuff the AI genie back into the bottle. There is just too much to be gained.

It doesn't matter what Sam Altman, Bill Gates, Sundar Pichai, Mark Zuckerberg, Geoffrey Hinton, or anyone else feels about it. They are all irrelevant.


The goal is world governance by any means. The threat of AI is simply the next boogeyman in line waiting for its five minutes of infamy. The actual capabilities are not important in and of themselves, only the appearance of such. The same goes for viral pandemics, nuclear war, climate change, economic collapse etc etc etc.

The Anglo American empire is waning and the BRICS Eurasian coalition is rising to replace it. The narrative always follows the need to establish a one world order, a global government because without a totalitarian political environment these "threats" would be impossible to manage. It's as simple as that. Problem, reaction, solution.


Are the later generations in current overlapping generations richer and nearer to the center of control? That doesn't seem obvious to me.


Today it's the older generations that hold the wealth and power, but that may change if/when we get effective immortality. Presumably retirement cannot extend from 65 years old to infinity; long-term rates of return on assets (which ultimately is what gives older generations a leg up) will come down to near zero, forcing people to work well past age 65 and eventually indefinitely.


That if/when is a BIG if/when!


We *could* think of AI as our descendants; that sorta erases the existential risk problem.


Some imaginable AIs would be fine to have as descendants. I'm not too worried about different "styles of thinking" so much as the AI (or AIs collectively) turning the Earth and all other accessible resources into something as worthless as giant piles of paperclips. :/


In a previous post you wrote:

"Yes, in the default unaligned future, one can hope more that as improved descendants displace older minds, the older minds might at least be allowed to retire peacefully to some margin to live off their investments. But note that such hopes are far from assured if this new faster world experiences as much cultural change in a decade as we’d see in a thousand years at current rates of change."

So it sounds like here you are protective and loyal towards people you expect to kill you (assuming you can be uploaded, or preserved and later uploaded). If I believed that, I'd say, yes, develop the mind control.


Like others here I take exception to the claim that we are very different from people in Medieval times, say. Most, if not all, historians insist on exactly the opposite.


I think this is a valuable and largely correct perspective, but only if you also stress that it's very important that the AI is conscious!


To use a term you often talk about, most of us consider the human race to be sacred.


Does that mean your concern is to make sure AIs get the ability to do their own lithography and otherwise produce computer chips without our help as soon as possible?


Our basic utility function is to continue our DNA lines. Human descendants, or even non-human descendants (i.e., future evolved species), fill that function for us, whether they live on the Earth or anywhere else. AI entities do not. To the extent our prejudices cause us to favour future human/biological descendant lines over future AI lines, the instincts behind them seem to be working as they have evolved to.


Humans, among our own kind, do in fact kill or otherwise restrain those whose "styles of thinking" are sufficiently incompatible with the functioning of human civilization, through institutions like courts and prisons and militaries. In rare cases (and thankfully getting rarer) we also institutionalize against their will people who have not committed serious crimes - treatment for mental illness, for example. We typically don't do this pre-emptively; we wait until others act, because among humans 1) there's enough commonality in human "styles of thinking" that the risk of tyranny from giving anyone that power is definitely greater than the benefit gained, and 2) no individual has the power to cause so much damage so quickly that no one else has a chance to react to it. Even in the context of nuclear war, that was the point of developing second-strike capability.

I know you don't think AI is likely to change that constraint, but the possibility that it *could,* or that the impact of not implementing prior restraint through what you're calling mind control could be measured in millions to billions of lives, is exactly the set of scenarios where your argument fails.

So yes, the future will be scary and in many ways likely incomprehensible to current-me no matter what, and it will likely change faster than I can react to it, and that's fine. It's also fine if that future contains nothing that current-me would recognize as "humans." But the process for getting there matters. Whether the operation of our AI descendants even "counts" as a "style of thinking," matters. Whether the process of value drift *starts from* and *proceeds in a stepwise-reasonable manner from* what current-humans actually value, or some CEV extension of that, vs whether it begins from a much more random set of goals, matters.

I also think you're greatly overestimating the level of value variation and thinking style variation that exists among humans, neurotypical or not. In one sense if you look at history since the invention of writing there's huge variation in societal structure, law systems, economic systems, and ethical systems, but from another POV there's really only a handful of ways humans reason about what makes things good or valuable, and those are fairly stable modulo choice of metaphors and analogies for at least the past 5000 years. Probably a lot longer modulo differences in ability to value things at all, given how easy it is to make other mammals jealous, nervous, angry, content, playful, etc. and perceive them as such.


It seems to me a leap of faith to expect mankind to evolve new characteristics when not having those characteristics makes a person neither more likely to die nor less likely to reproduce successfully. I would not expect that to happen at all.

Of course, looking at today's birth rates by country I could quite well believe that evolution is going to destroy all wealthy civilizations and bring us a "third world" world. But in that case we won't be settling any other planets, will we?


AI can mimic other human styles of thinking. A new page I ran into suggests that this can impact the media in the near term, by using AI to nudge the media towards neutrality:

It can save the news industry. AI can be used as a writing partner to detect bias and help writers overcome their natural bias. The news media is in decline due to loss of trust by the public, and yet it has avoided fixing what most people view as a poor-quality product. How many industries get away with this? AI can help. The page I ran into, https://FixJournalism.com, has a good image illustrating "AI Nudged To Neutral," explores the details, and notes the absurdity of the news industry:

'A study by Gallup and the Knight Foundation found that in 2020 only 26% of Americans reported a favorable opinion of the news media, and that they were very concerned about the rising level of political bias. In the 1970s around 70% of Americans trusted the news media “a great deal” or a “fair amount”, which dropped to 34% this year, with one study reporting US trust in news media was at the bottom of the 46 countries studied. The U.S. Census Bureau estimated that newspaper publisher’s revenue fell 52% from 2002 to 2020 due to factors like the internet and dissatisfaction with the product.

A journalist explained in a Washington Post column that she stopped reading news, noting that research shows she was not alone in her choice. News media in this country is widely viewed as providing a flawed product in general. Reuters Institute reported that 42% of Americans either sometimes or often actively avoid the news, higher than 30 other countries with media that manage to better attract customers. In most industries poor consumer satisfaction leads companies to improve their products to avoid losing market share. When they do not do so quickly enough, new competitors arise to seize the market opening with better products.

An entrepreneur who was a pioneer of the early commercial internet and is now a venture capitalist, Marc Andreessen, observed that the news industry has not behaved like most rational industries: “This is precisely what the existing media industry is not doing; the product is now virtually indistinguishable by publisher, and most media companies are suffering financially in exactly the way you’d expect.” The news industry collectively has not figured out how to respond to obvious incentives to improve their products. '
