Thanks, but I really can't agree with you, Robin. Many of my friends and I, along with intellectuals we follow such as Eliezer Yudkowsky, are actually people who very much welcome new technologies in general, yet we think that superhuman AI could quite likely spell the end of the human race, in the same way that humans have eliminated many less intelligent species. This is NOT a general fear of the future or of change; it is a rational analysis of the dangers of AGI, a challenge unlike any the human race has faced in the past. AGI might be more analogous to being invaded by super-intelligent grabby aliens, which I think you agree is a real danger.

The additional wrinkle of "all humans die" seems important, but is not obviously highlighted as a difference between the two scenarios.

The standard claim is that all the supposedly "radical" changes of culture are minuscule compared to the (super-?)exponential array of possibilities: that is, even the most different human cultures are, in some very important ways, similar to ours, whereas for AI this is not just unguaranteed but, without a great deal of work, not even expected. In other words, the human future is many orders of magnitude "less unaligned" than, say, a paperclip-maximizer future.

My worry is not that AI will replace us, but that it will die after it does so.

It doesn't have billions of years of evolution and a multitude of individuals for robustness.

For all we know, it could crash or go insane the day after it wipes us out. And then what is left?

I think there's a major difference between moral progress (when instrumental values and beliefs have shifted in response to new evidence and arguments), random cultural drift, and being killed by a paperclip maximizer. The first is good, the second neutral at best, the third bad.

The reason people recoil from Age of Em is that the drift seems to them to be in a negative direction; many utopias are wildly culturally different from the present, just in ways regarded as good.

AI "descendents" are, yes, likely extremely alien by default; like having "descendents" of any other nonhuman species. Humans are very similar for basic biological reasons.

But even if not, even if we got ems or the like - as you note, super-fast cultural drift and a massive power differential make it very plausible that bio-human rights (to property, but also to life, etc.) will not be respected! Surely you agree that would be bad? Or do you take a "long view" on which a few billion bio-human deaths, yourself and loved ones included, are worth it for em civilization to flourish?

If a) AI is more similar to us than different, and b) moral progress dominates cultural drift enough to outweigh AI self-interest, then I agree we get a somewhat good outcome. I think b) plausibly follows from a), perhaps. But you have not convincingly argued for a); in the past, you frequently argued that em progress would be faster than alien AI progress, but you now seem to concede this was likely wrong.

I think our fear is born of a desire to control. We all want more certainty than we can attain, which is why we lean on things like foom, but foom isn't a foregone conclusion. And if it happens, "alignment" (another attempt at control) isn't going to help. Trying to align AI is like trying to hit an invisible moving target. Not to mention that the trajectory of such a thing may not even be discernible. In any event, I signed the letter, not because I think a pause should happen (and certainly not that governments should be the source of such mandates), but because the discussion that ensues might help us figure this shit out.

I want more control, too. LOL

I agree with a lot of this, but I have at least two key disagreements:

1. I am a human. My loved ones are humans. I also have a love of humanity. Certainly, I love other sentient beings too, but the notion of creating other "minds" wholesale that will displace us is not the future I want. Of course I will fight against that. I'm not even sure if this is a disagreement, but it seems to be what you're implying. Maybe I'm just misreading it! If so, my apologies.

2. People in the past have had some degree of control over the future. Even though I agree things have on average improved over time, I don't think we're just in some arrow of history toward "progress".

What I find fascinating about AI (and LLMs) is what they tell us about the nature of intelligence. ChatGPT, for me, passes a very superficial Turing Test, but as I use it more it becomes obvious it is nothing like a human mind under the hood. It has no goals or motivations like a biologically evolved mind does, and it fails spectacularly at some simple tasks, like counting words. At the same time, it can pass most AP tests and the bar exam. I feel like LLMs are telling us something deep about what "intelligence" means and doesn't mean, just as Deep Blue did with chess.

Thank you Robin, you are right on target. I don’t think the authoritarians will ignore AI, they will use it to increase their power. Hopefully those who value freedom won’t follow the fear mongers.

“Life is full of disasters, most of which never occur.”

I'm not sure there are any inventions other than nuclear energy, maybe CRISPR (and now AI), that have caused people to fear the upcoming change. Steamboats? Internal combustion? Telephones and the like don't seem to have raised any concerns.

It seems to me that the people who want to halt or pause AI progress are competing against two very powerful forces: (1) the economic and financial incentives of the various organizations that are building various AI technologies; and (2) the national security apparatuses of the various countries which stand to gain the most from AI tech (US, China, Israel, etc.). So those who advocate for a pause or halt in AI research have to figure out how to overcome *both* of these forces. And, to date, I've seen no proposal for how to overcome even one of these forces, let alone both.

We may vote "No" to AI change, but that will certainly not stop China from pressing ahead. The AI future is coming whether we want it or not, but outcomes will probably be more benign if we remain at the forefront. Fear-mongering is a tactic employed by those seeking attention, money, and power. Like most other tech, AI is a force-multiplying tool that will make us more productive; obviously, it can also be used as a weapon by bad actors. Despite the Hollywood dystopian SkyNet scenario and the more likely malicious actors, we should anticipate an enormous net gain for humanity.

Humans competed successfully against other species by having hands and controlling our environment. We flourished by using first animal power, then machine power, to replace muscle power. For the first time ever, we are competing against our previously unique advantage: brain power. I don't think the bots will decide to eliminate humans; if that happens, we will be, in fisheries terms, unfortunate by-catch. I don't think consciousness is essential for the first self-owned bots. Consciousness (Pinker) appears to arise spontaneously when a number of computing modules are talking to each other.

I think the aligned/unaligned framing, while it makes the argument go down smoothly, is actually not load-bearing.

1. The big question for me is to what extent humans really have consistent and deep preferences over what happens after their death, preferences not driven by details of presentation and fleeting feelings of the moment.

Like, you offer a conservative dude a vision of shiny metal robots with US flags flying to the stars using insane new technology, versus, say, China winning the AI race and ruling the world with AGI aligned to their values of social order and cohesion. Is he really going to consistently side with the humans? And of course the Chinese are not 1% as alien as humans 100 generations ahead could be. Isn't "shared humanity" an illusion, born of our trouble really internalizing that the past was an alien world where, say, recently invented rationality tools weren't even present, while shamanic dances, visions, and other now-atrophied social-coherence and religious-impulse brain machinery ran constantly?

2. This also seems like a funny instance of the "do you want to become a vampire?" paradox. Setting aside the untimely deaths of current humans, if the choice is between (i) gen-1 humans followed by gen-2 AIs and (ii) gen-1 humans followed by gen-2 humans, should we really be evaluating the second generation in scenario (i) using the preferences of gen-1 humans, rather than those of the AIs who would actually be the relevant inhabitants of Earth in that counterfactual - and who would presumably be happy with that outcome?

3. That brings us to a fun idea: would utilitarian doomers be happy if we could "self-align" AI so that it's incredibly happy - far happier than any reasonable number of humans would ever be?

Surely some people think as you describe here.

Yet at https://thezvi.substack.com/p/ai-6-agents-of-change, Zvi said, "I believe that if humans lose control of the future, or if all humans die, *that this would be a no-good existentially bad outcome*." He calls people who feel otherwise "enemies of the people" and equates them with anti-natalists and Earth-Trisolaris Organization members.

I think that's a different concern - not just fearing the future and change. It's more about anthropocentrism, and the idea that an otherwise wonderful future is no good if it's aliens or robots that enjoy it, and not people genetically related to Zvi Mowshowitz.

I fear an empty valueless universe, one with unconscious machines or paperclip maximisers. (I'm not saying that's likely.) I'm OK with a non-human universe - it's not my first choice, but there seem to be far worse possibilities.

But then I'm weird. (And, yes, I do have children.)
