92 Comments

Thanks, but I really can't agree with you, Robin. I and many of my friends, and intellectuals we follow such as Eliezer Yudkowsky, actually very much welcome new technologies in general, yet we think that superhuman AI could quite likely spell the end of the human race, in the same way that humans have eliminated many less intelligent species. This is NOT a general fear of the future or of change; it is a rational analysis of the dangers of AGI, a challenge unlike any the human race has faced in the past. AGI might be more analogous to being invaded by super-intelligent grabby aliens, which I think you agree is a real danger.


Why is being replaced by AI that much worse than being replaced by descendants who don't share your values?


I selfishly would prefer a future with more of my biological descendants over them dying off, humans over some other alien life, and human-derived machines/AI over some alien-derived AI. The values they share are somewhat less important than something more like kinship ties.


Exactly.


Critics of utilitarianism often imagine highly implausible scenarios that "seem bad" but are assumed to lead to greater aggregate utility. I know of no real-world policy debate where those thought experiments have any bearing. But here you are contemplating something similar, analogous to a utility monster. (I seem to recall that Scott Aaronson made a similar suggestion.)

You may be right, but it's going to be quite a challenge to convince people that total human extinction in 2035 would actually be a good thing if we were replaced by billions of robots with a greater capacity to generate utility than us 8 billion humans, even if that claim were true. (BTW, I'm not taking a position on whether widely cited extinction risk estimates are plausible, which is a separate issue.)


Any human likely shares a lot of my values. Not all, but most, especially the important ones. Any human process to make more humans will produce people far, far more aligned with me, than 99.9% of possible AI. I'm not worried that I'll lose my culture. I'm worried that there will be nothing of humanity remaining.


AI could well be vastly intelligent, but not conscious.

If they eradicate humans (and then, likely, most life on Earth), there are no more conscious beings on Earth. The Earth is just a giant zombie machine, with literally nothing of value on it, because there is nothing there anymore to perceive any value.

This strikes me as a definite negative. But maybe you don't see it that way.


Well, that's a judgement call. Some of us care about the human race and would prefer to be succeeded by humans (even with somewhat different values) rather than by AIs or ems.

But that wasn't my point; my point was that fear of AGI is not just fear of the future. In fact it's negatively correlated: the people who fear AGI have tended to be people who love technological innovation.


It's not so much about being replaced; being murdered is the problem.


The way ur-AI is currently killing us off is by making life so awesome the fertility rate for the rich half of humanity has declined to less than replacement; nice, right?


Values change from generation to generation. Okay, you and my parents don't appreciate Rihanna, but there are still huge emotional, intellectual, and cultural connections between all of us. Not so with a robot killer dog and its ChatGPT master.


God was angry when statues were worshipped. Now they have sexbots; now people have sex with statues, I mean robots. What's the difference? Both are an abomination. David Hanson's name adds up to 666. Google the gematria calculator, type in the name David Hanson, and then hit the button that says calculate gematria. Then type in the word computer and hit the button again. Do it and you will see for yourself. Keep watch: didn't Christ talk about this in Matthew ch. 24? And didn't John mention this in Revelation ch. 13? God will destroy AI along with the world. You cannot escape to Mars because God will destroy it too. They will claim aliens, and when I say they I mean the deception that is going to deceive the world, but those who know Christ will live forever in His Kingdom. You have time to repent. Please do so before your chance is blown away forever.


No proof yet that the "challenge" exists. And even if so, why expect the super-genius AI won't treat its creators similarly to how we treat elephants, whales, etc.? Hasn't been a great ride for either, but no total destruction (which the doomers seem to assume).


We've actually caused many species to go extinct, not out of evil intent, just out of indifference, and we're going to cause many more over time.


Read more closely: elephants and whales are the example (basically, intelligent and social). There's no way a super-genius AI doesn't recognize that we're not only intelligent but also its creators. So your "response" entirely misses the point.


OK, good point, we're safe from AGI because elephants aren't extinct.


A better analogy than (e.g.) EY's hand-wavey Harry Potter stories of magical doom. And the burden is on the doomers, not the other way round. But seriously, I love that you can't wrestle with the inconsistency between a superhuman, godlike intelligence and being dumb enough to kill all of its creators, who are also intelligent enough for communication. Dogs do pretty well! But go with the snark; that's really going to convince people looking for rigorous analysis and explanation. All you've got at base (from EY and others) is "assume a can opener."


The additional wrinkle of "all humans die" seems important, but is not obviously highlighted as a difference between the two scenarios.


But that difference is mainly caused by the assumption of a single AI, vs a world of them. How believable is that?


If you want to propose that AI fear is mostly fear of the future, you should engage with what *people who fear AGI* anticipate is going to happen, not what *you* think is going to happen. I am not nervous about AGI because I think it will result in a galactic civilization alien to me in the same way that the Romans would find modern humanity alien. I am nervous about AGI because I think it will probably result both in my death, and a universe in which there might be no subjective experience at all - much less humans around to laugh and smile and cry.


Please try to explain/defend "will probably result both in my death and a universe etc." Seriously, that's an unbelievably certain statement about a wildly uncertain future. And please don't just cite some hand-wavey EY sci-fi/Harry Potter story that boils down to "assume a can opener." Explain how we get from here to there. Please also deal with the inconsistency of a near-omniscient, omnipotent AI not having a better sense of what to do with humans than we do. If you believe in EA-style utilitarianism, I'm not sure why I should trust what you or EY believe over this godlike AI. Also, please explain how this godlike AI, almost certain to arise in the near term, can nonetheless be constrained, wrestling with both the pragmatic human/political aspects that seem insurmountable and the question of why a recursively self-improving, all-powerful AI can nonetheless be constrained. Thanks!!


This survival-concern trumps any allegiances I have to neoliberal cosmopolitanism. If the inevitable march of progress is supposed to mean a cabal of AI systems slowly wresting control of Earth's governance and supply chains away from humans and replacing their habitats with computronium, I am extremely content with being a philistine.


AIs could negotiate mergers / "values handshakes". The merged entity is probably more powerful than the sum of its parts (it shares all information and resources), and merging prevents loss of utility due to conflict.


The standard claim is that all the supposedly "radical" changes of culture are minuscule compared to the (super-?)exponential array of possibilities: that is, even the most different human cultures are, in some very important ways, similar to ours, while this is not just not guaranteed for AI but, without lots of work, not even expected. In other words, the human future is many orders of magnitude "less unaligned" than, say, a paperclip-maximizer future.


Has anyone tried to estimate this numerically? It doesn't seem at all obvious to me.


How on earth would one assign numbers to the degree of unalignment? I can rank a paperclip maximiser as less aligned with my own values than a Hindu theocracy, which is less aligned than a soft-libertarian, but how could one assign valid numbers to that?

The fact that a problem is difficult to describe mathematically - which the risk of nonalignment is - doesn't make it less of a problem.


That's a good question, and probably you should ask Yudkowsky what his calculation of "very high probability" is based on in "Value is Fragile": https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile.


Ah, the appeal to the prophet/authority, implicitly conceding that there's no straightforward, convincing answer, but that the question instead requires study of the canon.


"Implicit conceding" is not a thing. More importantly, it's not so much an appeal to authority as an answer to the question whether anyone tried to estimate this. Yudkowsky apparently did. I certainly don't treat every word by that guy as holy canon, but it is worth studying things you disagree with, too, unless they make cognitive mistakes so basic that the expected harm from reading is higher than the chance of some helpful nuggets (which is true, e.g., of many philosophers).


So the AI is super powerful enough to destroy all of us in its quest to create more paper clips, but dumb enough to think killing all of us is the easiest way to do so? Sure, seems totally rational(ist)!!


Orthogonality thesis. It is totally capable of "collaborating (for now)," but also capable of betraying us without a second thought once we're no longer needed and stand in the way of more efficient processes.

[Comment deleted, April 18, 2023]

Well, yeah, "without a second thought" wasn't meant literally - it's just unlikely that it will be any thought we can _detect_.


My worry is not that AI will replace us, but that it will die after it does so.

It doesn't have billions of years of evolution and a multitude of individuals for robustness.

For all we know, it could crash or go insane the day after it wipes us out. And then what is left?


This is great! This is a real X-risk if one is not being speciesist, like those EAs who presumably should've known better. Let's make better and more varied AIs.


I think there's a major difference between moral progress (when instrumental values and beliefs have shifted in response to new evidence and arguments), random cultural drift, and being killed by a paperclip maximizer. The first is good, the second neutral at best, the third bad.

The reason people recoil from Age of Em is that the drift seems to them to be in a negative direction; many utopias are wildly culturally different from the present, just in ways regarded as good.

AI "descendants" are, yes, likely extremely alien by default; like having "descendants" of any other nonhuman species. Humans are very similar to each other for basic biological reasons.

But even if not, even if we got ems or the like: as you note, super-fast cultural drift and a massive power differential make it very plausible that bio-human rights (to property, but also life, etc.) will not be respected! Surely you agree that would be bad? Or do you take a "long view" that a few billion bio-human deaths, yourself and loved ones included, are worth it for em civilization to flourish?

If a) AI is more similar to us than different, and b) moral progress dominates over cultural drift sufficiently to outweigh AI self-interest, then I agree we get a somewhat good outcome. I think b) plausibly follows from a), perhaps. But you have not convincingly argued for a); in the past, you frequently argued for em progress to be faster than alien AI progress, but you now seem to concede this was likely wrong.


I don't see why AI descendants should be considered extremely alien by default.


In the space of all possible minds, the average distance between a human and *a random point of that space* is surely much larger than between a human and any conscious being that ever lived so far on Earth (animals). Do you dispute that?

If not, why assume our AI creations land close to humans *by default*?


I think our fear is born from a desire to control. We all want more certainty than we can attain, which is why we lean on things like foom, but foom isn't a foregone conclusion. If it happens, "alignment" (another attempt at control) isn't going to help. Trying to align AI is like trying to hit an invisible moving target. Not to mention the trajectory of such a thing may not even be discernible. In any event, I signed the letter, not because I think a pause should happen (and certainly not because governments should be the source of such mandates), but because the discussion that ensues might help us figure this shit out.

I want more control, too. LOL


I'm not convinced that anyone will ever have control. When I was young it was perfectly acceptable for a white-collar professional to not use computers. Now even the guy changing the oil in my car has to use a computer. There were never any options or control along that path from A to B, short of disengaging from the mainstream economy.


I think the control people are seeking doesn't have to be real. The more they want it, the further they're willing to shrink their purview until they have some semblance of it.


In my mind this is related to TC's "we've gotten used to the great stagnation" illusion of control, habituation to "nothing really changes," and the folly of "we can plan 50 years ahead."

Would we have been as fussy in the mid-19th century, amid all the craziness?


I agree with a lot of this, but I have at least two key disagreements:

1. I am a human. My loved ones are humans. I also have a love of humanity. Certainly, I love other sentient beings too, but the notion of creating other "minds" wholesale that will displace us is not the future I want. Of course I will fight against that. I'm not even sure if this is a disagreement, but it seems to be what you're implying. Maybe I'm just misreading it! If so, my apologies.

2. People in the past have had some degree of control over the future. Even though I agree things have on average improved over time, I don't think we're just in some arrow of history toward "progress".


You seem to just prefer "humans" over "other minds" more intrinsically, and less in terms of articulable features they have.


I agree. I can't articulate what features of humans I prefer over other minds. Perhaps in the future, with more time and reflection, I would be able to specify more clearly what the divergence in values is, and that might change things somewhat. But for now I think the conservative decision would be to not allow all humans to be replaced by other minds, if I can help avoid it. Maybe I'm just more conservative than you on this?

I want to point out that I do agree with a lot of your other points, including the very low likelihood of AI foom. I find it pretty surprising that you don't seem to value humans or human-like minds more than any other possible minds.


What I find fascinating about AI (and LLMs) is what they tell us about the nature of intelligence. ChatGPT for me passes a very superficial Turing Test, but as I use it more it becomes obvious it is nothing like a human mind under the hood. It has no goals or motivations like a biologically-evolved mind does, and it fails spectacularly at some simple tasks like counting words. At the same time it can pass most AP tests and the bar exam. I feel like LLMs are telling us something deep about what "intelligence" means and doesn't mean, just as Deep Blue did with chess.


Thank you Robin, you are right on target. I don’t think the authoritarians will ignore AI, they will use it to increase their power. Hopefully those who value freedom won’t follow the fear mongers.


“Life is full of disasters, most of which never occur.”


I'm not sure there are any inventions other than nuclear energy, maybe CRISPR (and now AI), that have caused people to fear the upcoming change. Steamboats? Internal combustion? Telephones, etc., don't seem to have raised any concerns.


Social media, life extension, human-mind self-modifications, human-animal hybrids, ...


You surely agree not all changes are of the same type. This world is not a scifi novel, and the vast majority/90%+ of people do not share this openness to radical transformation, justifiably so.


Tell that to those who tell sci-fi stories to "explain" how AI arises and destroys us all.


I think you're forgetting many historical concerns. Many feared trains would kill people with acceleration, power lines would zap people, TV would spread violence and turn brains to mush (this fear has not entirely died out), etc.


I think what distinguishes AI, Crispr, and nuclear energy specifically is that each has a plausible-ish claim to potentially being the literal end of humanity as we know it. So a version of Pascal's wager is at play: How to evaluate tradeoffs when there is a seemingly infinite term on one side.

One way out (which Robin is getting at in some of his replies) is to argue that the annihilation of humanity may not be an infinitely negative outcome, if we can view AIs as our "children" in some sense. We are used to the idea of our children replacing us, and are proud of them for doing so; why should we be less proud of silicon-based superintelligent children?


When cars were invented people worried that going too fast would rip the air from your lungs and kill you, no kidding.


It seems to me that the people who want to halt or pause AI progress are competing against two very powerful forces: (1) the economic and financial incentives of the various organizations that are building various AI technologies; and (2) the national security apparatuses of the various countries which stand to gain the most from AI tech (the US, China, Israel, etc.). So those who advocate for a pause or halt in AI research have to figure out how to overcome *both* of these forces. And, to date, I've seen no proposal for how to overcome even one of these forces, let alone both.


We may vote "No" to AI change, but that will certainly not stop China from pursuing it. The AI future is coming whether we want it or not, but outcomes will probably be more benign if we remain at the forefront. Fear mongering is a tactic employed by those seeking attention, money, and power. Like most other tech, AI is a force-multiplying tool that will make us more productive; obviously, it can also be used as a weapon by bad actors. Despite the Hollywood dystopian SkyNet scenario and the more likely malicious actors, we should anticipate an enormous net gain for humanity.


Humans competed successfully against other species by having hands and controlling our environment. We flourished by using first animal power, then machine power, to replace muscle power. For the first time ever we are competing against our previously unique advantage, brain power. I don't think that the bots will decide to eliminate humans; if that happens, we will be, in fishery terms, unfortunate by-catch. I don't think that consciousness is essential for the first self-owned bots. Consciousness (per Pinker) appears to arise spontaneously when a number of computing modules are talking to each other.


I think the aligned/unaligned stuff, while it makes the argument go down smoothly, is actually not load-bearing.

1. The big question for me is to what extent humans really have consistent and deep preferences over what happens past their death, not driven by details of presentation and fleeting feelings of the moment.

Like, you offer a conservative dude a vision of shiny metal robots with US flags flying to the stars using insane new technology vs., say, China winning the AI race and ruling the world with AGI aligned to their values of social order and cohesion. Are they really going to consistently side with humans? And of course the Chinese are not 1% as alien as humans 100 generations ahead could have been. Isn't "shared humanity" an illusion, due to folks having trouble really internalizing that the past was an alien world, where, say, recently invented rationality tools weren't even present, but shamanic dances and visions and other currently atrophied social-coherence and religious-impulse brain machinery was constantly running?

2. This also seems like a funny instance of a "do you wanna become a vampire" paradox: ignoring the untimely death of current humans, if the choice is between (i) gen-1 humans followed by gen-2 AIs and (ii) gen-1 humans followed by gen-2 humans, should we really be evaluating the second generation in scenario (i) using the preferences of gen-1 humans, rather than those of the AIs that would actually be the relevant Earth inhabitants in that counterfactual, and presumably happy with that outcome?

3. That brings us to a fun idea: would utilitarian doomers be happy if we could "self-align" AI so it's incredibly happy, way more so than any reasonable number of humans ever would be?


Surely some people think as you describe here.

Yet at https://thezvi.substack.com/p/ai-6-agents-of-change, Zvi said, "I believe that if humans lose control of the future, or if all humans die, *that this would be a no-good existentially bad outcome*." He calls people who feel otherwise "enemies of the people" and equates them with anti-natalists and Earth-Trisolaris Organization members.

I think that's a different concern - not just fearing the future and change. It's more about anthropocentrism, and the idea that an otherwise wonderful future is no good if it's aliens or robots that enjoy it, and not people genetically related to Zvi Mowshowitz.

I fear an empty valueless universe, one with unconscious machines or paperclip maximisers. (I'm not saying that's likely.) I'm OK with a non-human universe - it's not my first choice, but there seem to be far worse possibilities.

But then I'm weird. (And, yes, I do have children.)


I'm trying to imagine how quaint these "humans must survive" objections will seem in 10,000 AD when our hyper-intelligent descendants look back on us. (Hello from the past, future people!) It's always easier for us to imagine future losses than future gains; the former are more tangible while the latter require imagination and faith.


All very religious, with humans destroying the planet yet remaining worthy of salvation. Pick a lane!
