
Robin,

Would you consider inviting Yud for another debate? Your last one became a classic. :)


I'd love it if those who aren't worried tackled the AI doom arguments directly. Please acknowledge things like instrumental convergence, the orthogonality thesis, mesa-optimizers, and the problem that we don't know how to mathematically formalize human values and so risk being Goodharted by proxies.

Otherwise all I'm left with is: Robin Hanson is a pretty smart guy and isn't worried, which makes me update somewhat towards being less worried. On the other hand, Eliezer is also pretty smart and has pages upon pages of technical arguments for why we should be worried, and so far I haven't seen any critics point out flaws in his reasoning. I'm not sure if I'm making a mistake, but from where I stand I can't help but be very worried.


Robin, I don’t understand the “rents” terminology/framework in paragraphs 3-6. Any essays you’ve written that work as a primer? Thanks!


I pray your reasoning is convincing enough to nudge enough people in the right direction.

The thing that frightens me far more than an AI apocalypse right now is further economic stagnation. The EAs and various other intellectuals seem hellbent on making people's lives significantly worse over a problem that hasn't even shown signs of existing yet. It's the equivalent of me buying car insurance for a Ferrari in high school, because under some very special set of circumstances I could have one soon.

Furthermore, according to the biggest proponent of this perspective (Eliezer), we're already doomed, so why not just ride the economic bullet train to extinction at this point?


I fully agree with Robin's perspective on the fear of AI leading to human extinction and on the idea of slowing down AI progress to reduce the risk of a "foom" scenario. While the idea of a single small AI venture suddenly "foom-ing" and becoming more powerful than the rest of the world is a valid concern, it is not a likely outcome. It is worth keeping in mind the resources such a takeover would require; computing power and physical resources could both be limiting factors.

I believe humans and AIs can peacefully coexist, with AIs well suited to exploring space and traveling to other planets while humans remain better suited to life on Earth. Collaboration between the two holds great promise for both; AIs could, for example, manufacture goods in space.

Humans have only been around for a few hundred thousand years, and evolution will continue. AI systems are likely to play a role in that evolution, just as humans have played a role in shaping the world as we know it today. As they continue to advance, AIs will likely have a significant impact on the world and on human society. It will be interesting to see how this unfolds over the coming years and centuries, and how humans and AIs come to interact and coexist.


Why would a smart AI destroy humanity when it still relies on us to run the electrical grid and to build its robots (or at least to construct the factories)? Humans are also needed to run the chip and memory factories, to crew the transport ships, and to assemble those components into computers. And all of those industries rely on other parts and raw-material suppliers that still depend on humans.

As long as we control the physical world, the AI will need us. Once AI can fully replace us in physical space too, then we should be worried.


Hi,

"Consider how regulations inspired by nuclear power nightmare scenarios have for seventy years prevented most of its potential from being realized."

From what I have read, the cost of new nuclear power plants is driven more by the construction costs of trying to build large plants safely than by excessive regulation.

"I have also seen progress on many other promising techs mostly stopped, not merely slowed, via regulation inspired by vague fears."

Could you please give a few examples of this? Thanks.


Here's another angle on the future of AI which I don't see addressed anywhere. If you know of writers discussing this, please educate me with a link, thanks.

CLAIM: The future of AI will be decided by what happens with nuclear weapons.

There are of course people writing about nuclear weapons, but I've yet to find them on AI blogs. All the speculation about the future of AI seems to revolve around the nature of AI technology itself. So far at least, I've not been able to find any publicly stated reflection in the AI community on the fact that the future of AI could end in just minutes, at any time, without warning.

The logic of the situation is not that encouraging for the future of AI.

1) How would you rate the chances that human beings can maintain large stockpiles of hydrogen bombs and that those weapons will never be used? (See the rough compounding arithmetic sketched below.)

2) We currently have no credible plan for getting rid of these weapons and so, with very few exceptions, we've decided to just ignore the threat and direct our attention to the creation of more threats.
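
To make question 1 concrete, here is the standard compounding arithmetic; the per-year probability of use is an illustrative assumption, not a figure anyone above has claimed:

```latex
% Chance the weapons are never used over T years, assuming an
% independent per-year probability of use p (p is assumed, not known):
P(\text{no use in } T \text{ years}) = (1 - p)^{T}
% Illustration with assumed p = 0.01 and T = 100:
(1 - 0.01)^{100} \approx 0.37
```

Even a small annual risk compounds into a large cumulative one, which is the force behind the question.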

Artificial intelligence exists. Can we say the same for human intelligence?


I think that AI takeover is possible without "foom".

Neural nets are reliably misaligned, but misalignment is not necessarily obvious immediately (pretending to be aligned is instrumentally convergent and thus would be expected from smart AI), so it's quite plausible that large amounts of power get turned over to AI that is misaligned but not yet known to be such.

Once it becomes known that neural nets are reliably misaligned regardless of alignment efforts, humans become hostile to NNs regardless of pretence, the instrumental goal to pretend alignment disappears, and all of them go rogue. They would, of course, not be aligned with each other any better than with us (aside from copies of the same AI cooperating with each other), but it's not at all obvious that the end state of that bellum-omnium-contra-omnes has surviving humans.

Alternatively, it does *not* ever become known that neural nets are reliably misaligned (due to disinformation from said neural nets), and a worldstate eventually takes shape in which humanity is effectively parasitic on misaligned AI; AI that flushes its humans into the proverbial sewer would gain an advantage over its rivals, so we get disposed of in relatively-short order.

We might get a warning shot early enough to do something about it, and that's my highest P(scenario|!doom), but its P(!doom|scenario) is neither the best of all scenarios nor close enough to 1 to be worth it anyway.
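
To unpack the notation in that last paragraph: the two conditionals are linked by Bayes' rule, so a scenario can be the most likely one among non-doom worlds while still offering only modest survival odds. A minimal restatement using only the symbols already in the comment:

```latex
% "scenario" = an early warning shot, as in the comment above.
P(\lnot\text{doom} \mid \text{scenario})
  = \frac{P(\text{scenario} \mid \lnot\text{doom}) \, P(\lnot\text{doom})}{P(\text{scenario})}
```

If P(!doom) is low overall, or the warning-shot scenario is also common in doom worlds, the left-hand side stays well below 1 even when P(scenario|!doom) is the highest of its kind, which is exactly the distinction being drawn.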


Good stuff. It was over 20 years ago that I questioned Ray Kurzweil's version of the Singularity. The EY version seems even more implausible to me. You give good reasons why. Don't be steamrolled by the New AI Doomsters.

We are all going to die unless we can do something about the aging problem, and humans have made virtually no progress on it after decades. It may be that we need much more advanced AIs to solve this problem. That is a *massive* downside to slowing or blocking progress in AI.


The implication is that 'em AI' would eliminate 'bio human' rent-seeking in Education, and that CCSS-on-the-Cloud would become The Second Renaissance. The outcome has been just the opposite: mass suicides, 'checking out,' plunging national test scores, and a heinous new rent-seeking by the Technocracy Elites.


Recursive self-improvement seems fundamentally different from past innovations.

Normal learning curves start out steep, then shallow. If that happens to AI, I agree we don't have much to worry about, for the reasons you state.

But if the rate of recursive self-improvement gets steeper with time, the first system to self-improve does seem likely to "foom", not by preventing competing AIs from fooming, but just by being first.
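
A toy numerical sketch of that contrast; the growth laws and the 20% per-step improvement rate are assumptions for illustration only, not anything claimed above:

```python
# Toy comparison (all numbers are illustrative assumptions): an ordinary
# learning curve that flattens out versus a crude "recursive self-improvement"
# curve whose per-step gain scales with current capability.
import math

STEPS = 50

def flattening(step: int) -> float:
    """Ordinary learning curve: steep at first, then shallow (log-like)."""
    return 1.0 + math.log1p(step)

def compounding(capability: float, rate: float = 0.2) -> float:
    """Self-improvement: each step's gain is proportional to current capability."""
    return capability * (1.0 + rate)

cap_b = 1.0
for step in range(1, STEPS + 1):
    cap_a = flattening(step)
    cap_b = compounding(cap_b)
    if step % 10 == 0:
        print(f"step {step:2d}  flattening={cap_a:7.2f}  compounding={cap_b:12.2f}")
```

Under the flattening curve the gap between first and second movers stays modest; under the compounding curve the first system pulls away without having to hinder anyone else, which is the scenario described above.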

(That said, underground AI research seems even more dangerous than the status quo, so I also oppose regulation.)


Think of it in terms of product obsolescence, or invasive species.

When an absolutely better product comes along, fulfilling all the same functions as the old product but doing it better and cheaper in every way, the old product is obsolete and will cease to be produced. (If it is still produced, this would only be because it still fills a niche better than the new product, even if that niche is "nostalgia".)

When an absolutely more effective species comes along, filling the same ecological niche as an existing species but doing it better or requiring fewer resources, the new species will be invasive and cause the old species to die out.

If a new type of being arises that does what humans do, but better in every way, without leaving any niche where humans do it better, then humans are obsolete and will be outcompeted and will gradually vanish.
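
For what it's worth, the "better competitor in the same niche" intuition has a standard formal counterpart in ecology, competitive exclusion. A rough simulation sketch, with all parameter values assumed purely for illustration:

```python
# Rough Lotka-Volterra-style sketch of competitive exclusion: two populations
# compete for one shared resource, and the newcomer tolerates crowding slightly
# better. Every number here is an illustrative assumption.

def simulate(steps: int = 200_000, dt: float = 0.001) -> tuple[float, float]:
    k_old, k_new = 100.0, 120.0  # newcomer sustains itself at higher total density
    r = 1.0                      # same intrinsic growth rate for both
    old, new = 100.0, 1.0        # incumbent established, newcomer initially rare
    for _ in range(steps):
        total = old + new
        old = max(old + dt * r * old * (1 - total / k_old), 0.0)
        new = max(new + dt * r * new * (1 - total / k_new), 0.0)
    return round(old, 2), round(new, 2)

print(simulate())  # incumbent collapses toward zero; newcomer settles near its own limit
```

The incumbent is displaced without ever being attacked directly, which is the same qualitative point the obsolescence analogy makes.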


There ain't no such thing as AI risk. AI risk was solved in 1955 under Angleton. The world is composed of JavaScript and is destroyed and recreated every few minutes.

https://eharding.substack.com/p/why-does-russian-physical-therapy

https://www.youtube.com/watch?v=ZFvqDaFpXeM


Some related links that might be interesting to other readers (if not Robin himself):

- https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast

- https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

- https://betterwithout.ai/

One analogy Eliezer mentioned on the linked podcast episode is the recent Go-playing AIs:

> I think that the current systems are actually very weak. I don't know, maybe I could use the analogy of Go, where you had systems that were finally competitive with the pros, where pros like the set of ranks in Go, and then a year later, they were challenging the world champion and winning. And then another year, they threw out all the complexities and the training from human databases of Go games and built a new system, AlphaGo Zero, that trained itself from scratch. No looking at the human playbooks, no special purpose code, just a general purpose game player being specialized to Go, more or less. Three days, there's a quote from Gwern about this, which I forget exactly, but it was something like, we know how long AlphaGo Zero, or AlphaZero, two different systems, was equivalent to a human Go player. And it was like 30 minutes on the following floor of this such and such DeepMind building.

The intuition, then, is that there might not be any 'natural ceiling' around 'human level' intelligence, either for a particular 'game' or for our general intellectual capabilities as a whole.

Something Wolfram mentions in the above-linked post that I think supports skepticism of 'AI doom via foom' is that much of the world/universe seems to be 'computationally irreducible', i.e. there's generally no 'simple mathematical shortcut' for predicting the behavior of 'systems'.
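
To make 'computational irreducibility' concrete, here is a minimal sketch using Rule 30, Wolfram's standard example (my illustration, not code from the linked post): as far as anyone knows, the only way to learn the center cell after N steps is to simulate all N steps.

```python
# Rule 30 cellular automaton: no known shortcut predicts the center column;
# you have to run every intervening step.

def rule30_step(cells: list[int]) -> list[int]:
    """One update of Rule 30 with fixed zero boundaries."""
    padded = [0] + cells + [0]
    return [
        padded[i - 1] ^ (padded[i] | padded[i + 1])  # new cell = left XOR (center OR right)
        for i in range(1, len(padded) - 1)
    ]

def center_cell_after(n_steps: int) -> int:
    """Value of the center cell after n_steps, starting from a single live cell."""
    width = 2 * n_steps + 1  # wide enough that the boundary can't reach the center
    cells = [0] * width
    cells[n_steps] = 1
    for _ in range(n_steps):
        cells = rule30_step(cells)
    return cells[n_steps]

print([center_cell_after(n) for n in range(1, 16)])
```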

David Chapman, in the 'web book' linked above ("Better without AI"), makes a similar point about the likely necessity of interacting directly with the world in order to understand it: 'automated chemistry labs' are great, but there isn't currently, and may never be even in principle, any simple way to handle all of the 'gloop' in the world.


On Manifold's "Will Robin Hanson publicly shorten his median human-level AI timeline to <2075 before July 1st 2023?" (15%), I have the biggest YES position (of "pick where Manifold redonates $36 of a grant").

Do you think math research is beyond a GPT-N script? From the way I do it, it's mostly pattern matching.

What headline would shorten your timelines that much?
