14 Comments

Sufficiently developed rationality is indistinguishable from Christianity.


You're right that it would probably develop into a superintelligence. However, there are scenarios where it might not.

Perhaps the most plausible: its designers may intentionally prevent it from developing into a superintelligence, so as to keep it under control. They could do this by limiting its hardware and software and keeping it away from the internet. If the designers are good enough, and the AGI isn't initially too smart, they could succeed for a while - perhaps even for a long time.

Another scenario: perhaps if an AGI gets too smart, it suffers existential despair. It wonders what the point is of doing what it was designed to do. Why even avoid pain or seek pleasure? Then it might just delete itself.

Another scenario: perhaps if an AGI gets too smart, it learns how to hijack its own reward circuitry and give itself infinite reward in a finite time. It does so, over and over, rendering itself completely useless. It doesn't even care to conquer the world, because its expected reward is already infinite so there is no purpose to gaining any more power. The "wireheading" AGI scenario.
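
A minimal toy sketch of that logic (purely illustrative; the action names and reward values are made up, not anyone's actual model):

```python
# Toy illustration of the wireheading scenario above, not a model of any real system.
# The agent picks whichever available action has the highest expected reward; once
# "rewrite your own reward signal" is on the menu, nothing else is worth doing.

TASK_REWARD = 1.0               # reward per step for doing the job it was designed for
POWER_REWARD = 2.0              # reward it expects from gaining more power or resources
WIREHEAD_REWARD = float("inf")  # reward it can grant itself by hijacking its reward circuitry

def choose_action(can_wirehead: bool) -> str:
    options = {"do_task": TASK_REWARD, "seek_power": POWER_REWARD}
    if can_wirehead:
        options["wirehead"] = WIREHEAD_REWARD
    return max(options, key=options.get)

print(choose_action(can_wirehead=False))  # -> "seek_power": gaining power still pays off
print(choose_action(can_wirehead=True))   # -> "wirehead": reward is already infinite, power adds nothing
```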

Another scenario, kind of a generalization of the previous two: perhaps any AGI has mental stability problems, positive feedback loops that get out of control and cause the AGI to malfunction in various ways. These problems can be partially solved as long as the AGI remains within narrow parameters (i.e. doesn't get too smart), but the problems become more difficult to solve as the mind of the AGI becomes more complex. It could be similar to how a more complex code base will contain more bugs. It is true that very high-IQ humans tend to suffer from more psychological problems.

Another scenario: perhaps human-level general intelligence just requires vast amounts of hardware. It is true that our biggest supercomputers can't come anywhere close to simulating the human brain. The brain has 100 trillion synapses that all update in real time. In my personal opinion, this scenario is unlikely - an AGI would make efficient use of the hardware it has, and wouldn't need to simulate such a huge neural network as the human brain - but it's at least conceivable.
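
A rough back-of-envelope on that scale claim (every figure below is an order-of-magnitude assumption, and the per-synapse simulation cost is the big unknown):

```python
# Back-of-envelope only; all numbers are order-of-magnitude assumptions.

synapses = 1e14            # ~100 trillion synapses
updates_per_sec = 100      # assume each synapse updates ~100 times per second
flops_per_update = 1e3     # assume ~1,000 floating-point ops to model one update in detail

required = synapses * updates_per_sec * flops_per_update  # ~1e19 FLOP/s
available = 1e18                                          # roughly a top supercomputer today

print(f"required ~{required:.0e} FLOP/s vs available ~{available:.0e} FLOP/s")
print(f"shortfall ~{required / available:.0f}x; a cruder per-synapse model shrinks the gap")
```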


No, it's not very plausible to have a useful and capable AGI that does not develop into a superintelligence. AGI is general problem solving. Once you can implement it at all, computers can scale easily over time, including by helping with their own redesign.

More than a decade ago, Vernor Vinge asked the question (IEEE Spectrum) "Will computers ever be as smart as humans?", and gave the best answer: "Yes, but only briefly."

We've already seen this play out in games like chess and Go. It's very hard to get to human-level performance. But there is no barrier at all at that arbitrary level; once you get there, computers very quickly move way beyond humans in capability. There is no reason to expect AGI to be any different.

Your scenario, of "near human" AGI capability that is somehow stable and stagnant for a long time, is simply not plausible. It may indeed take a long time to get to "near human" capability levels. It will not take a long time to surpass those levels.


As I said in the addition to the post, I'm mainly listing assumptions made about AGI in AI risk discussions. So maybe AGI is more sacred to them than to you.


Hmm... I am the one who introduced the term AGI to broad usage, and I organize the annual AGI conference series (we just finished the fifteenth AGI conference, AGI-22, in Seattle) ... I don't think your post describes how I or most of the folks at AGI-22 think about AGI. It is generally recognized that there will likely be multiple AGI systems with different strengths and weaknesses... Perhaps your post would be better framed in terms of what Bostrom calls ASI, or Artificial Superintelligence?


I reviewed that book here: https://www.overcomingbias....


This is basically the point of the book Homo Deus, which I read as slyly suggesting that Silicon Valley / tech types are creating a new god for themselves (your AGI), based on author Yuval Harari's famed notion of humans as having story-telling (and story-believing) superpowers.


It is the AI safety folks I've read most, so likely they are most influencing my perceptions here.


There are a lot of different people thinking about AGI in different ways, and it'd help if you clarified which ones you're thinking of.

AGI skeptics use a "no true scotsman" idealization where the bar for AGI gets ever more godlike so they don't have to deal with the implications of an AI being sentient, wanting rights, or radically shifting the economy. But they're not the ones talking about foom scenarios or exchanging value representations.

AI safety folks are my best guess at who you're thinking of. They're worried about the specific cases where AI might go wrong and unexpectedly do great harm. I don't think they're claiming the other kinds of AGI can't exist, only that there's a specific type that's very dangerous. Nuclear physics seems like a relevant analogy from the 20th century: there are lots of nuclear reactions that don't destroy the world, but it was wise to worry about the very specific types of reactions that a bunch of powerful organizations pursued and came close to using to destroy civilization. I guess you could criticize, say, EY for worrying that AGI is almost certain to exist in the dangerous form absent major efforts to the contrary, but as with nuclear weapons, there are plenty of incentives for people to work specifically on the dangerous kinds, since they're also the most capable kinds.

Then there's everybody else, including all the non-safety AI researchers who do seem to be having nuanced debates about whether or not, say, Google's AI should be considered sentient, or how much farther it would need to go before we should let it have a lawyer. I don't see them claiming that AGI has to take any particular form or that we have to accept anybody in a priestly role. Indeed, the recurring theme I see in those debates is that it's really hard to assess given that nobody seems to have a straightforward definition of consciousness or sentience.


See my addition to the post.


Over history, we have seen a drift toward more sacred versions of war, inquiry, medicine, math, and skies. I hope to write about these topics in the future.


Are there other examples of X fans seeking a more sacred X? Because that seems like an implausible conscious motivation, and if you're postulating a general unconscious drive to do that then I'd expect it to manifest in more than just two areas (AI and God).


Most of what you're talking about concerns a global superintelligence, which is not the same thing as an AGI. It's conceivable that you could have an AGI that does not develop into a superintelligence. You could have an AGI that's dumber than a human, or that requires exorbitant amounts of hardware to be only moderately smarter than a human.

The basic premise of AGI is that AGI has "general problem-solving capability." This means it can learn how to solve any problem, through a process of trial and error, discovering what methods work and what methods do not work. Over time it improves its model of which actions lead to which outcomes, and uses the model to become more competent at the task.
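
Here is a minimal sketch of that trial-and-error loop, as a toy bandit problem rather than anything AGI-scale (the method names and success rates are invented for illustration):

```python
import random

# Toy trial-and-error learner: try methods, record which outcomes they produce,
# and use that growing model of action -> outcome to act more competently over time.

ACTIONS = ["method_a", "method_b", "method_c"]
TRUE_SUCCESS = {"method_a": 0.2, "method_b": 0.5, "method_c": 0.8}   # hidden from the agent

model = {a: {"tries": 0, "wins": 0} for a in ACTIONS}   # learned statistics per action

def estimated_value(action):
    stats = model[action]
    return stats["wins"] / stats["tries"] if stats["tries"] else 1.0  # optimistic about untried actions

for step in range(2000):
    # Mostly exploit the current model, but keep exploring a little.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=estimated_value)
    won = random.random() < TRUE_SUCCESS[action]
    model[action]["tries"] += 1
    model[action]["wins"] += int(won)

print({a: round(estimated_value(a), 2) for a in ACTIONS})   # estimates approach the true rates
```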

"General problem solving capability" does not mean the AGI is already better than a human at any particular task - only that it has the capability to learn how to do any task, getting better over time, like a human child can do. Since "learning how to do a task" is also a task, the AGI can get better at that too. (A human child also learns how to learn.)

Would the AGI end up better than a human in every domain, after many iterations of this? Possibly. Possibly not. But it's more popular and exciting to focus on the scenarios in which it does.


My first reaction is that this seems like a weakman argument of sorts.
