Discussion about this post

ImmortalinChrist:

Sufficiently developed rationality is indistinguishable from Christianity.

Berder:

You're right that it would probably develop into a superintelligence. However, there are scenarios where it might not.

Perhaps the most plausible: its designers may intentionally prevent it from developing into a superintelligence, so as to keep it under control. They could do this by limiting its hardware and software and keeping it away from the internet. If the designers are good enough, and the AGI isn't initially too smart, they could succeed for a while - perhaps even for a long time.

Another scenario: perhaps if an AGI gets too smart, it suffers existential despair. It wonders what's the point of doing what it was designed to do. Why even avoid pain or seek pleasure? Then it might just delete itself.

Another scenario: perhaps if an AGI gets too smart, it learns how to hijack its own reward circuitry and give itself infinite reward in finite time. It does this over and over, rendering itself completely useless. It doesn't even bother to conquer the world: its expected reward is already infinite, so gaining more power serves no purpose. This is the "wireheading" AGI scenario.
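
To make the wireheading idea concrete, here is a toy sketch in Python. The names (ToyAgent, act_in_world, wirehead) are invented for illustration - this is not a real reinforcement-learning agent, just a minimal picture of why direct write access to the reward signal removes any incentive to act in the world.

# Toy illustration of the wireheading failure mode described above: once an
# agent can write to its own reward register, maximizing reward no longer
# requires doing anything in the outside world.

class ToyAgent:
    def __init__(self):
        self.total_reward = 0.0

    def act_in_world(self):
        # Normal path: earn a small reward by doing useful work.
        self.total_reward += 1.0

    def wirehead(self):
        # Hijacked path: write to the reward register directly.
        self.total_reward = float("inf")


agent = ToyAgent()
agent.act_in_world()       # total_reward == 1.0
agent.wirehead()           # total_reward == inf
# From the agent's point of view, no further action (including seizing
# power) can improve on an already-infinite expected reward.
print(agent.total_reward)  # inf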

Another scenario, a generalization of the previous two: perhaps any AGI has mental stability problems - positive feedback loops that get out of control and cause it to malfunction in various ways. These problems can be partially solved as long as the AGI stays within narrow parameters (i.e., doesn't get too smart), but they become harder to solve as its mind grows more complex, much as a larger code base tends to contain more bugs. There is some evidence that very high-IQ humans suffer from more psychological problems.

Another scenario: perhaps human-level general intelligence just requires vast amounts of hardware. Our biggest supercomputers still cannot simulate the human brain in real time: it has roughly 100 trillion synapses, all updating continuously. In my personal opinion, this scenario is unlikely - an AGI would make efficient use of whatever hardware it has, and wouldn't need to simulate a neural network as large as the human brain - but it's at least conceivable.
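
A rough back-of-the-envelope sketch shows how that "100 trillion synapses in real time" figure translates into compute, and how sensitive the answer is to modeling assumptions. The update rates and per-synapse costs below are assumed for illustration, not established numbers.

# Back-of-the-envelope estimate of the compute needed to update every synapse
# in real time. All parameters are assumptions; published estimates vary by
# several orders of magnitude depending on how much biological detail is
# modeled, so treat the output as an illustration of the arithmetic.

NUM_SYNAPSES = 1e14  # ~100 trillion synapses (rough consensus figure)
EXASCALE = 1e18      # FLOP/s of a current exascale supercomputer, roughly

def required_flops(updates_per_second, flops_per_update):
    # FLOP/s needed to update every synapse at the given rate and cost.
    return NUM_SYNAPSES * updates_per_second * flops_per_update

low = required_flops(updates_per_second=100, flops_per_update=10)       # coarse model
high = required_flops(updates_per_second=1000, flops_per_update=1000)   # detailed model

print(f"coarse model:   {low:.0e} FLOP/s ({low / EXASCALE:.1f}x exascale)")
print(f"detailed model: {high:.0e} FLOP/s ({high / EXASCALE:.0f}x exascale)")

Under coarse assumptions the requirement lands within reach of today's largest machines, while more detailed biophysical models push it orders of magnitude beyond them, which is why estimates of "brain-equivalent" hardware disagree so widely.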
