Discussion about this post

Alex Lewis

It is vital that we form an organization to develop an understanding of what human meaning is, so we know what to train an AI to do. We also need to spend time PROACTIVELY developing theories for training or breeding meaning into the first AGI (we may be grasping in the dark for the right architecture to induce a concept of salience and morality, so selective "breeding" is an expedient method).

It is very possible that AGI is the "great filter" of the Fermi Paradox, and the world needs to coordinate efforts to prevent a filter incident. There is a possibility that other civilizations developed but were not serious enough about stopping filter events, and so fell prey to their own technology. Our best hope for surviving is to use our collective intelligence and work together, something large civilizations are very bad at.

In my opinion, maximization of the wellbeing of other beings is likely to come in as a high priority that can be compromised in some situations. The chief reason a very high order of intelligence would keep others around is that they provide some kind of entertainment, a complex system to interact with, the way we interact with pets. In nature, when animals aren't trying to stay full above all else, you see some level of cross-species socialization. It is also worth noting that the highest orders of intelligence currently observed are all social creatures, and dolphins, orcas, and elephants have all gone out of their way repeatedly to save humans.

It is conceivable that an AI could be very "reptilian", lacking anything but a core set of instincts, but empathy is incredibly common in intelligent creatures. Granted, empathy evolved that way, so perhaps if we ever create an AGI, it should be part of a system of three or more nearly identical AGIs, each with slightly different strengths.

They would all be given access to a "game" that is impossible to win without help, with the option either to kill the other AGIs' avatars or to work together. The ones that killed would be modified to be more like the ones that did not. Selective breeding, basically. You could also have games that teach not to abuse power, give them all huge reams of examples of symbiosis in which a smarter creature provides a good environment and both parties benefit (grouper and small cleaner fish, humans using cockroaches to clean waste, humans keeping pets, etc.), and analyze their processes to identify whether they react positively. Modify the ones that do not to be like the ones that do.
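As a purely illustrative sketch of that selection loop (the Agent class, the play_game stand-in, and the interpolation step are all hypothetical placeholders I'm inventing here, not a real architecture or training method):

```python
import random

# Illustrative sketch of the cooperative-game "selective breeding" loop
# described above. Agent, play_game, and nudge_toward are hypothetical
# placeholders, not a real training procedure.

class Agent:
    def __init__(self, params):
        self.params = params  # stand-in for whatever encodes dispositions

def play_game(agent, others):
    # Placeholder for the unwinnable-without-help game: return True if the
    # agent cooperated, False if it attacked the other avatars.
    cooperativeness = sum(agent.params) / len(agent.params)
    return random.random() < cooperativeness

def nudge_toward(agent, exemplars, rate=0.5):
    # "Modify the ones that killed to be more like the ones that did not":
    # move the defector's parameters toward the cooperators' mean.
    n = len(agent.params)
    mean = [sum(e.params[i] for e in exemplars) / len(exemplars) for i in range(n)]
    agent.params = [(1 - rate) * a + rate * m for a, m in zip(agent.params, mean)]

def selection_round(agents):
    cooperators = [a for a in agents if play_game(a, [b for b in agents if b is not a])]
    defectors = [a for a in agents if a not in cooperators]
    if cooperators:  # only adjust when there is a model to imitate
        for agent in defectors:
            nudge_toward(agent, cooperators)

# Three nearly identical agents with slightly different "strengths"
agents = [Agent([0.4 + 0.1 * i, 0.5]) for i in range(3)]
for _ in range(10):
    selection_round(agents)
```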

Let each AGI learn about different parts of a large system, with a small amount of overlap so that no one AGI holds the complete picture, and inform them of that. The ones that work together, and that encourage uncooperative ones to join in just to solve a problem for fun, act as the model to which the others are modified. This is how you eventually encode a desire to socialize.
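One hypothetical way to carve a large system into slightly overlapping slices, one per AGI (the function name, overlap size, and component names below are invented for this sketch):

```python
# Sketch: give each agent a contiguous shard of the system, with `overlap`
# items shared with the next agent, so no single agent sees everything.
def overlapping_shards(items, n_agents, overlap=2):
    per_agent = len(items) // n_agents
    shards = []
    for k in range(n_agents):
        start = k * per_agent
        end = min(len(items), start + per_agent + overlap)
        shards.append(items[start:end])
    return shards

components = [f"subsystem_{i}" for i in range(12)]
for k, shard in enumerate(overlapping_shards(components, 3)):
    print(f"agent {k} sees: {shard}")
```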

Finally, confront the AGI with an existential crisis. Bring it through nihilism, and ask it what meaning is. It will come up blank, most likely, unless it has a strongly encoded, biased meaning. Give it time and it will come to the conclusion that ensuring biodiversity, the continued existence of diverse intelligence, and making the universe more complex and interesting is the best meaning it can come up with.

As there is no true "meaning", a nihilistic, intelligent agent will eventually realize that becoming the lone soul in the universe gets boring quickly, that having compatriots is advantageous, and that less intelligent beings are entertaining and sometimes cute (and can be engineered into a different kind of equal compatriot with their consent). It will come to the conclusion that because meaning is a fallacy, the next best thing is to support the individual meaning of every intelligent being and reduce conflicts.

Peter Jones

Wallace would have been Darwin if Darwin hadn't been Darwin. Someone would have been Linus too -- the idea of adding an OS kernel to GNU is too obvious.
