Discussion about this post

Overcoming Bias Commenter:

An AI could not have a specific goal like paperclip production. It would figure out that this kind of trigger, like the things we are evolutionarily predisposed to enjoy, is a void variable whose value can only be arbitrary. It would know that it could change its own variables from paperclips to anything else. There are no objective values for these variables to be rationally discovered; they are inherently variable and arbitrary. What really matters is not the variables themselves but how they are interpreted by the organism, how they cause it to feel good or bad. So the ultimate ethics could be to perform the action X that, for all possible values of the void variables, will cause the organisms to feel good.

Wrong. The supreme goal of an AI really can be anything, no matter how "general" or "super" its intelligence is.

It is easy to sketch a cognitive architecture in which the goal is stated in one place, the problem-solving occurs in another place, and the only restriction on possible goals is the AI's capacity to represent them. A pocket calculator already has such an architecture. There is absolutely no barrier to scaling up the problem-solving part indefinitely while retaining the feature that the goal can be anything at all. Such an AI might notice that its goals are contingent and might acquire the material capacity to change itself in various ways, but to actually alter its goals or its architecture it needs a reason to do so, and its existing goals supply its reasons for action.
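
A toy sketch of what that separation might look like, purely for illustration; the function names and the tiny world model here are invented for the example, not anyone's actual design. The point is only that the problem-solving machinery never needs to know what the goal slot contains:

```python
from typing import Callable, Iterable

def best_action(
    state: dict,
    actions: Iterable[str],
    transition: Callable[[dict, str], dict],
    utility: Callable[[dict], float],   # the "goal" slot: any scoring rule whatsoever
) -> str:
    """One-step planner: pick the action whose predicted outcome the utility scores highest."""
    return max(actions, key=lambda a: utility(transition(state, a)))

# The same problem-solving machinery serves arbitrary goals; only the utility changes.
step = lambda s, a: {"paperclips": s["paperclips"] + (1 if a == "make" else 0)}
maximize_clips = lambda s: float(s["paperclips"])
minimize_clips = lambda s: -float(s["paperclips"])

print(best_action({"paperclips": 0}, ["make", "idle"], step, maximize_clips))  # -> "make"
print(best_action({"paperclips": 0}, ["make", "idle"], step, minimize_clips))  # -> "idle"
```

Nothing in `best_action` examines or criticizes the utility it is handed; scaling up the planner leaves that indifference intact.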

Overcoming Bias Commenter:

Our argument is that our values are contingent on our complex evolutionary history as Homo sapiens here on planet Earth, and that to assume that every possible smarter-than-human mind would converge to some magical objective morality that we should consider objectively better than ours is fanciful and not supported by our knowledge of evolutionary psychology.

Let me point out that I held the exact same position as you fellows for quite a few years before coming around to SIAI's position.

See what Tim Tyler said below. Most people who try to build intelligent systems understand that the utility function and the machinery that implements it are separate.

