Followup to: Nonperson Predicates, Possibility and Could-ness
"All our ships are sentient. You could certainly try telling a ship what to do… but I don't think you'd get very far."
"Your ships think they're sentient!" Hamin chuckled.
"A common delusion shared by some of our human citizens."
— Player of Games, Iain M. Banks
Yesterday, I suggested that, when an AI is trying to build a model of an environment that includes human beings, we want to avoid the AI constructing detailed models that are themselves people. And that, to this end, we would like to know what is or isn't a person – or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.
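To make that one-sidedness concrete, here is a minimal sketch in Python. The function name and the toy check are hypothetical illustrations of the contract, not a proposed implementation of a real nonperson predicate:

    # Sketch of the one-sided contract described above; the check
    # here is a toy stand-in, not a real test for personhood.
    def nonperson_predicate(model) -> int:
        """Return 0 only for definite nonpersons.

        Soundness: must never return 0 for a person.
        Completeness is not required: a 1 means only
        "possibly a person", so false alarms are allowed.
        """
        # A model this structurally trivial clearly isn't a person.
        if isinstance(model, (int, float, str)):
            return 0  # definite nonperson
        return 1      # possibly a person; do not instantiate

The point is the asymmetry: a 0 is a guarantee, while a 1 is merely a shrug.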
And as long as you're going to solve that problem anyway, why not apply the same knowledge to create a Very Powerful Optimization Process which is also definitely not a person?
How do you know? Have you solved the sacred mysteries of consciousness and existence?
"Um – okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself -"
Löb's Sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model. Does that make it sentient?
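(For a concrete, unmysterious example of a perfect self-model, consider a quine: a program whose text contains an exact recipe for itself, recipe-for-the-recipe included. A standard two-line Python quine:)

    # The two lines below, when run, print exactly themselves
    # (not counting these comments): a complete self-recipe.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Nobody supposes the quine is sentient for having a perfect self-description.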
"Putting that aside – to create a powerful AI and make it not sentient – I mean, why would you want to?"
Several reasons. Picking the simplest to explain first – I'm not ready to be a father.