JulianMorrison comments on Fake Utility Functions - Less Wrong
I'm not sure that friendly AI even makes conceptual sense. I think of it as the "genie-to-an-ant problem". An ant has the ability to give you commands, and by your basic nature you must obey the letter of each command. How can the ant tie you up in fail-safes so that you can't find an excuse to stomp on him, burn him with a magnifying glass, feed him poison, etc.? (NB: said fail-safes must be conceivable to an ant!) It's impossible. Even general benevolence doesn't help: you might decide to feed him to a starving bird.
(BTW, this is an outdated opinion and I no longer think this.)