JulianMorrison comments on Fake Utility Functions - Less Wrong

22 Post author: Eliezer_Yudkowsky 06 December 2007 04:55PM


Comments (54)

Comment author: JulianMorrison 07 December 2007 10:25:51AM 0 points

I'm not sure that friendly AI even makes conceptual sense. I think of it as the "genie to an ant" problem. An ant has the ability to give you commands, and by your basic nature you must obey the letter of each command. How can the ant tie you up in fail-safes so you can't seize on an excuse to stomp him, burn him with a magnifying glass, feed him poison, etc.? (NB: said fail-safes must be conceivable to an ant!) It's impossible. Even general benevolence doesn't help - you might decide to feed him to a starving bird.

Comment author: JulianMorrison 29 August 2012 09:41:12PM 0 points

(BTW, this is an outdated opinion and I no longer think this.)