Technologos comments on Advice for AI makers - Less Wrong

Post author: Stuart_Armstrong 14 January 2010 11:32AM


Comments (196)


Comment author: zero_call 16 January 2010 08:29:05PM 2 points

For solving the Friendly AI problem, I suggest the following constraints for your initial hardware system (a toy sketch follows the list):

1. All outside input (and input libraries) is explicitly user-selected.
2. No means for the system to take physical action (e.g., no robotic arms).
3. No means for the system to establish unexpected communication (e.g., no radio transmitters).
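
A minimal Python sketch of what these constraints amount to; BoxedAI, model_step, and echo_step are purely illustrative names, not a real library or a proposed implementation. The wrapped system sees nothing but the text its user explicitly passes in, and can emit nothing but text back:

    # Toy illustration of the three constraints above.
    class BoxedAI:
        def __init__(self, model_step):
            # model_step: a pure function (state, user_input) -> (state, reply).
            self._step = model_step
            self._state = None

        def converse(self, user_input):
            # Constraint 1: the only input channel is this explicitly
            # user-supplied string; no files, network, or devices.
            self._state, reply = self._step(self._state, user_input)
            # Constraints 2 and 3: the only output is the returned text;
            # no actuator or transmitter APIs are exposed at all.
            return reply

    # Usage with a stand-in "AI":
    def echo_step(state, text):
        return state, "echo: " + text

    box = BoxedAI(echo_step)
    print(box.converse("hello"))  # -> echo: hello

The catch, as the replies below point out, is that the remaining text channel is itself the attack surface.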

Once this closed system has reached a suitable level of intelligence, the problem of making it Friendly can be worked on much more easily and practically, and without risk of the world ending.

Setting out from the beginning to make a GAI Friendly through some other means seems rather ambitious to me. Why not just work on AI now, make sure the AI is suitably restricted as you get close to the goal, and then use the AI itself as an experimental testbed for "personality certification"?

(Can someone explain why this isn't currently espoused, or link me to an explanation?)

Comment author: Technologos 16 January 2010 08:32:26PM 8 points

This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.

Comment author: Alicorn 16 January 2010 08:35:56PM 7 points

I don't think the publicly available details establish "how", merely "that".

Comment author: Technologos 16 January 2010 08:56:23PM 4 points

Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."

Comment author: RobinZ 16 January 2010 09:03:28PM 1 point

Wait, since when is Eliezer transhuman?

Comment author: Technologos 16 January 2010 09:23:54PM 6 points

Who said he was? If Eliezer can convince somebody to let him out of the box (at a financial loss to that person, no less), then certainly a transhuman AI can, right?

Comment author: RobinZ 16 January 2010 10:19:19PM 2 points

Certainly they can; what I am emphasizing is that "transhuman" is an overly strong criterion.

Comment author: Technologos 16 January 2010 10:21:25PM 3 points

Definitely. Eliezer's success suggests an upper bound on the minimum intelligence necessary to pull that off.