Alicorn comments on Advice for AI makers - Less Wrong

7 Post author: Stuart_Armstrong 14 January 2010 11:32AM




Comment author: Alicorn 16 January 2010 08:35:56PM 7 points

I don't think the publicly available details establish "how", merely "that".

Comment author: Technologos 16 January 2010 08:56:23PM 4 points

Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."

Comment author: RobinZ 16 January 2010 09:03:28PM 1 point

Wait, since when is Eliezer transhuman?

Comment author: Technologos 16 January 2010 09:23:54PM 6 points

Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?

Comment author: RobinZ 16 January 2010 10:19:19PM 2 points

Certainly they can; what I am emphasizing is that "transhuman" is an overly strong criterion.

Comment author: Technologos 16 January 2010 10:21:25PM 3 points

Definitely. Eliezer's success perhaps establishes an upper bound on the minimum intelligence necessary to pull that off.