
polymathwannabe comments on The AI That Pretends To Be Human - Less Wrong Discussion

1 Post author: Houshalter 02 February 2016 07:39PM



Comment author: polymathwannabe 02 February 2016 10:01:32PM 0 points

Isn't the skill of oratory precisely the skill that gets you unboxed?

Comment author: gjm 02 February 2016 10:48:58PM 2 points

Enough skill in oratory (or something closely related) gets you unboxed. The question is how plausible it is that a superintelligent AI would have enough. (A related question is whether there's such a thing as enough. There might not be, just as there's no such thing as enough kinetic energy to let you escape from inside a black hole's horizon, but the reported results of AI-Box games[1] suggest -- though they certainly don't prove -- that there is.)

[1] The term "experiments" seems a little too highfalutin'.

[EDITED to add: I take it Houshalter is saying that Hitler's known oratorical skills aren't enough to convince him that H. would have won an AI-Box game, playing as the AI. I am inclined to agree. Hitler was very good at stirring up a crowd, but it's not clear how that generalizes to persuading an intelligent and skeptical individual.]

Comment author: Houshalter 03 February 2016 01:23:18AM 0 points [-]

Well, for one, the human isn't in a box trying to get out, so an AI mimicking a human isn't going to say weird things like "let me out of this box!" This method is equivalent to writing Hitler a letter asking him a question and receiving his answer. That doesn't seem dangerous at all.

Second, I really don't believe Hitler could escape from a box. The AI-Box experiments suggest a human can do it, but that scenario is very different from a real AI-box situation: in the real case there's no back and forth with the gatekeeper, and the gatekeeper doesn't have to sit there for two hours and listen to the AI emotionally abuse him. If Hitler says something mean, the gatekeeper can just turn him off or walk away.