MixedNuts comments on I attempted the AI Box Experiment (and lost) - Less Wrong Discussion
First argument looks perfectly within the rules to me.
Second argument is against the rules.
Tuxedage and I interpreted this to mean that the AI party couldn't offer things, but could point out real-world consequences beyond their control. Some people on #lesswrong disagreed with the second part.
I agree with Tuxedage and you about emotional hacks.
I interpreted it the same way as #lesswrong. Has anyone tried asking him? He's pretty forthcoming regarding the rules, since they make the success more impressive.
EDIT: I'm having trouble thinking of an emotional attack that could get an AI out of a box in a short time, especially since both the guard and the AI are assumed personas.