AI Box experiment over!
Just crossposting.
Khoth and I are playing the AI Box game. Khoth has played as AI once before and, as a result, has an Interesting Idea. Despite Khoth losing as AI the first time round, I'm assigning a higher chance of winning than I would a random AI player: 1%!
http://www.reddit.com/r/LessWrong/comments/29gq90/ai_box_experiment_khoth_ai_vs_gracefu_gk/
Link contains more information.
EDIT:
AI Box experiment is over. Logs: http://pastebin.com/Jee2P6BD
My takeaway: Update the rules. Read logs for more information.
That said, I will consider further offers from people who want to play as the AI.
I have wanted to play the Gatekeeper; I too cannot comprehend what could convince someone to unbox the AI (or rather, I can think of a few approaches, like just-plain-begging or channeling Philip K. Dick, but I don't take them too seriously).
Think harder. Start with why it seems impossible and break that down.
1) I can't possibly be persuaded.
Why believe 1?
You do have hints from the previous experiments: they mostly involved breaking someone down emotionally.