This page centralizes discussion for the AI Box role plays I will be doing as the AI.
The rules are as described here. In accordance with "Regardless of the result, neither party shall ever reveal anything of what goes on within the AI-Box experiment except the outcome. Exceptions to this rule may occur only with the consent of both parties," I ask that, if I break free multiple times, I be permitted to say whether I think the same or different arguments persuaded my Gatekeepers.
In the first trial, with Normal_Anomaly, the wager was 50 karma. The AI remained in the box: upvote Normal_Anomaly here, downvote lessdazed here. It was agreed to halve the wager from 50 karma to 25 due to the specific circumstances concluding the role-play, in which the outcome depended on variables that hadn't been specified, but if that sounds contemptible to you, downvote all the way to -50.
Also below are brief statements of intent by Gatekeepers not to let the AI out of the box, submitted before the role play, as well as before-and-after statements of approximately how effective they think a) a human and b) a superintelligence would be at convincing them to let it out of a box.