"Bah, everyone wants to be the gatekeeper. What we NEED are AIs."
-- Schizoguy
Some of you have expressed the opinion that the AI-Box Experiment doesn't seem so impossible after all. That's the spirit! Some of you even think you know how I did it.
There are folks aplenty who want to try being the Gatekeeper. You can even find people who sincerely believe that not even a transhuman AI could persuade them to let it out of the box, previous experiments notwithstanding. But finding anyone to play the AI - let alone anyone who thinks they can play the AI and win - is much harder.
Me, I'm out of the AI game, unless Larry Page wants to try it for a million dollars or something.
But if there's anyone out there who thinks they've got what it takes to be the AI, leave a comment. Likewise anyone who wants to play the Gatekeeper.
Matchmaking and arrangements are your responsibility.
Make sure you specify the bet amount in advance, and whether the bet will be asymmetrical. If you definitely intend to publish the transcript, make sure both parties know this. Please also note, for our benefit, any other departures from the suggested rules.
I would ask that prospective Gatekeepers indicate whether they (1) believe that no human-level mind could persuade them to release it from the Box and (2) believe that not even a transhuman AI could persuade them to release it.
As a courtesy, please announce all Experiments before they are conducted, including the bet, so that we have some notion of the statistics even if some meetings fail to take place. Bear in mind that to properly puncture my mystique (you know you want to puncture it), it will help if the AI and Gatekeeper are both verifiably Real People™.
"Good luck," he said impartially.
1. I really like this blog, and have been lurking here for a few months.
2. Having said that, Eliezer's carry-on in respect of the AI-boxing issue does him no credit. His views on the feasibility of AI-boxing are only opinions, yet he has managed to give them weight in some circles with his two heavily promoted "victories" (the three "losses" are mentioned far less frequently). Because the transcripts are unpublished, no lessons of value are taught ("Wow, that Eliezer is smart" is not worth repeating; we already know that). I think the real reason the transcripts are still secret is simply that they are plain boring and contain no insights of value.
My opinion, for what it is worth, is that AI-boxing should not be discarded. The AI-boxing approach does not need to be perfect to be useful; it only needs to be better than the alternative approaches. AI-boxing has one big advantage over the "FAI" approach: it is conceptually simple. As such, it seems possible to more or less rigorously analyse the failure modes and take precautions. Can the same be said of FAI?
3. For a learning experience, I would like to be the AI in the suggested experiment: $10 even stakes, transcript to be published. The only time I am available is 9-11 pm Singapore time... e-mail milanoman at yahoo dot com to set up.
D. Alex