I have signed up to play an AI, and having given it quite a bit of thought as a result, I think I have achieved some insight. Interestingly, one of the insights came from assuming that secrecy was a necessary condition for success. That assumption led more or less directly to an approach that I think might work. I'll let you know tomorrow.
An interesting consequence of having arrived at this insight is that even if it works, I won't be able to tell you what it is. Having been on the receiving end of such caginess, I know how annoying it is. But...
Now that's below the belt.... ;)
Really? Why? I've read Eliezer's writings extensively. I have enormous respect for him. I think he's one of the great unsung intellects of our time. And I thought that comment was well within the bounds of the rules that he himself establishes. To simply assume that Eliezer is honest would be exactly the kind of bias that this entire blog is dedicated to overturning.
Too much at stake for that sort of thing, I reckon. All it takes is a quick copy-and-paste of those lines and goodbye career.
That depends on what career you are pursuing, and how much risk you are willing to take.
Silas -- I can't discuss specifics, but I can say there were no cheap tricks involved; Eliezer and I followed the spirit as well as the letter of the experimental protocol.
AFAICT, Silas's approach is within both the spirit and the letter of the protocol.
Since I'm playing the conspiracy theorist, I have to ask: how can we know that you are telling the truth? In fact, how can we know that the person who posted this comment is the same person who participated in the experiment? How can we know that this person even exists? How do we know that Russell Wal...
Now, I think I see the answer. Basically, Eliezer_Yudkowsky doesn't really have to convince the gatekeeper to stupidly give away $X. All he has to do is convince them that "It would be a good thing if people saw that the result of this AI-Box experiment was that the human got tricked, because that would stimulate interest in {Friendliness, AGI, the Singularity}, and that interest would be a good thing."
That's a pretty compelling theory as well, though it leaves open the question of why Eliezer is wringing his hands over ethics (since there see...
With regard to the AI-box experiment: I defy the data. :-)
Your reason for the insistence on secrecy (that you had to resort to techniques you consider unethical and therefore do not want committed to the record) rings hollow. The sense of mystery that you have now built up around this anecdote is itself unethical by scientific standards. With no evidence that you won other than the test subject's statement, we cannot know that you did not simply conspire with them to make such a statement. The history of pseudo-science is lousy with hoaxe...
The conclusion was that you don't get interference regardless of what you do at the other end, because the paths are potentially distinguishable.
That's not quite true. The conclusion was that there actually is interference at the other end, but there are two interference patterns that cancel each other out and make it appear that there is no interference. You can apparently produce interference by bringing (classical) information back from one end of the experiment to the other, but you aren't really creating it; you are just "filtering out" interference that was already there.
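A minimal sketch of that cancellation, in an idealized fringe/anti-fringe picture (I'm assuming two generic "eraser" outcomes D_1 and D_2 here, not the exact geometry of any particular apparatus): conditioning the screen data on one outcome gives fringes, conditioning on the other gives the complementary anti-fringes, and the unconditioned pattern is their flat sum:

P_{D_1}(x) \propto 1 + \cos(kx)   (fringes, conditioned on outcome D_1)
P_{D_2}(x) \propto 1 - \cos(kx)   (anti-fringes, conditioned on outcome D_2)
P(x) = P_{D_1}(x) + P_{D_2}(x) \propto 2   (flat: no net interference)

Bringing the classical record of which outcome occurred back to the screen data and sorting the hits by it is the "filtering" step: it recovers fringes that were present all along in the correlations, without changing anything at the screen itself.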
> I'll let you know how it goes.
I lost. But I think I put up a good fight.