> I'll let you know how it goes.

I lost. But I think I put up a good fight.

I have signed up to play an AI, and having given it quite a bit of thought as a result I think I have achieved some insight. Interestingly, one of the insights came as a result of assuming that secrecy was a necessary condition for success. That assumption led more or less directly to an approach that I think might work. I'll let you know tomorrow.

An interesting consequence of having arrived at this insight is that even if it works I won't be able to tell you what it is. Having been on the receiving end of such caginess I know how annoying it is. But I can tell you this: the insight has a property similar to a Gödel sentence or the Epimenides sentence. This insight (if indeed it works) undermines itself by being communicated. If I tell you what it is, you can correctly respond, "That will never work." And you will indeed be correct. Nonetheless, I think it has a good shot at working.

(I don't know if my insight is the same as Eliezer's, but it seems to share another interesting property: it will not be easy to put it into practice. It's not just a "trick." It will be difficult.)

I'll let you know how it goes.

> Now that's below the belt.... ;)

Really? Why? I've read Eliezer's writings extensively. I have enormous respect for him. I think he's one of the great unsung intellects of our time. And I thought that comment was well within the bounds of the rules that he himself establishes. To simply assume that Eliezer is honest would be exactly the kind of bias that this entire blog is dedicated to overturning.

> Too much at stake for that sort of thing I reckon. All it takes is a quick copy and paste of those lines and goodbye career.

That depends on what career you are pursuing, and how much risk you are willing to take.

> Silas -- I can't discuss specifics, but I can say there were no cheap tricks involved; Eliezer and I followed the spirit as well as the letter of the experimental protocol.

AFAICT, Silas's approach is within both the spirit and the letter of the protocol.

Since I'm playing the conspiracy theorist I have to ask: how can we know that you are telling the truth? In fact, how can we know that the person who posted this comment is the same person who participated in the experiment? How can we know that this person even exists? How do we know that Russell Wallace is not a persona created by Eliezer Yudkowsky?

Conspiracy theories thrive even in the face of published data. There is no way that a secret dataset can withstand one.

> Now, I think I see the answer. Basically, Eliezer_Yudkowsky doesn't really have to convince the gatekeeper to stupidly give away $X. All he has to do is convince them that "It would be a good thing if people saw that the result of this AI-Box experiment was that the human got tricked, because that would stimulate interest in {Friendliness, AGI, the Singularity}, and that interest would be a good thing."

That's a pretty compelling theory as well, though it leaves open the question of why Eliezer is wringing his hands over ethics (since there seems to me to be nothing unethical about this approach). There seem to me to be two possibilities: either this is not how Eliezer actually did it (assuming he really did do it, which is far from clear), or it is how he did it and all the hand-wringing is just part of the act.

Gotta hand it to him, though, it's a pretty clever way to draw attention to your cause.

> There's a reason that secret experimental protocols are anathema to science.

My bad. I should have said: there's a reason that keeping experimental data secret is anathema to science. The protocol in this case is manifestly not secret.

With regard to the AI-box experiment: I defy the data. :-)

Your reason for the insistence on secrecy (that you have to resort to techniques that you consider unethical and therefore do not want to have committed to the record) rings hollow. The sense of mystery that you have now built up around this anecdote is itself unethical by scientific standards. With no evidence that you won other than the test subject's statement we cannot know that you did not simply conspire with them to make such a statement. The history of pseudo-science is lousy with hoaxes.

In other words, if I were playing the game, I would say to the test subject:

"Look, we both know this is fake. I've just sent you $500 via paypal. If you say you let me out I'll send you another $500."

From a strictly Bayesian point of view that seems to me to be the overwhelmingly more probable explanation.
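To spell out the Bayesian bookkeeping I have in mind (a rough sketch; the hypothesis labels are mine, for illustration): the only evidence on offer is the gatekeeper's public statement, so the posterior odds between "collusion" and "genuine persuasion" are

$$ \frac{P(\mathrm{collusion} \mid \mathrm{statement})}{P(\mathrm{persuasion} \mid \mathrm{statement})} = \frac{P(\mathrm{statement} \mid \mathrm{collusion})}{P(\mathrm{statement} \mid \mathrm{persuasion})} \cdot \frac{P(\mathrm{collusion})}{P(\mathrm{persuasion})} $$

Both likelihoods are close to 1 (a bribed gatekeeper and a genuinely persuaded gatekeeper would issue the same one-line statement), so the likelihood ratio is roughly 1 and the posterior odds collapse to the prior odds. The statement itself barely discriminates between the two hypotheses; you are left with whatever you thought beforehand about how easy it is to bribe someone versus talk them out of $X.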

There's a reason that secret experimental protocols are anathema to science.

> The conclusion was that you don't get interference regardless of what you do at the other end, because the paths are potentially distinguishable.

That's not quite true. The conclusion was that there actually is interference at the other end, but there are two interference patterns that cancel each other out and make it appear that there is no interference. You can apparently produce interference by bringing (classical) information back from one end of the experiment to the other, but you aren't really creating it; you are just "filtering out" interference that was already there.
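A rough sketch of the arithmetic behind that (standard quantum-eraser bookkeeping; the exact form of the fringes depends on the particular setup): if you condition the pattern at one end on the two possible measurement outcomes at the other end, you get two anti-phased fringe patterns, roughly

$$ P_{+}(x) \propto 1 + \cos(kx), \qquad P_{-}(x) \propto 1 - \cos(kx), $$

and the unconditioned pattern is their sum, $P_{+}(x) + P_{-}(x) \propto 2$, which is flat. The interference is present in the conditional distributions the whole time; bringing the classical outcome information back just lets you sort the detections into the two patterns instead of looking at their washed-out sum.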