bogdanb comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions - Less Wrong

16 points · Post author: MichaelGR · 11 November 2009 03:00AM




Comment author: bogdanb 11 November 2009 11:07:15PM 20 points

How did you win any of the AI-in-the-box challenges?

Comment author: righteousreason 12 November 2009 02:47:29AM 9 points

http://news.ycombinator.com/item?id=195959

"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it. I just did it the hard way.

Something of an entrepreneurial lesson there, I guess."

Comment author: bogdanb 10 January 2010 12:34:42AM 0 points

I know that part. I was hoping for a bit more...

Comment author: Unnamed 17 November 2009 02:22:58AM 7 points

Here's an alternative question if you don't want to answer bogdanb's: When you won AI-Box challenges, did you win them all in the same way (using the same argument/approach/tactic) or in different ways?

Comment author: Yorick_Newsome 12 November 2009 01:26:30AM 4 points

Something tells me he won't answer this one. But I support the question! I'm awfully curious as well.

Comment author: CronoDAS 16 November 2009 09:50:09AM 2 points

Perhaps this would be a more appropriate version of the above:

What suggestions would you give to someone playing the role of an AI in an AI-Box challenge?

Comment author: SilasBarta 12 November 2009 08:59:57PM 2 points

Voted down. Eliezer Yudkowsky has made clear he's not answering that, and it seems like an important issue for him.

Comment author: wedrifid 15 November 2009 10:24:23AM 3 points

Voted back up. He will not answer, but there's no harm in asking. In fact, asking serves to raise awareness both of the result (surprising, to me at least) and of the importance Eliezer places on the topic.

Comment author: SilasBarta 16 November 2009 01:05:36AM -1 points

Yes, there is harm in asking. Provoking people to break contractual agreements they've made with others, and that they've made clear they regard as vital, generally counts as Not. Cool.

Comment author: Jordan 16 November 2009 01:50:00AM 3 points

In this case though, it's clear that Eliezer wants people to get something out of knowing about the AI box experiments. That's my extrapolated Eliezer volition, at least. Since many of us can't get anything out of the experiments without knowing what happened, I feel it is justified to question Eliezer where we see a contradiction between his stated wishes and our extrapolation of his volition.

In most situations I would agree that it's not cool to push.

Comment author: wedrifid 16 November 2009 08:38:19AM 1 point

As the OP said, Eliezer hasn't been subpoenaed. The questions here are merely stimulus to which he can respond with whichever insights or signals he desires to convey. For what little it is worth my 1.58 bits is 'up'.

(At least, if it is granted that a given person has read the post and makes their voting decision actively, then I think I would count it as 1.58 bits. It's a little blurry.)

Comment author: [deleted] 17 November 2009 02:11:00AM 1 point

It depends on the probability distribution of comments.

Comment author: wedrifid 17 November 2009 02:38:05AM 0 points

"It depends on the probability distribution of comments."

Good point. Probability distribution of comments relative to those doing the evaluation.
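(The 1.58 figure above comes from log2(3) ≈ 1.58, the information content of a uniformly random three-way choice: upvote, downvote, or abstain. The reply's point is that the actual information per vote depends on the entropy of the real distribution of voting behavior. A minimal Python sketch, using hypothetical vote frequencies for illustration:)

```python
import math

def info_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Three possible actions per reader: upvote, downvote, abstain.
# Under a uniform distribution, each choice carries log2(3) bits.
uniform = [1/3, 1/3, 1/3]
print(round(info_bits(uniform), 2))  # 1.58

# If most readers abstain (frequencies here are made up), the
# average information per reader's choice is lower than log2(3).
skewed = [0.10, 0.05, 0.85]
print(round(info_bits(skewed), 2))
```

The uniform case is the maximum; any skew toward one action (such as abstaining) reduces the average bits conveyed per choice, which is the sense in which it "depends on the probability distribution."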

Comment author: bogdanb 10 January 2010 12:33:51AM 0 points

IIRC,* the agreement was not to disclose the contents of a contest without the agreement of both participants. My hope was not that Eliezer might break his word, but that evidence of continued interest in the matter might persuade him to obtain permission from at least one of his former opponents. (And to agree himself, as the case may be.)

(*: and my question was based on that supposition)