PhilGoetz comments on Discussion: Yudkowsky's actual accomplishments besides divulgation - Less Wrong

Post author: Raw_Power 25 June 2011 11:02PM




Comment author: PhilGoetz 28 June 2011 03:41:37PM 8 points

I remember when Eliezer told people about the AI-box experiments he had not yet performed, and I predicted, with high confidence, that people would not "let him out of the box" and give him money; and I was wrong.

I still wonder if the conversations went something like this:

"If we say you let me out of the box, then people will take the risk of AI more seriously, possibly saving the world."

"Oh. Okay, then."

Eliezer said that no such trickery was involved. But he would say that in either case.

Comment author: Normal_Anomaly 30 June 2011 07:18:25PM 1 point

I wouldn't be persuaded to "let the AI out" by that argument. In fact, even after reading about the AI box experiments, I still can't imagine any argument that would convince me to let the AI out. As somebody not affiliated with SIAI at all, I think my somehow being persuaded would count as stronger evidence than, for instance, Carl Shulman's being persuaded. Unfortunately, because I'm not affiliated with the AI research community in general, I'm presumably not qualified to participate in an AI-box experiment.

Comment author: XiXiDu 07 July 2011 07:19:16PM 5 points

I wouldn't be persuaded to "let the AI out" by that argument. In fact, even after reading about the AI box experiments I still can't imagine any argument that would convince me to let the AI out.

For some time now I have suspected that the argument that convinced Carl Shulman and others was along the lines of acausal trade. See here, here, and here. Consequently, I suspect that those who didn't let the AI out of the box either didn't understand the implications, didn't have enough trust in the foundations and actuality of acausal trade, or were more like General Thud.

Comment author: PhilGoetz 11 July 2011 12:21:23AM 1 point

When Eliezer was running the experiments, the primary qualification was being willing to put up enough money to get him to do it. (I'm not criticizing him for this - it was a clever and interesting fundraising technique, and doing it for small sums would set a bad precedent.)

Comment author: timtyler 28 June 2011 07:22:47PM 1 point

I still wonder if the conversations went something like this:

"If we say you let me out of the box, then people will take the risk of AI more seriously, possibly saving the world."

"Oh. Okay, then."

If he had said that to me, I would have asked what evidence there was that his putting the fear of machines into people would actually help anyone - except for him and possibly the members of his proposed "Fellowship of the AI".