
christopherj comments on Shut up and do the impossible!

Post author: Eliezer_Yudkowsky 08 October 2008 09:24PM


Comment author: christopherj 23 October 2013 06:25:49PM 0 points

This is almost exactly the argument I thought of as well, although of course it means cheating by pointing out that you are in fact not a dangerous AI (and aren't in a box anyway). The key point is: "since there's a risk someone would let the AI out of the box, posing a huge existential risk, you're gambling with the fate of humanity by failing to support awareness of this risk". This naturally leads to a point you missed:

  1. Publicly suggesting that Eliezer cheated is a violation of your own argument. By weakening the fear of fallible guardians, you yourself are gambling with the fate of humanity, and that for mere pride and not even $10.

I feel compelled to point out that if Eliezer cheated in this particular fashion, it still means that he convinced his opponent that gatekeepers are fallible, which was the point of the experiment (a win via meta-rules).

Comment author: Moss_Piglet 23 October 2013 06:39:50PM 1 point

  I feel compelled to point out that if Eliezer cheated in this particular fashion, it still means that he convinced his opponent that gatekeepers are fallible, which was the point of the experiment (a win via meta-rules).

I feel like I should use this the next time I get some disconfirming data for one of my pet hypotheses.

"Sure I may have manipulated the results so that it looks like I cloned Sasquatch, but since my intent was to prove that Sasquatch could be cloned it's still honest on the meta-level!"

Both scenarios are cheating because there is a specific experiment which is supposed to test the hypothesis, and it is being faked rather than approached honestly. Begging the Question is a fallacy; you cannot support an assertion solely with your belief in the assertion.

(Not that I think Mr Yudkowsky cheated; smarter people have been convinced to do weirder things than what he claims to have convinced people to do, so it seems fairly plausible. I'm just pointing out how odd the reasoning here is.)

Comment author: robertskmiles 07 May 2014 02:45:27PM 0 points

How is this different from the point evand made above?