
evand comments on Shut up and do the impossible! - Less Wrong

28 Post author: Eliezer_Yudkowsky 08 October 2008 09:24PM





Comment author: evand 28 July 2012 05:30:29PM 2 points [-]

I must conclude one (or more) of a few things from this post, none of them terribly flattering.

  1. You do not actually believe this argument.
  2. You have not thought through its logical conclusions.
  3. You do not actually believe that AI risk is a real thing.
  4. You value the plus-votes (or other social status) you get from writing this post more highly than you value marginal improvements in the likelihood of the survival of humanity.

I find it rather odd to be advocating self-censorship, as it's not something I normally do. However, I think in this case it is the only ethical action that is consistent with your statement that the argument "might work", if I interpret "might work" as "might work with you as the gatekeeper". I also think that the problems here are clear enough that, for arguments along these lines, you should not settle for "might" before publicly posting the argument. That is, you should stop and think through its implications.

Comment author: robertskmiles 28 July 2012 07:19:23PM *  0 points [-]

I'm not certain that I have properly understood your post. I'm assuming that your argument is: "The argument you present is one that advocates self-censorship. However, the posting of that argument itself violates the self-censorship that the argument proposes. This is bad."

So first I'll clarify my position with regards to the things listed. I believe the argument. I expect it would work on me if I were the gatekeeper. I don't believe that my argument is the one that Eliezer actually used, because of the "no real-world material stakes" rule; I don't believe he would break the spirit of a rule he imposed on himself. At the time of posting I had not given a great deal of thought to the argument's ramifications. I believe that AI risk is very much a real thing. When I have a clever idea, I want to share it. Neither votes nor the future of humanity weighed very heavily on my decision to post.

To address your argument as I see it: I think you have a flawed implicit assumption, namely that posting my argument has an effect on AI risk comparable to that of keeping Eliezer in the box. My situation in posting the argument is not like the gatekeeper's situation in the experiment, with regard to the impact of their choice on the future of humanity. The gatekeeper is taking part in a widely publicised 'test of the boxability of AI', and has agreed to keep the chat contents secret. The test can only pass or fail; those are the gatekeeper's options. But publishing "Here is an argument that some gatekeepers may be convinced by" is quite different from allowing a public boxability test to show AIs as boxable. In fact, I think the effect on AI risk of publishing my argument is negligible or even positive, because I don't think reading my argument will persuade anyone that AIs are boxable.

People generally assess an argument's plausibility based on their own judgement. And my argument takes as a premise (or intermediate conclusion) that AIs are unboxable (see 1.3). Believing that you could reliably be persuaded that AIs are unboxable, or that a smart, rational, highly-motivated-to-scepticism person could be, comes very close to personally believing that AIs are unboxable. In other words, the only people who would find my argument persuasive (as presented in overview) are those who already believe that AIs are unboxable. The fact that Eliezer could have used my argument to make a test 'unfairly' show AIs as unboxable is actually evidence that AIs are not boxable, because such an argument is more likely to exist in a world in which AIs are unboxable than in one in which they are boxable.
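The likelihood-ratio reasoning in that last sentence is just Bayes' rule. As a sketch (the numbers here are purely illustrative assumptions, not anything from the thread):

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' rule.

    Evidence supports the hypothesis exactly when it is more
    probable in worlds where the hypothesis is true.
    """
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# Assumed numbers: a 50% prior that AIs are unboxable, and a
# persuasive 'unfair' argument existing in 80% of unboxable
# worlds but only 20% of boxable ones.
p = posterior(0.5, 0.8, 0.2)
print(round(p, 2))  # 0.8
```

On these made-up numbers, observing such an argument moves a 50% prior to an 80% posterior that AIs are unboxable; the direction of the update, not the specific values, is the point.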

P.S. I love how meta this has become.

Comment author: evand 29 July 2012 01:41:49PM 0 points [-]

Your re-statement of my position is basically accurate. (As an aside, thank you for including it: I was rather surprised how much simpler it made the process of composing a reply to not have to worry about whole classes of misunderstanding.)

I still think there's some danger in publicly posting arguments like this. Please note, for the record, that I'm not asking you to retract anything; I think retractions do more harm than good (see the Streisand effect). I just hope that this discussion will give you, or anyone reading it later, pause to consider the real-world implications. Which is not to say I think they're all negative; in fact, on further reflection, there are more positive aspects than I had originally considered.

In particular, I am concerned that there is a difference between being told "here is a potentially persuasive argument" and being on the receiving end of that argument in actual use. I believe that the former creates an "immunizing" effect. If a person who believed in boxability heard such arguments in advance, I believe it would increase their likelihood of success as a gatekeeper in the simulation. While this is not true for rational superintelligent actors, that description does not apply to humans. A highly competent AI player might use a combination of approaches that are effective when presented together, but not if the gatekeeper has previously seen and rejected them individually while failing to update on their likely effectiveness.

At present, the AI has the advantage of being the offensive player. They can prepare in a much more obvious manner, by coming up with arguments exactly like this. The defensive player has to prepare answers to unknown arguments, immunize their thought process against specific non-rational attacks, and so on. The question is: if you believe your original argument, how much help is it worth giving to potential future gatekeepers? The obvious response, of course, is that the people who make interesting gatekeepers, the ones we can learn from, are exactly the ones who won't go looking for discussions like this in the first place.

P.S. I'm also greatly enjoying the meta.