gjm comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: gjm 02 June 2016 11:32:38PM -2 points

"Basilisks depend on you believing them, and knowing this, you can't believe them"

Apparently you can't, which is fair enough; but I do not think your argument would convince anyone who already believed in (say) Roko-style basilisks.

"Pascal's wager fails on many levels"

I agree.

Your argument seems rather circular to me: "this is definitely a correct disproof of the idea of basilisks, because once you read it and see that it disproves the idea of basilisks you become immune to basilisks because you no longer believe in them". Even a totally unsound anti-basilisk argument could do that. Even a perfectly sound (but difficult) anti-basilisk argument could fail to do it. I don't think anything you've said shows that the argument actually works as an argument, as opposed to as a conjuring trick.

"since you know the disproof of basilisks"

No: since I have decided that I am not willing to let the AI out of the box in the particular counterfactual blackmail situation Stuart describes here. It is not clear to me that this deals with all possible basilisks.