gjm comments on A few misconceptions surrounding Roko's basilisk - Less Wrong Discussion

39 Post author: RobbBB 05 October 2015 09:23PM

Comment author: gjm 10 October 2015 09:42:33PM 0 points

Roko believed that the probability was much higher

All I know about what Roko believed about the probability is that (1) he used the word "might," just as I did, and (2) he wrote "And even if you only think that the probability of this happening is 1%, ...", suggesting that (a) he himself probably thought it was higher and (b) he thought it was somewhat reasonable to estimate it at 1%. So I'm standing by my "might" and robustly deny your claim that writing "might" was strawmanning.

if you don't want AIs to be utilitarian

If you're standing in front of me with a gun and telling me that you have done some calculations suggesting that, on balance, the world would be a happier place without me in it, then I would probably prefer that you not be utilitarian. This has essentially nothing to do with whether I think utilitarianism produces correct answers. (If I have a lot of faith in your reasoning and am sufficiently strong-minded, then I might instead decide that you ought to shoot me. But my likely failure to do so merely indicates typical human self-interest.)

The important part is the practical consequences for how we should build AI.

Perhaps so, in which case calling the argument "a case against utilitarianism" is simply incorrect.

Comment author: Houshalter 11 October 2015 04:16:32AM 0 points

Roko's argument implies the AI will torture. The probability that you think his argument is correct is a different matter. Roko was saying "if you think there is a 1% chance that my argument is correct," not "if my argument is correct, there is a 1% chance the AI will torture."

This really isn't important, though. The point is, if an AI has some likelihood of torturing you, you shouldn't want it to be built. You can call that self-interest, but that's admitting you don't really want utilitarianism to begin with. Which is the point.

Anyway, this is just steel-manning Roko's argument. I think the real issue is with acausal trade, not utilitarianism. And that seems to be the issue most people have with it.