
turchin comments on A few misconceptions surrounding Roko's basilisk - Less Wrong Discussion

39 Post author: RobbBB 05 October 2015 09:23PM




Comment author: turchin 06 October 2015 07:24:17AM 0 points [-]

It is not a conclusion from previous facts. It is a fact which I know about myself and which I am adding here.

Comment author: [deleted] 06 October 2015 08:07:30AM *  1 point [-]

It is a fact which I know about myself and which I am adding here.

Relevant here is WHY you can't kill. Is it because you have a deontological rule against killing? Then you want the AI to have deontological ethics. Is it because you believe you should kill but don't have the emotional fortitude to do so? The AI will have no such qualms.

Comment author: turchin 06 October 2015 08:37:24AM -1 points [-]

It is more like an ultimatum in the territory, which was recently discussed on LW. It is a fact which I know about myself. I think it has both emotional and rational roots but is not limited to them. So I also want other people to follow it, and of course AI too. I also think that an AI would be able to find a way out of any trolley-style problems.