
MattG comments on A few misconceptions surrounding Roko's basilisk - Less Wrong Discussion

39 Post author: RobbBB 05 October 2015 09:23PM



Comment author: [deleted] 06 October 2015 05:17:07AM 4 points [-]

FAI can't torture people, period.

The only way I could possibly see this being true is if the FAI is a deontologist.

Comment author: turchin 06 October 2015 06:17:49AM -1 points [-]

If I believe that FAI can't torture people, strong versions of RB do not work on me.

We can imagine a similar problem: if I kill person N, I will get 1 billion USD, which I could use to save thousands of lives in Africa, create FAI, and cure aging. So should I kill him? It may look rational to do so from a utilitarian point of view. But will I kill him? No, because I can't kill.

In the same way, if I know that an AI is going to torture anyone, I don't think that it is FAI, and I will not invest a cent in its creation. RB fails.

Comment author: [deleted] 06 October 2015 07:11:54AM 1 point [-]

We can imagine a similar problem: if I kill person N, I will get 1 billion USD, which I could use to save thousands of lives in Africa, create FAI, and cure aging. So should I kill him? It may look rational to do so from a utilitarian point of view. But will I kill him? No, because I can't kill.

I'm not seeing how you got to "I can't kill" from this chain of logic. It doesn't follow from any of the premises.

Comment author: turchin 06 October 2015 07:24:17AM 0 points [-]

It is not a conclusion from the previous facts. It is a fact which I know about myself and which I add here.

Comment author: [deleted] 06 October 2015 08:07:30AM *  1 point [-]

It is a fact which I know about myself and which I add here.

Relevant here is WHY you can't kill. Is it because you have a deontological rule against killing? Then you want the AI to have deontological ethics. Is it because you believe you should kill but don't have the emotional fortitude to do so? The AI will have no such qualms.

Comment author: turchin 06 October 2015 08:37:24AM -1 points [-]

It is more like an ultimatum in the territory, which was recently discussed on LW. It is a fact which I know about myself. I think it has both emotional and rational roots but is not limited to them. So I also want other people to follow it, and of course AI too. I also think that an AI would be able to find a way out of any trolley-style problems.