cousin_it comments on An argument against indirect normativity - Less Wrong Discussion

Post author: cousin_it, 24 July 2013 06:35PM

Comment author: cousin_it, 26 July 2013 08:28:01AM

Maybe the scarier question isn't whether we can stop our AIs from blackmailing us, but whether we want to. If the AI has an opportunity to blackmail Alice for a dollar in order to save Bob from some suffering, do we want the AI to do that, or to let Bob suffer? Eliezer seems to think it's obvious that we don't want our FAI to use certain tactics, but I'm not sure why he thinks that.