cousin_it comments on An argument against indirect normativity - Less Wrong

Post author: cousin_it | 24 July 2013 06:35PM


Comment author: Karl | 24 July 2013 11:39:03PM | 3 points

By that term I simply mean Eliezer's idea that the correct decision theory ought to use a maximization vantage point with a no-blackmail equilibrium.
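A rough toy model may help (my own illustration with made-up payoff numbers, not Eliezer's actual formalism): if a profit-seeking blackmailer only issues threats it expects to gain from, then an agent credibly committed to never paying is never threatened at all, which is the no-blackmail equilibrium.

```python
# Toy model of a no-blackmail equilibrium (hypothetical payoffs; an
# illustration, not Eliezer's formalism).

def blackmailer_threatens(victim_pays_if_threatened: bool,
                          payment: float, threat_cost: float) -> bool:
    """A profit-seeking blackmailer only threatens if it expects to gain."""
    expected_gain = payment if victim_pays_if_threatened else 0.0
    return expected_gain > threat_cost

for policy, pays in [("pays up", True), ("never pays", False)]:
    threatened = blackmailer_threatens(pays, payment=100.0, threat_cost=1.0)
    print(policy, "->", "threatened" if threatened else "not threatened")

# Output:
# pays up -> threatened
# never pays -> not threatened   (no blackmail ever occurs)
```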

Comment author: cousin_it | 26 July 2013 08:28:01AM | 1 point

Maybe the scarier question isn't whether we can stop our AIs from blackmailing us, but whether we want to. If the AI has an opportunity to blackmail Alice for a dollar to save Bob from some suffering, do we want the AI to do that, or to let Bob suffer? Eliezer seems to think that we obviously don't want our FAI to use certain tactics, but I'm not sure why he thinks that.
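To make the trade-off concrete, here is a minimal sketch with hypothetical utility numbers (the one-util dollar cost to Alice and the ten-util suffering for Bob are assumptions for illustration, not anything from the thread): a naive total-utility maximizer blackmails whenever Bob's loss exceeds Alice's.

```python
# Minimal sketch of the Alice/Bob trade-off (hypothetical utilities).

COST_TO_ALICE = 1     # utils Alice loses when blackmailed for a dollar
BOB_SUFFERING = 10    # utils Bob loses if the AI refuses to blackmail

def total_utility(policy: str) -> int:
    """Summed utility change for Alice and Bob under a given policy."""
    if policy == "blackmail":
        return -COST_TO_ALICE    # Alice pays a dollar; Bob is spared
    if policy == "no_blackmail":
        return -BOB_SUFFERING    # Alice keeps her dollar; Bob suffers
    raise ValueError(policy)

# A naive total-utility maximizer picks blackmail whenever Bob's suffering
# outweighs Alice's dollar -- exactly the tension raised above.
print(max(["blackmail", "no_blackmail"], key=total_utility))  # -> blackmail
```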