Nick_Tarleton comments on Open Thread June 2010, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Dark arts, huh? Some time ago I put forward the following scenario:
Bob wants to kill a kitten. The FAI wants to save the kitten because it's a good thing according to our CEV. So the FAI threatens Bob with 50 years of torture unless Bob lets the kitten go. The FAI has two distinct reasons why threatening Bob is okay: (a) Bob will comply, so there will be no need to torture him; (b) the FAI is lying anyway. Expected utility reasoning says the FAI is doing the Right Thing. But do we want that?
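The scenario above can be made concrete with a toy expected-utility calculation. All the numbers below are illustrative assumptions (made up for this sketch, not part of the original comment), but they show how reason (b) dominates: a bluff gets the kitten saved without ever risking the torture penalty, while even a sincere threat can come out worse than doing nothing if torture is bad enough.

```python
# Toy expected-utility comparison for the FAI's three options.
# All utilities and probabilities are illustrative assumptions.

U_KITTEN_SAVED = 10      # utility of the kitten surviving (per our CEV, say)
U_KITTEN_DEAD = 0        # baseline: kitten dies
U_TORTURE = -1000        # disutility of actually torturing Bob for 50 years
P_COMPLY = 0.99          # assumed probability Bob backs down when threatened

# Option 1: do nothing -> Bob kills the kitten.
eu_do_nothing = U_KITTEN_DEAD

# Option 2: threaten as a bluff (reason b): torture never actually happens.
eu_bluff = P_COMPLY * U_KITTEN_SAVED + (1 - P_COMPLY) * U_KITTEN_DEAD

# Option 3: threaten sincerely (reason a relies on P_COMPLY being near 1):
# if Bob refuses, the FAI really tortures him.
eu_sincere = P_COMPLY * U_KITTEN_SAVED + (1 - P_COMPLY) * (U_KITTEN_DEAD + U_TORTURE)

print(eu_do_nothing, eu_bluff, eu_sincere)
```

With these particular numbers the bluff wins (9.9), doing nothing scores 0, and the sincere threat actually loses (-0.1) because the small chance of carrying out the torture outweighs the kitten. The expected-utility argument for threatening Bob really does lean on (a) or (b) holding.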
(Yes, this is yet another riff on consequentialism, deontologism and lying. Should FAIs follow deontological rules? For that matter, should humans?)
Expected utility reasoning with a particular utility function says the FAI is right. If we disagree, our preferences are presumably described by some other utility function.