Jay_Schweikert comments on Problem of Optimal False Information - Less Wrong

Post author: Endovior 15 October 2012 09:42PM

Comment author: Jay_Schweikert 16 October 2012 05:28:13PM 1 point

Can we solve this problem by slightly modifying the hypothetical to say that Omega computes your utility function perfectly in every respect except to whatever extent you care about truth for its own sake? Depending on exactly how we define Omega's capabilities and the concept of utility, there probably is a sense in which the answer really is determined by definition (or in which the example is impossible to construct). But I took the spirit of the question to be "you are effectively guaranteed to get a massively huge dose of utility/disutility in basically every respect, but it's the product of believing a false/true statement -- what say you?"