
Kindly comments on Problem of Optimal False Information - Less Wrong Discussion

16 Post author: Endovior 15 October 2012 09:42PM



Comment author: Kindly 15 October 2012 10:59:57PM *  20 points

Least optimal truths are probably really scary and to be avoided at all costs. At the risk of helping everyone here generalize from fictional evidence, I will point out the similarity to the Cthaeh in The Wise Man's Fear.

On the other hand, a reasonably okay falsehood to end up believing is something like "35682114754753135567 is prime", which I don't expect to affect my life at all if I suddenly start believing it. The optimal falsehood can't possibly be worse than that. Furthermore, if you value not being deceived about important things then the optimality of the optimal falsehood should take that into account, making it more likely that the falsehood won't be about anything important.
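As an aside, the arithmetic claim is checkable. A sketch in Python: a deterministic Miller-Rabin test using the first twelve primes as witnesses is known to be correct for all n below roughly 3.3 × 10^24, which comfortably covers the 20-digit number above (the code makes no claim about whether that particular number is prime).

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin primality test, valid for n < 3.3 * 10**24."""
    if n < 2:
        return False
    small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small_primes:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:  # this witness set is deterministic for n < 3.317e24
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness a proves n composite
    return True

print(is_prime(35682114754753135567))
```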

Edit: Would the following be a valid falsehood? "The following program is a really cool video game: <code that is actually for a Friendly AI>"

Comment author: AlexMennen 15 October 2012 11:54:54PM 16 points

Would the following be a valid falsehood? "The following program is a really cool video game: <code that is actually for a Friendly AI>"

I think we have a good contender for the optimal false information here.

Comment author: Endovior 16 October 2012 12:21:47AM 3 points

The problem specifies that something will be revealed to you, which will program you to believe it, even though it is false. It doesn't explicitly limit what can be injected into the information stream. So, assuming you would value the existence of a Friendly AI, yes, that's entirely valid as optimal false information. Cost: you are temporarily wrong about something, and realize your error soon enough.

Comment author: EricHerboso 16 October 2012 03:02:44AM 3 points

Except that after executing the code, you'd know it was FAI and not a video game, which goes against the OP's rule that you continue to honestly believe the falsehood.

I guess it works if you replace "FAI" in your example with "FAI who masquerades as a really cool video game to you and everyone you will one day contact" or something similar, though.

Comment author: Endovior 16 October 2012 12:25:57PM 2 points

The original problem didn't specify how long you'd continue to believe the falsehood. You do, in fact, believe it, so stopping believing it would be at least as hard as changing your mind in ordinary circumstances (not easy, but not impossible). The code for FAI probably doesn't run on your home computer, so there's that... you go off looking for someone who can help you with your video game code, someone else figures out what it is you've come across and gets the hardware to implement it, and suddenly the world gets taken over. Depending on how attentive you were to the process, you might not connect the two immediately, but if you were there when the people were running things, then that's pretty good evidence that something more serious than a video game happened.

Comment author: Endovior 15 October 2012 11:55:41PM 1 point

Yes, least optimal truths are really terrible, and the analogy is apt. You are not a perfect rationalist. You cannot perfectly simulate even one future, much less infinitely many possible ones. The truth can hurt you, or possibly kill you, and you have just been warned about it. This problem is a demonstration of that fact.

That said, if your terminal value is not truth, a most optimal falsehood (not merely a reasonably okay one) would be a really good thing. Since you are (again) not a perfect rationalist, there's bound to be some false belief that would lead you to better consequences than your current beliefs do.