Endovior comments on Problem of Optimal False Information - Less Wrong

Post author: Endovior 15 October 2012 09:42PM




Comment author: RichardKennaway 16 October 2012 08:40:52PM -1 points

If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree.

That's the problem. The question is the rationalist equivalent of asking "Suppose God said he wanted you to kidnap children and torture them?" I'm telling Omega to just piss off.

Comment author: Endovior 17 October 2012 07:02:21AM 0 points

The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer what is instrumentally rational over what is epistemically rational, because it's rational to win, not to complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice, that implies you have made an error in assessing the relative utilities of the two choices. Sure, Omega's being a jerk. It does that. But that doesn't change the situation: you are being asked to choose between two options of differing utility, and you are trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own "rationality". That implies a flaw in your system of rationality.

Comment author: RichardKennaway 17 October 2012 08:42:51AM -2 points

The bearing this has on applied rationality is that this problem serves as a least convenient possible world

When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.

Comment author: wedrifid 17 October 2012 09:25:16AM 0 points

When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it.

Leveling up is great, but I'm still not going to try to beat up an entire street gang just to steal their bling. I don't have that level of combat prowess right now, even though it is entirely possible to level up enough for that kind of activity to be possible and safe. It so happens that neither I nor any non-fictional human is at that level or likely to be soon. In the same way, there is a huge space of possible agents that would be able to calculate true information that it would be detrimental for me to have. For most humans, just another particularly manipulative human would be enough; for all the rest, any old superintelligence would do.

The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.

No, this is a cop-out. Humans do encounter agents more powerful than themselves, including agents that are more intelligent and able to exploit human weaknesses. Just imagining yourself to be more powerful and better able to "handle the truth" isn't especially useful, and dismissing all such scenarios as being like God contending with his own omnipotence would be irresponsible.

Comment author: RichardKennaway 17 October 2012 09:35:35AM -1 points

I don't have that level of combat prowess right now

Omega isn't showing up right now.

It so happens that neither I nor any non-fictional human is at that level or likely to be soon.

No non-fictional Omega is at that level either.

Comment author: wedrifid 17 October 2012 09:46:06AM 0 points

Then it would seem you need to delegate your decision theoretic considerations to those better suited to abstract analysis.