AlexMennen comments on Problem of Optimal False Information - Less Wrong

Post author: Endovior 15 October 2012 09:42PM




Comment author: AlexMennen 16 October 2012 12:05:54AM 1 point

"I can fly" doesn't sound like a particularly high-utility false belief. It sounds like you are attacking a straw man. I'd assume that if the false information is a package of pieces of false information, then the entire package is optimized for being high-utility.

Comment author: RolfAndreassen 16 October 2012 04:56:48AM 0 points

"I can fly" doesn't sound like a particularly high-utility false belief.

True, but that's part of my point: the problem does not specify that the false belief has high utility, only that it has the highest utility possible for a false belief. There is no lower bound.

Additionally, any false belief will eventually bring you into conflict with reality; "I can fly" just illustrates this dramatically.

Comment author: AlexMennen 16 October 2012 06:11:56AM * 4 points

Of course most false beliefs will have some negative-utility consequences. That does not show that every false belief is net-negative. The vastness of the space of possible beliefs suggests there are likely to be many approximately harmless false beliefs, and some very beneficial ones, despite the general tendency for false beliefs to reduce utility. In fact, Kindly gives an example of each here.

In the example of believing some sufficiently hard-to-factor composite to be prime, you would not naturally be able to cause a conflict anyway, since it is too hard to show that the number is not prime. In the FAI example, it might have to keep you in the dark for a while and then fool you into thinking that someone else had created an FAI independently, so you would never learn that your game was actually an FAI. The negative utility from resolving the conflict that way would be negligible compared to the benefits. The negative utility arising from belief-conflict resolution in your example of "I can fly" does not even come close to generalizing to all possible false beliefs.
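As a toy illustration of how a false "this number is prime" belief can survive the checks its holder would naturally run, here is a minimal Python sketch (my own example, not from the thread): the composite 341 = 11 × 31 passes the base-2 Fermat test, so a believer relying on that test would never encounter contradicting evidence, while exhaustive trial division reveals the truth.

```python
# A composite number can pass a weak primality check, so the false belief
# "341 is prime" survives a base-2 Fermat test even though 341 = 11 * 31.

def fermat_test(n, base=2):
    """Fermat primality test: True means n is *probably* prime (can be fooled)."""
    return pow(base, n - 1, n) == 1

def is_prime_trial(n):
    """Exhaustive trial division: reliable ground truth for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

n = 341  # the smallest base-2 Fermat pseudoprime
print(fermat_test(n))     # True  -- the weak test "confirms" the false belief
print(is_prime_trial(n))  # False -- 341 is actually composite
```

The analogy is loose (real primality testing is easy; it is factoring that is hard), but it shows the structural point: a false belief causes no conflict as long as the evidence that would refute it is too expensive for the believer to obtain.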