Endovior comments on Problem of Optimal False Information - Less Wrong

Post author: Endovior 15 October 2012 09:42PM


Comment author: thomblake 16 October 2012 04:02:05PM 4 points [-]

I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.

Vaguely realistic example: You believe that the lottery is a good bet, and as a result win the lottery.

Hollywood example: You believe that the train will leave at 11:10 instead of 10:50, and so miss the train, setting off an improbable-seeming sequence of life-changing events such as meeting your soulmate, getting the job of your dreams, and finding a cure for aging.

Omega example: You believe that "hepaticocholangiocholecystenterostomies" refers to surgeries linking the gall bladder to the kidney. This subtly changes the connections in your brain such that over time you experience a great deal more joy in life, as well as curing your potential for Alzheimer's.

Comment author: Desrtopa 16 October 2012 05:26:55PM 1 point [-]

The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds. On the other hand, that specific example would have to alter pretty much my entire epistemic landscape, so it's hard to measure the utility difference between the me who believes the lottery is a bad deal and the altered person who wins it. The second falls into the category I mentioned previously of things that increase my utility only as I find out they're wrong; when I arrive, I will find out that the train has already left.

As for the third, I suspect that there isn't a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.

Comment author: thomblake 16 October 2012 05:38:46PM 3 points [-]

Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and impossible to anticipate.

Comment author: Endovior 17 October 2012 06:50:20AM 0 points [-]

A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect. They instead produce something that matches the criteria even better than anything you were aware of.

Comment author: thomblake 16 October 2012 05:42:16PM 0 points [-]

The second falls into the category I mentioned previously of things that increase my utility only as I find out they're wrong; when I arrive, I will find out that the train has already left.

You can subtly change that example to eliminate that problem. Instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and then you never go back and check when the train was.

Comment author: Desrtopa 16 October 2012 05:46:20PM 0 points [-]

The example fails the "that you would normally reject outright" criterion, though, unless I already have well-established knowledge of the actual train schedule.