
gwern comments on The self-fooling problem. - Less Wrong Discussion

Post author: PuyaSharif, 10 October 2011 10:26PM (10 points)


Comments (45)


Comment author: gwern 10 October 2011 10:41:56PM 22 points

But if you randomize, then you have the risk of placing the coin at a location where it can be easily found, like on a table or on the floor. You could eliminate those risky locations by excluding them as alternatives in your randomization process, but that would mean including a chain of reasoning!

That doesn't undo the gain from removing the obvious places. As an example, look at the probability-density graphics in http://thevirtuosi.blogspot.com/2011/10/linear-theory-of-battleship.html . Imagine you smooth them out to remove the 'obvious' (initially high-probability) places. Then you pick a square at random according to that smoothed distribution. How does your other self undo this process?
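The recipe above can be sketched in a few lines (a toy illustration, not code from the thread; the grid values, the `cutoff` threshold, and the `hide_square` name are all illustrative assumptions):

```python
import random

def hide_square(density, cutoff=0.02):
    """Pick a hiding cell by (1) dropping the 'obvious' high-probability
    cells, then (2) sampling at random from the flattened remainder.

    density: dict mapping cell -> prior probability the seeker checks it.
    cutoff: cells with prior above this are excluded as too obvious
    (an arbitrary threshold chosen for illustration).
    """
    flattened = {c: p for c, p in density.items() if p <= cutoff}
    cells = list(flattened)
    weights = [flattened[c] for c in cells]
    # Because the final pick is a random draw, the seeker cannot
    # deterministically reverse-engineer which cell was chosen.
    return random.choices(cells, weights=weights)[0]

# Example: a 10x10 Battleship-style grid where the center is 'obvious'.
grid = {(r, c): 0.01 for r in range(10) for c in range(10)}
grid[(5, 5)] = 0.05
print(hide_square(grid))  # a random non-obvious cell; varies run to run
```

The point of the sketch is that excluding the obvious cells is a one-way filter: the deterministic reasoning only narrows *where not to hide*, while the actual location comes from an irreversible random draw.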

(You also don't formulate it right. If the memory-wiped self is told that finding the coin wins, and also that a former self placed the coin for him to find, wouldn't he infer that his former self would be trying to help him win and so would begin his search at a Schelling point? He wouldn't even consider trying to beat an adversarial strategy like 'randomize' because he doesn't think there's an adversary!)

Comment author: Viliam_Bur 11 October 2011 11:52:36AM 5 points

If the memory-wiped self is told that finding the coin wins, and also that a former self placed the coin for him to find, wouldn't he infer that his former self would be trying to help him win [...]?

The FINDER should hear the same story as the HIDER, with the only change being that "win = find a coin". The finder should also know that the hider received the false information that "win = not find a coin". The finder should also know that the hider was told that the finder will receive the false information that "win = find a coin", et cetera. (It seems like an infinite chain of information, but both hider and finder only need to receive a finite pattern describing it.)

Now we have a meta-problem: after such explanations, wouldn't both hider and finder suspect that they are being given false information? If so, should each of them assign probability 50% to being lied to? Or is there some asymmetry, for example that rewarding "not find a coin" could make more sense to an outside spectator (who makes the rules) than rewarding "find a coin"? A perfectly rational spectator should prevent the hider and finder from guessing which one of them is being lied to. But if the chance of either being right or wrong is exactly 50%, why should they even try?
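The last question can be made concrete with a quick expected-value check (a sketch under the stated assumption that a player assigns probability 0.5 to having been lied to; `p` stands for the probability the coin is found under whatever strategies are chosen):

```python
def expected_payoff(p, p_lied=0.5):
    """Expected payoff for a player told 'win = find a coin'.

    p: probability the coin is actually found.
    p_lied: the player's credence that their stated goal is a lie.
    With prob (1 - p_lied) the stated goal is real (win iff found);
    with prob p_lied the true goal is the opposite (win iff not found).
    """
    return (1 - p_lied) * p + p_lied * (1 - p)

# At p_lied = 0.5 the payoff is 0.5 no matter what p is, so no strategy
# beats any other -- which is exactly the "why should they even try?" worry.
print(expected_payoff(0.0), expected_payoff(1.0))  # 0.5 0.5
```

So if the spectator really does keep both players at exactly 50% credence of being lied to, effort is wasted by construction; any asymmetry that shifts that credence is what restores an incentive to try.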

Comment author: ArisKatsaris 11 October 2011 12:13:38PM 8 points

All those are complications that needn't arise with a slightly different formulation. Just imagine that we're talking about someone who for fun decides to put such a challenge to their future self.

Anyway, we can postulate a person who hides something as best they can, then erases their own memory, then tries to locate what they previously hid. Both the hider version and the searcher version try to do the best they can, because that's the maximum amount of fun for both of them. (The searcher will reintegrate the hider portion of their memories afterwards.)

Comment author: dlthomas 11 October 2011 05:18:52PM 1 point

I recall something like this coming up a few times in fiction: someone erases their memory of something because they need to not know it for a time, and then, later, must re-discover it.

Comment author: Alejandro1 11 October 2011 05:24:00PM 1 point
Comment author: dlthomas 11 October 2011 05:27:41PM 1 point

Thank you.