CuSithBell comments on The Rhythm of Disagreement - Less Wrong

Post author: Eliezer_Yudkowsky 01 June 2008 08:18PM


Comment author: CuSithBell 24 May 2012 09:56:04PM 0 points

1/4 of the smallest possible amount you could win doesn't count as a large constant benefit in my view of things, but that's a bit of a nitpick. In any case, what do you think about the rest of the post?

Comment author: cousin_it 24 May 2012 10:03:36PM 0 points

Oh, sorry, just noticed the last part of your comment. It seems wrong: you can get a higher than 50% chance of picking the better envelope. The degenerate case is when you already know there's only one possibility, e.g. 1 dollar in one envelope and 2 dollars in the other. If you open an envelope and see 1 dollar, then you know you must switch, so you get the better envelope with probability 100%. You can get fuzzier cases by smearing out the distribution of envelopes continuously, starting from that degenerate case, and using the f(x) strategy. The chance of picking the better envelope will fall below 100%, but I think it will stay above 50%. Do you want my opinion on anything else? :-)
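This is easy to check by simulation. A hypothetical sketch (none of these names come from the thread): `f` is the switching rule — switch with probability f(x) after seeing x dollars — and `draw_small` generates the smaller amount. In the degenerate case (small amount always 1, switch exactly on seeing 1) every trial wins; a smooth decreasing f on a smeared-out distribution still stays above 50%.

```python
import math
import random

def simulate(f, draw_small, trials=100_000, seed=0):
    """Estimate the chance of ending up with the larger envelope when we
    switch with probability f(x) after observing x dollars."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        small = draw_small(rng)
        amounts = (small, 2 * small)
        seen = rng.choice(amounts)          # open one envelope at random
        switch = rng.random() < f(seen)     # randomized switching rule
        final = sum(amounts) - seen if switch else seen
        wins += final == 2 * small
    return wins / trials

# Degenerate case: small amount is always 1, switch iff we see 1 dollar.
print(simulate(lambda x: 1.0 if x == 1 else 0.0, lambda rng: 1))  # 1.0

# Smeared-out case: small amount uniform on 1..10, smooth decreasing f.
print(simulate(lambda x: math.exp(-x), lambda rng: rng.randint(1, 10)))
```

The second run lands noticeably above 0.5, since a decreasing f makes you more likely to switch away from the small amount than from the large one.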

Comment author: CuSithBell 24 May 2012 10:09:34PM 0 points

That's definitely cheating! We don't have access to the means by which X is generated. In the absence of a stated distribution, can we still do better than 50%?

Comment author: cousin_it 25 May 2012 12:10:40AM 1 point

Well, Thrun's algorithm does better than 50% for every distribution. But no matter what f(x) we choose, there will always be distributions that make the chance arbitrarily close to 50% (say, less than 50%+epsilon). To see why, note that for any given f(x) we can construct a distribution supported far enough from zero that f(x) is less than epsilon everywhere on the support, so the chance of switching prescribed by the algorithm is also less than epsilon.
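The size of the edge is easy to compute exactly: when the small amount is s, the chance of winning is 1/2 + (f(s) - f(2s))/2. A hypothetical check (f(x) = e^-x is just an example of a decreasing rule, not Thrun's specific choice) shows the edge is real near zero and vanishes when the distribution is pushed far from zero:

```python
import math

def edge(f, smalls):
    """Advantage over 1/2 when the small amount is uniform over `smalls`:
    P(win) - 1/2 = E[f(s) - f(2s)] / 2."""
    return 0.5 * sum(f(s) - f(2 * s) for s in smalls) / len(smalls)

f = lambda x: math.exp(-x)
print(edge(f, range(1, 11)))         # a real edge for amounts near zero
print(edge(f, range(1000, 1010)))    # edge is effectively zero far from zero
```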

The next question is whether we can find any other randomized algorithm that does better than 50%+epsilon on any distribution. The answer to that is also no.

1) Note that any randomized algorithm must decide whether to switch or not, based on the contents of the envelope and possibly a random number generator. In other words, it must be described by a function f(x) like in Thrun's algorithm. f(x) doesn't have to be monotonic, but must lie between 0 and 1 inclusive for every x.

2) For every such function f(x), we will construct a distribution of envelopes that makes it do worse than 50%+epsilon.

3) Let's consider for each number x the degenerate distribution D_x that always puts x dollars in one envelope and 2*x in the other.

4) To make the algorithm do worse than 50%+epsilon on distribution D_x, we need the chance of switching at 2*x to be not much lower than the chance of switching at x. Namely, we need the condition f(2*x)>f(x)-epsilon.

5) Now we only need to prove that there exists an x such that f(2*x)>f(x)-epsilon. We will prove that by reductio ad absurdum. If we had f(2*x)≤f(x)-epsilon for every x, we could iterate that and obtain f(x)≥f(x*2^n)+n*epsilon for every x and n; since f(x*2^n)≥0, taking n large enough would force f(x) above 1, contradicting f(x)≤1. QED.
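Step 5 can also be checked mechanically. This is a hypothetical sketch, not anything from the thread: since 0 ≤ f ≤ 1, the total drop along the doubling chain x, 2x, 4x, ... is at most 1, so within about 1/epsilon doublings we must hit an x where f(2x) > f(x) - epsilon.

```python
import math

def find_flat_doubling(f, x0=1.0, eps=0.01):
    """Return an x on the chain x0, 2*x0, 4*x0, ... with f(2*x) > f(x) - eps.
    If every step dropped by at least eps, then after ceil(1/eps) doublings
    f would have fallen by more than 1 -- impossible for f bounded in [0, 1]."""
    x = x0
    for _ in range(math.ceil(1 / eps) + 1):
        if f(2 * x) > f(x) - eps:
            return x
        x *= 2
    raise AssertionError("unreachable for any f with values in [0, 1]")

# Even a steeply decreasing switching rule flattens out eventually.
f = lambda t: 1 / (1 + t)
x = find_flat_doubling(f, eps=0.01)
assert f(2 * x) > f(x) - 0.01
```

The x returned is exactly the D_x from step 3 on which this particular f does worse than 50%+epsilon.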

Comment author: CuSithBell 25 May 2012 03:21:12AM 0 points

Yes, that all looks sensible. The point I'm trying to get at - the one I think Eliezer was gesturing towards - was that for any f and any epsilon, f(x) - f(2x) < epsilon for almost all x, in the formal sense. The next step is less straightforward - does it then follow that, prior to the selection of x, our expectation for getting the right answer is 50%? This seems to be Eliezer's implication. However, it seems also to rest on an infinite uniform random distribution, which I understand can be problematic. Or have I misunderstood?

Comment author: cousin_it 25 May 2012 08:16:36AM 0 points

That's called an improper prior. Eliezer mentions in the post that it was his first idea, but turned out to be irrelevant to the analysis.

Comment author: CuSithBell 25 May 2012 06:27:19PM 0 points

So I guess we're back to square one, then.

Comment author: cousin_it 25 May 2012 07:11:11PM 0 points

I don't understand. Which part are you still confused about? To me the whole thing seems quite clear.

Comment author: CuSithBell 25 May 2012 08:00:53PM 0 points

How did Eliezer determine that the expected benefit of the algorithm over random chance is zero?

Comment author: cousin_it 25 May 2012 08:49:23PM 1 point

He didn't say that; he said the benefit gets closer and closer to zero if you modify the setup in a certain way. I couldn't find an interpretation that makes his statement correct, but at least it's meaningful.