I still have no clue how Thrun's method could be correct in the ideal case. He's relying on having some clue as to how much money there is in the envelopes.
Here's a restatement of the problem that makes my objection clearer:
"Hi again."
"Omega, my pal! Thanks for the million dollars. That was pretty sweet."
"I've got another offer for you, my fairy AI-son. Here is a box with some amount of Gxnthrzian currency. It's yours if you do one little thing for me."
"Umm. How could I spend that?"
"They will make contact with Earth in two weeks. It'll be awesome. Anyway, so that little thing - here's a document. Sort of a Gxnthrzian IOU. Just scratch one of those two circles, and the box is yours. And don't squint - you can't actually read it with your current technology level, and of course it's in an alien language you don't know."
I take the circular sheet of plastic, but say, "Waaait. How much is the IOU for?"
"The IOU is for one third as much as money in the box. I'll handle the payment for you - no fee."
I turn the sheet around and arbitrarily select one of the two circles.
Omega interrupts before I can scratch it. "If you pick that circle, then after the IOU is paid, you will have a grand total of one Atrazad, fifteen thousand Joks, and two Libgurs."
"Well, what if I check the other one?"
"Sore wa himitsu desu!"
I still have no clue how Thrun's method could be correct in the ideal case. He's relying on having some clue as to how much money there is in the envelopes.
The following is an elementary Bayesian analysis of why (a de-randomized version of) Thrun's method works. Does this not include the "ideal case" for some reason?
Let A and B be two fixed, but unknown-to-you, numbers. Let Z be a third number. (You may suppose that Z is known or unknown, random or not; it doesn't matter.) Assume that, so far as your state of knowledge is concerned,
(1) A and B are distinct with probability 1;
(2) A < B ≤ Z and B < A ≤ Z are equally likely; that is, p(A < B ≤ Z) = p(B < A ≤ Z);
(3) A < Z < B has strictly non-zero probability; that is, p(A < Z < B) > 0.
It is then clear that
p(A ≤ Z) = p(B < A ≤ Z) + p(A < B ≤ Z) + p(A < Z < B),
because the propositions on the RHS are mutually exclusive and exhaustive special cases of the proposition on the LHS. With conditions (2) and (3) above, it then follows that
p(B < A ≤ Z) / p(A ≤ Z) < 1/2.
For, B < A ≤ Z is one of two equally probable and mutually exclusive events that, together, fail to exhaust all the ways that A ≤ Z could happen. (Condition (3) in particular makes the inequality strict.) Since p(P | Q) = p(P & Q) / p(Q) by the definition of conditional probability, this then becomes
p(B < A | A ≤ Z) < 1/2.
In other words, upon learning that A ≤ Z, you ought to think that A is probably smaller than B.
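As a sanity check on that conclusion, here is a minimal Monte Carlo sketch. The particular choices - smaller amount uniform on [0, 10], threshold Z fixed at 8, a fair coin deciding which amount is A - are arbitrary illustrative assumptions that happen to satisfy conditions (1)-(3):

```python
# Monte Carlo sanity check of p(B < A | A <= Z) < 1/2 under conditions (1)-(3).
# All distributional choices here are arbitrary illustrative assumptions.
import random

random.seed(0)
Z = 8.0                 # any threshold with p(A < Z < B) > 0 will do
conditioned = 0         # samples with A <= Z
b_smaller = 0           # of those, samples with B < A

for _ in range(1_000_000):
    x = random.uniform(0.0, 10.0)            # smaller of the two amounts
    if random.random() < 0.5:                # fair coin: which amount is A?
        A, B = x, 2 * x
    else:
        A, B = 2 * x, x
    if A <= Z:
        conditioned += 1
        if B < A:
            b_smaller += 1

print("estimated p(B < A | A <= Z):", b_smaller / conditioned)
# With these particular choices the estimate comes out near 1/3, below 1/2.
```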
Thrun's algorithm is correct. To see why, note that no matter how the envelope contents are distributed, all situations faced by the player can be grouped into pairs, where each pair consists of the situations (x, 2x) and (2x, x) - holding x with 2x in the other envelope, and the reverse - which are equally likely. Within each pair the chance of switching from x to 2x is higher than the chance of switching from 2x to x, because the switching probability f is strictly decreasing, so f(x) > f(2x) by construction.
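A small simulation sketch of that pairing argument, for anyone who wants to see it numerically. Here f is taken to be 1/(1 + y) and the envelope contents are drawn uniformly; both are arbitrary illustrative assumptions rather than Thrun's actual choices, and all that matters is that f is strictly decreasing:

```python
# Sketch of the pairing argument: switch with probability f(held amount),
# where f is strictly decreasing so f(x) > f(2x). The specific f and the
# distribution of envelope contents are illustrative assumptions only.
import random

random.seed(1)

def f(y):
    return 1.0 / (1.0 + y)   # any strictly decreasing function into (0, 1) works

def play_once():
    x = random.uniform(0.0, 10.0)                       # the pair is {x, 2x}
    held, other = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    switched = other if random.random() < f(held) else held
    return held, switched                               # (never switch, randomized switch)

n = 1_000_000
never_total = 0.0
randomized_total = 0.0
for _ in range(n):
    h, s = play_once()
    never_total += h
    randomized_total += s

print("average payoff, never switching:     ", never_total / n)
print("average payoff, randomized switching:", randomized_total / n)
# The randomized strategy averages a bit more, since the x -> 2x switch
# (probability f(x)) happens more often than the 2x -> x switch (probability f(2x)).
```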
BTW, we have an ongoing discussion there about some math aspects of the algorithm.
On the other hand, if, hypothetically, Scott Aaronson should say, "Eliezer, your question about why 'energy' in the Hamiltonian and 'energy' in General Relativity are the same quantity, is complete nonsense, it doesn't even have an answer, I can't explain why because you know too little," I would be like "Okay."
This is one of a number of comments by Eliezer from that era that seem to imply that he thought Scott Aaronson was a physicist. I don't know exactly what gave him that impression (it presumably has something to do with the fact that Scott studies quantum computing), but (just to make it explicit for the record) it is false. As I'm sure Eliezer knows by now, Scott Aaronson is a theoretical computer scientist. He knows a lot more (one presumes) about P and NP than about general relativity. (In fact, the cultural difference that exists between him and physicists is one of the classic themes of his blog.)
Scott Aaronson is repaying the favor by referring people to Yudkowsky's writings on FAI and calling him an "AI visionary".
It took me a while to realize that, in context, this seems to amount to a clever insult.
I was mostly just being snarky, but Eliezer is more famous as a rationality blogger and fanfiction author than as an AI researcher.
Today's post, The Rhythm of Disagreement, was originally published on 01 June 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was A Premature Word on AI, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.