The second formulation is simpler, but it leads to absurdities such as counterfactual mugging. This is a failure of the theory.
If you don't think so, try a counterfactual mugging on everyday people, and then try it at a LessWrong meeting. Which group do you think will be more likely to come out ahead, in this practical example?
As it says on the wiki:
If some particular ritual of cognition—even one that you have long cherished as "rational"—systematically gives poorer results relative to some alternative, it is not rational to cling to it.
The counterfactual mugging requires that the deal be offered by an entity that is known to be both perfectly honest and a perfect predictor. If Omega tries to counterfactually mug you, you should pay him. If I try to counterfactually mug you, paying up would be significantly less wise.
A sufficiently good decision theory should get both of those cases right.
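The distinction between the two cases can be sketched as a simple expected-value calculation. The payoffs below are hypothetical stand-ins for the usual $100/$10,000 version of the thought experiment, and `p_genuine` is an assumed parameter summarizing how much you trust the mugger to be an honest, perfect predictor:

```python
# A minimal expected-value sketch of the counterfactual mugging.
# Setup (hypothetical numbers): the mugger flips a fair coin; on tails
# it asks the "always pay" policy for $100; on heads it gives $10,000
# to any agent it (as a perfect predictor) expects would have paid.

def ev_of_paying(p_genuine: float) -> float:
    """Ex-ante expected value of the always-pay policy.

    p_genuine: probability the mugger is honest and really is a perfect
    predictor who would have paid out on heads (1.0 for Omega, near 0.0
    for a stranger making the same speech).
    """
    p_heads = 0.5
    payout = 10_000  # received on heads, if the offer is genuine
    cost = 100       # paid on tails, since this policy always pays
    return p_heads * p_genuine * payout - (1 - p_heads) * cost

# The always-refuse policy has expected value 0 in both cases.
print(ev_of_paying(1.0))  # Omega: 4950.0 -- paying wins ex ante
print(ev_of_paying(0.0))  # random stranger: -50.0 -- paying just loses
print(100 / 10_000)       # break-even trust level: pay iff p_genuine > 0.01
```

Under these (made-up) numbers, the same policy is a clear win against Omega and a guaranteed loss against an untrusted stranger, which is exactly the pair of cases a good decision theory has to get right.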
- This thread has run its course. You will find newer threads in the discussion section.
Another discussion thread (the fourth) has reached the (arbitrary?) 500-comment threshold, so it's time for a new thread for Eliezer Yudkowsky's widely praised Harry Potter fanfic.
Most of the paratext and fan-made resources are listed on the author's page. There is also AdeleneDawner's collection of most of the previously published Author's Notes.
Older threads: one, two, three, four. By tag.
Newer threads are in the Discussion section, starting from Part 6.
Spoiler policy as suggested by Unnamed and approved by Eliezer, me, and at least three other upmodders:
It would also be quite sensible and welcome to continue the practice of declaring at the top of your post which chapters you are about to discuss, especially newly published ones, so that people who haven't yet seen them can stop reading in time.