Imagine that one day you come home to see your neighbors milling about your house and the Publishers Clearing House (PCH) van just pulling away. You know that PCH has been running a new schtick recently of selling $100 lottery tickets to win $10,000 instead of just giving money away. In fact, you've used that very contest as a teachable moment with your kids, explaining that once the first of the 100 printed tickets was sold, scratched, and determined not to be the winner, the expected value of each remaining ticket exceeded its $100 cost, and the tickets were therefore increasingly worth buying. Now it's weeks later; most of the tickets have been sold, scratched, and turned out not to be winners, and PCH came to your house. In fact, there were only two tickets remaining. And you weren't home. Fortunately, your neighbor and best friend Bob asked if he could buy a ticket for you. Sensing a great human interest story (and lots of publicity), PCH said yes. Unfortunately, Bob picked the wrong ticket. After all your neighbors disperse and you and Bob are alone, Bob says that he'd really appreciate it if he could get his hundred dollars back. Is he mugging you? Or do you give it to him?
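To make the expected-value aside in the story concrete, here is a minimal sketch, assuming (as the story says) 100 tickets at $100 each and a single $10,000 winner:

```python
# Assumed from the story: 100 tickets at $100 each, one $10,000 winner.
prize, price, total_tickets = 10_000, 100, 100

for scratched_losers in range(5):
    remaining = total_tickets - scratched_losers
    ev = prize / remaining  # expected value of a single remaining ticket
    print(f"{remaining} tickets left: EV = ${ev:.2f}, worth buying: {ev > price}")

# With all 100 tickets unsold the EV exactly equals the $100 price; after the
# first scratched loser it rises to about $101.01 and keeps climbing.
```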
The disanalogy here is that you have a long-term social relationship with Bob that you don't have with Omega, and the $100 is an investment in that relationship.
The outcomes don't seem to be tied together as they were in the original problem: is it true that, had he won, Bob would have given you the money only if, had he lost, you would have given him the $100 back? That isn't clear.
The counterfactual anti-mugging: One day No-mega appears. No-mega is completely trustworthy etc. No-mega describes the counterfactual mugging to you, and predicts what you would have done in that situation not having met No-mega, if Omega had asked you for $100.
If you would have given Omega the $100, No-mega gives you nothing. If you would not have given Omega $100, No-mega gives you $10000. No-mega doesn't ask you any questions or offer you any choices. Do you get the money? Would an ideal rationalist get the money?
Okay, next scenario: you have a magic box with a number p inscribed on it. When you open it, either No-mega comes out (probability p) and performs a counterfactual anti-mugging, or Omega comes out (probability 1-p), flips a fair coin and proceeds to either ask for $100, give you $10000, or give you nothing, as in the counterfactual mugging.
Before you open the box, you have a chance to precommit. What do you do?
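A rough sketch of how the answer depends on p, using the payoffs stated above ($10,000 from No-mega to predicted refusers, $10,000 on heads from Omega to predicted payers, a $100 loss on tails):

```python
# Expected value of the two possible precommitments as a function of p.
def ev_precommit_to_pay(p):
    # No-mega (prob p) gives nothing; Omega (prob 1 - p) runs the mugging.
    return p * 0 + (1 - p) * (0.5 * 10_000 + 0.5 * -100)

def ev_precommit_to_refuse(p):
    # No-mega (prob p) pays $10,000; Omega (prob 1 - p) pays nothing.
    return p * 10_000 + (1 - p) * 0

for p in (0.1, 0.25, 0.5, 0.9):
    better = "pay" if ev_precommit_to_pay(p) > ev_precommit_to_refuse(p) else "refuse"
    print(f"p = {p}: pay {ev_precommit_to_pay(p):.0f}, refuse {ev_precommit_to_refuse(p):.0f} -> {better}")

# Precommitting to pay wins only when (1 - p) * 4950 > 10000 * p, i.e. p < ~0.33.
```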
Yes: there can just as easily be a superintelligence that rewards people predicted to act one way as one that rewards people predicted to act the other. Which precommitment is most rational depends on which type you expect to encounter.
I don't expect to encounter either, and on the other hand I can't rule out fallible human analogues of either. So for now I'm not precommitting either way.
You don't precommit to "give away the $100 to anyone who asks". You precommit to give away the $100 in exactly the situation I described. Or, generalizing such precommitments, you just compute your decisions on the spot, in a reflectively consistent fashion. If that's what you want to do with your future self, that is.
there can just as easily be a superintelligence that rewards people predicted to act one way as one that rewards people predicted to act the other.
Yeah, now. But after Omega really, really appears in front of you, the chance of Omega existing is about 1. The chance of No-mega is still almost nonexistent. In this problem, the existence of Omega is given. It's not something you are expecting to encounter now, just as we're not expecting to encounter eccentric Kavkan billionaires who will give you money for intending to drink a toxin. Kavka's toxin puzzle and the counterfactual mugging present a scenario that is given, and ask how you would act then.
Philosopher Kenny Easwaran reported in 2007 that:
Josh von Korff, a physics grad student here at Berkeley, has been thinking about versions of Newcomb's problem. He shared my general intuition that one should choose only one box in the standard version of Newcomb's problem, but that one should smoke in the smoking lesion example. However, he took this intuition seriously enough that he was able to come up with a decision-theoretic protocol that actually seems to make these recommendations. It ends up making some other really strange predictions, but it seems interesting to consider, and also ends up resembling something Kantian!
The basic idea is that right now, I should plan all my future decisions in such a way that they maximize my expected utility right now, and stick to those decisions. In some sense, this policy obviously has the highest expectation overall, because of how it’s designed.
Korff also reinvents counterfactual mugging:
...Here's another situation that Josh described that started to make things seem a little more weird. In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god. If one encounters a beggar, then one can choose to either give the beggar...
In Ancient Greece, while wandering on the road, every day one either encounters a beggar or a god.
If it's an iterated game, then the decision to pay is a lot less unintuitive.
My two bits: Omega's request is unreasonable.
Precommitting is something that you can only do before the coin is flipped. That's what the "pre" means. Omega's game rewards a precommitment, but Omega is asking for a commitment.
Precommitting is a rational thing to do because before the coin toss, the result is unknown and unknowable, even by Omega (I assume that's what "fair coin" means). This is a completely different course of action than committing after the coin toss is known! The utility computation for precommitment is not and should not be the same as the one for commitment.
In the example, you have access to information that pre-you doesn't (the outcome of the flip). If rationalists are supposed to update on new information, then it is irrational for you to behave like pre-you.
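A minimal sketch of the two utility computations this comment distinguishes, using the payoffs from the post:

```python
# Before the toss (precommitment): both branches are still possible.
ev_precommit_pay    = 0.5 * (-100) + 0.5 * 10_000  # = 4950
ev_precommit_refuse = 0.5 * 0 + 0.5 * 0            # = 0

# After the toss is known to be tails (commitment): only one branch is left.
ev_commit_pay    = -100
ev_commit_refuse = 0

print(ev_precommit_pay, ev_precommit_refuse, ev_commit_pay, ev_commit_refuse)
```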
Had the coin come up differently, Omega might have explained the secrets of friendly artificial general intelligence. However, he now asks that you murder 15 people.
Omega remains completely trustworthy, if a bit sick.
Ha, I'll re-raise: Had the coin come up differently, Omega would have filled ten Hubble volumes with CEV-output. However, he now asks that you blow up this Hubble volume.
(Not only do you blow up the universe (ending humanity for eternity), you're glad that Omega showed up to offer this transparently excellent deal. Morbid, ne?)
For some reason, raising the stakes in these hypotheticals to the point of actual pain has become reflex for me. I'm not sure if it's to help train my emotions to be able to make the right choices in horrible circumstances, or just my years in the Bardic Conspiracy looking for an outlet.
So imagine yourself in the most inconvenient possible world where Omega is a known feature of the environment and has long been seen to follow through on promises of this type; it does not particularly occur to you or anyone that believing this fact makes you insane.
When I phrase it that way - imagine myself in a world full of other people confronted by similar Omega-induced dilemmas - I suddenly find that I feel substantially less uncomfortable; indicating that some of what I thought was pure ethical constraint is actually social ethical constraint. Still, it may function to the same self-protective effect as ethical constraint.
To add to the comments below, if you're going to take this route, you might as well have already decided that encountering Omega at all is less likely than that you have gone insane.
That may be true, but it's still a dodge. Conditional on not being insane, what's your answer?
Additionally, I don't see why Omega asking you to give it 100 dollars vs 15 human lives necessarily crosses the threshold of "more likely that I'm just a nutbar". I don't expect to talk to Omega anytime soon...
Can you please explain the reasoning behind this? Given all of the restrictions mentioned (no iterations, no possible benefit to this self) I can't see any reason to part with my hard earned cash. My "gut" says "Hell no!" but I'm curious to see if I'm missing something.
There are various intuition pumps to explain the answer.
The simplest is to imagine that a moment from now, Omega walks up to you and says "I'm sorry, I would have given you $10000, except I simulated what would happen if I asked you for $100 and you refused". In that case, you would certainly wish you had been the sort of person to give up the $100.
Which means that right now, with both scenarios equally probable, you should want to be the sort of person who will give up the $100, since if you are that sort of person, there's half a chance you'll get $10000.
If you want to be the sort of person who'll do X given Y, then when Y turns up, you'd better bloody well do X.
If you want to be the sort of person who'll do X given Y, then when Y turns up, you'd better bloody well do X.
Well said. That's a lot of the motivation behind my choice of decision theory in a nutshell.
Thanks, it's good to know I'm on the right track =)
I think this core insight is one of the clearest changes in my thought process since starting to read OB/LW -- I can't imagine myself leaping to "well, I'd hand him $100, of course" a couple years ago.
I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.
I'm saying, go ahead and start by imagining a situation like the one in the problem, except it's all happening in the future -- you don't yet know how the coin will land.
You would want to decide in advance that if the coin came up against you, you would cough up $100.
The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.
So it's a shame that in the problem as stated, you don't get to precommit.
But the fact that you don't get advance knowledge shouldn't change anything. You can just decide for yourself, right now, to follow this simple rule:
If there is an action to which my past self would have precommitted, given perfect knowledge and my current preferences, I will take that action.
By adopting this rule, in any problem in which the opportunity for precommitting would have given you an advantage, you wind up gaining that advantage anyway.
I'm actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by "perfect knowledge". Perfect knowledge would mean I also knew in advance that the coin would come up tails.
I know giving up the $100 is right, I'm just having a hard time figuring out what worlds the agent is summing over, and by what rules.
ETA: I think "if there was a true fact which my past self could have learned, which would have caused him to precommit etc." should do the trick. Gonna have to sleep on that.
ETA2: "What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.
ETA2: "What would you do in situation X?" and "What would you like to pre-commit to doing, should you ever encounter situation X?" should, to a rational agent, be one and the same question.
...and that's an even better way of putting it.
I work on AI. In particular, on decision systems stable under self-modification. Any agent who does not give the $100 in situations like this will self-modify to give $100 in situations like this. I don't spend a whole lot of time thinking about decision theories that are unstable under reflection. QED.
If you need special cases, your decision theory is not consistent under reflection. In other words, it should simply always do the thing that it would precommit to doing, because, as MBlume put it, the decision theory is formulated in such fashion that "What would you precommit to?" and "What will you do?" work out to be one and the same question.
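A toy sketch (mine, not any particular formal decision theory) of how "What would you precommit to?" and "What will you do?" collapse into one question: score whole observation-to-action policies from the prior, then execute the best policy on whatever you actually observe.

```python
from itertools import product

OBSERVATIONS = ("heads", "tails")
ACTIONS = ("pay", "refuse")

def payoff(observation, policy):
    """Payoffs from the post, given the coin result and a full policy."""
    if observation == "tails":
        return -100 if policy["tails"] == "pay" else 0
    # On heads, Omega pays out only if this policy would pay on tails.
    return 10_000 if policy["tails"] == "pay" else 0

def best_policy():
    policies = [dict(zip(OBSERVATIONS, actions))
                for actions in product(ACTIONS, repeat=len(OBSERVATIONS))]
    # Evaluate every policy from the prior (before the coin is tossed).
    return max(policies, key=lambda pol: sum(0.5 * payoff(o, pol) for o in OBSERVATIONS))

print(best_policy()["tails"])  # "pay": the policy chosen in advance hands over the $100
```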
Nope. I don't care what quirks in my neurology do - I don't care what answer the material calculator returns, only the answer to 2 + 2 = ?
This requires, though, that Omega have decided to make the bet in a fashion which exhibited no dependency on its advance knowledge of the coin.
Hi,
My name is Omega. You may have heard of me.
Anyway, I have just tossed a fair coin, and given that the coin came up tails, I'm gonna have to ask each of you to give me $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, if the coin came up heads instead of tails, I'd have given $10000 to each of you -- but only to those who would agree to give me $100 if the coin came up tails.
You know, if Omega is truly doing a full simulation of my cognitive algorithm, then it seems my interactions with him should be dominated by my desire for him to stop it, since he is effectively creating and murdering copies of me.
The decision doesn't need to be read off from a straightforward simulation; it can be, so to speak, an on-demand reconstruction of the outcome from the counterfactual. I believe it should be possible to calculate just your decision, without constructing a morally significant computation. Knowing your decision may be as simple as checking whether you adhere to a certain decision theory.
I guess I'm a bit tired of "God was unable to make the show today so the part of Omniscient being will be played by Omega" puzzles, even if in my mind Omega looks amusingly like the Flying Spaghetti Monster.
Particularly in this case, where Omega is being explicitly dishonest - Omega is claiming either to be sufficiently omniscient to predict my actions, or insufficiently omniscient to predict the result of a 'fair' coin, except that the 'fair' coin is explicitly predetermined to always give the same result . . . except . . .
What's the point of using rationalism to think things through logically if you keep placing yourself into illogical philosophical worlds to test the logic?
I convinced myself to one-box in Newcomb by simply treating it as if the contents of the boxes magically change when I made my decision. Simply draw the decision tree and maximize u-value.
I convinced myself to cooperate in the Prisoner's Dilemma by treating it as if whatever decision I made the other person would magically make too. Simply draw the decision tree and maximize u-value.
It seems that Omega is different because I actually have the information, whereas in the others I don't.
For example, in Newcomb, if we could see the contents of both boxes, then...
I really fail to see why you're all so fascinated by Newcomb-like problems. When you break causality, all logic based on causality stops functioning. If you try to model it mathematically, you will always get an inconsistent model.
There's no need to break causality. You are a being implemented in chaotic wetware. However, there's no reason to think we couldn't have rational agents implemented in much more predictable form, as Python routines for example, so that any being with superior computational power could simply inspect the source and determine what the output would be.
In such a case, Newcomb-like problems would arise, perfectly lawfully, under normal physics.
In fact, Newcomb-like problems fall naturally out of any ability to simulate and predict the actions of other agents. Omega as described is essentially the limit as predictive power goes to infinity.
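A toy model of that point, assuming the agent really is an ordinary deterministic routine that the predictor can simply run (the function names here are illustrative, not from any real library):

```python
def agent():
    # A deterministic one-boxer written as a plain routine.
    return "one-box"

def predictor_fills_boxes(agent_fn):
    prediction = agent_fn()  # simulate the agent to predict its choice
    box_b = 1_000_000 if prediction == "one-box" else 0
    return 1_000, box_b      # transparent box A, opaque box B

box_a, box_b = predictor_fills_boxes(agent)
choice = agent()
winnings = box_b if choice == "one-box" else box_a + box_b
print(choice, winnings)  # one-box 1000000 -- no broken causality, just prediction
```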
If you have two agents trying to precommit not to be blackmailed by each other / precommit not to pay attention to the other's precommitment, then any attempt to take a limit of this Newcomblike problem does depend on how you approach the limit. (I don't know how to solve this problem.)
Rice's theorem says you can't decide nontrivial behavioral properties of every possible algorithm in general. Plenty of particular algorithms can be predictable. If you're running on a classical computer and Omega has a copy of you, you are perfectly predictable.
And all of your choices are just as real as they ever were, see the OB sequence on free will (I think someone referred to it already).
This problem seems uninteresting to me too. Though more realistic Newcomb-like problems are interesting, for there are parts of life where Newcombian reasoning works for real.
I find the problem interesting, so I'll try to explain why I find it interesting.
So there are these blogs called Overcoming Bias and Less Wrong, and the people posting on them seem like very smart people, and they say very reasonable things. They offer to teach how to become rational, in the sense of "winning more often". I want to win more often too, so I read the blogs.
Now a lot of what these people are saying sounds very reasonable, but it's also clear that the people saying these things are much smarter than me; so much so that although their conclusions sound very reasonable, I can't always follow all the arguments or steps used to reach those conclusions. As part of my rationalist training, I try to notice when I can follow the steps to a conclusion, and when I can't, and remember which conclusions I believe in because I fully understand it, and which conclusions I am "tentatively believing in" because someone smart said it, and I'm just taking their word for it for now.
So now Vladim...
The primary reason for resolving Newcomb-like problems is to explore the fundamental limitations of decision theories.
It sounds like you are still confused about free will. See Righting a Wrong Question, Possibility and Could-ness, and Daniel Dennett's lecture here.
Not really - all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need to be a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then it's clearly a lost cause anyway (I can predict with similar precision many behaviours about people based on no more evidence than their behaviour and speech, never mind godlike brain introspection). Do you have no "choice" in deciding to come to work tomorrow, if I predict based on your record that you're 99.99% reliable? Where is the cut-off at which free will gets lost?
I'm very torn on this problem. Every time I think I've got it figured out and start typing out my reasons why, I change my mind, and throw away my 6+ paragraph explanation and start over, arguing the opposite case, only to change my mind again.
I think the problem has to do with strong conflicts between my rational arguments and my intuition. This problem is a much more interesting koan for me than one hand clapping, or tree in the forest.
I think my answer would be "I would have agreed, had you asked me when the coin chances were .5 and .5. Now that they're 1 and 0, I have no reason to agree."
Seriously, why stick with an agreement you never made? Besides, if Omega can predict me this well, he knows how the coin will come up and how I'll react. Why, then, should I try to act otherwise? Somehow, I think I just don't get it.
This problem seems conceptually identical to Kavka's toxin puzzle; we have merely replaced intending to drink the poison/pay $100 with being the sort of person who Omega would predict would do it.
Since, as has been pointed out, one needn't be a perfect predictor for the game to work, I think I'll actually try this on some of my friends.
Whether I give Omega the $100 depends entirely on whether there will be multiple iterations of coin-flipping. If there will be multiple iterations, giving Omega the $100 is indeed winning, just like buying a financial instrument that increases in value is winning.
I know that this is a very old post, but I thought that I should add a link to The Counterfactual Prisoner's Dilemma, which is a thought experiment Cousin_it and I independently came up with to demonstrate why you should care about this dilemma.
The setup is as follows:
Omega, a perfect predictor, flips a coin. If it comes up heads, Omega asks you for $100, then pays you $10,000 if it predicts you would have paid if it had come up tails and you were told it was tails. If it comes up tails, Omega asks you for $100, then pays you $10,000 if it predicts you...
So, is it reasonable to pre-commit to giving the $100 in the counterfactual mugging game? (Pre-commitment is one solution to the Newcomb problem.) On first glance, it seems that a pre-commitment will work.
But now consider "counter-counterfactual mugging". In this game, Omega meets me and scans my brain. If it finds that I've pre-committed to handing over the $s in the counterfactual mugging game, then it empties my bank account. If I haven't pre-committed to doing anything in counterfactual mugging, then it rewards me with $1 million. Damn.
So wha...
If I found myself in this kind of scenario then it would imply that I was very wrong about how I reason about anthropics in an ensemble universe (as with Pascal's mugging or any sort of situation where an agent has enough computing power to take control of that much of my measure such that I find myself in a contrived philosophical experiment). In fact, I would be so surprised to find myself in such a situation that I would question the reasoning that led me to think one boxing was the best course of action in the first place, because somewhere along the w...
Normally, you can assume your thought processes are uncorrelated with what's out there. Newcomb-like problems, however, do have the state of the outside universe correlated with your actual thoughts, and this is what throws people off.
If you are unsure whether the state of the universe is X or Y (say with p = 1/2 for simplicity), and we can choose either option A or B, we can calculate the expected utility of choosing A vs. B by comparing (1/2)u(A,X) + (1/2)u(A,Y) against (1/2)u(B,X) + (1/2)u(B,Y).
In a Newcomb-like problem, where the state of the experiment is actuall...
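To illustrate the contrast being set up here, a sketch with toy utility numbers of my own choosing:

```python
# Payoffs u(option, state); the numbers are illustrative only.
u = {("A", "X"): 10, ("A", "Y"): 0,
     ("B", "X"): 6,  ("B", "Y"): 6}

# Ordinary case: the state is independent of the choice.
ev_A = 0.5 * u[("A", "X")] + 0.5 * u[("A", "Y")]  # 5.0
ev_B = 0.5 * u[("B", "X")] + 0.5 * u[("B", "Y")]  # 6.0

# Newcomb-like case: choosing A means the state is X and choosing B means Y,
# so the relevant comparison is u(A, X) against u(B, Y), not the mixtures.
ev_A_correlated = u[("A", "X")]  # 10
ev_B_correlated = u[("B", "Y")]  # 6
print(ev_A, ev_B, ev_A_correlated, ev_B_correlated)
```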
There is a caveat: if you are an agent who is constructed to live in the world where Omega tossed its coin to come out tails, so that the state space for which your utility function and prior are defined doesn't contain the areas corresponding to the coin coming up heads, you don't need to give up $100. You only give up $100 as a tribute to the part of your morality specified on the counterfactual area of the state space.
I would one-box on Newcomb, and I believe I would give the $100 here as well (assuming I believed Omega).
With Newcomb, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box. However, I have no way of knowing what would fool Omega: indeed, if it is a sufficiently good predictor there may be no such way. Clearly then the way to be "as close as possible" to a one-boxer is to be a one-boxer. A person seeking to optimise their returns will be a person who wants their resp...
I'm way late to this party, but aren't we ignoring something obvious? Such as imperfect knowledge of how likely Omega is to be right about its prediction of what you would do? If you live in a universe where Omega is a known fact and nobody thinks themselves insane when they meet him, well, then it's the degenerate case where you are 100% certain that Omega predicts correctly. If you lived in such a universe presumably you would know it, and everyone in that world would pre-commit to giving Omega $100, just like in ours pizza-deliverers pre-commit to not c...
Under my syntacticist cosmology, which is a kind of Tegmarkian/Almondian crossover (with measure flowing along the seemingly 'backward' causal relations), the answer becomes trivially "yes, give Omega the $100" because counterfactual-me exists. In fact, since this-Omega simulates counterfactual-me and counterfactual-Omega simulates this-me, the (backwards) flow of measure ensures that the subjective probabilities of finding myself in real-me and counterfactual-me must be fairly close together; consequently this remains my decision even in the Al...
This is just the one-shot Prisoner's Dilemma. You being split into two different possible worlds is just like the two prisoners being taken into two different cells.
Therefore, you should give Omega $100 if and only if you would cooperate in the one-shot PD.
I don't see the difficulty. No, you don't win by giving Omega $100. Yes, it would have been a winning bet before the flip if, as you specify, the coin is fair. Your PS, in which you say to "assume that in the overwhelming measure of the MWI worlds it gives the same outcome", contradicts the assertion that the coin is fair, and so you have asked us for an answer to an incoherent question.
is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?
No, it's a clear loss.
The only winning scenario is, "the coin comes down heads and you have an effective commitment to have paid if it came down tails."
By making a binding precommitment, you effectively gamble that the coin will come down heads. If it comes down tails instead, clearly you have lost the gamble. Giving the $100 when you didn't even make the precommitment would just be pointlessly giving away money.
I realise I'm coming to this a little late, but I'm a little unclear about this case. This is my understanding:
When you ask me if I should give Omega the $100, I commit to "yes", because I am the agent who might meet Omega one day, and since right now I am in fact at a time before the coin has been flipped, by the usual expected value calculations the rational choice is to decide to do so.
So does that mean that if I commit now (eg: by giving myself a monetary incentive to give the $100), and my friend John meets Omega tomorrow who has flipped the coi...
Well, this comes up different ways under different interpretations. If there is a chance that I am being simulated, that is this is part of his determining my choice, then I give him $100. If the coin is quantum, that is there will exist other mes getting the money, I give him $100. If there is a chance that I will encounter similar situations again, I give him $100. If I were informed of the deal beforehand, I give him $100. Given that I am not simulated, given that the coin is deterministic, and given that I will never again encounter Omega, I don't thin...
Suppose Omega gives you the same choice, but says that if a head had come up, it would have killed you, but only if you {would have refused|will refuse} to give it your lousy $100 {if the coin had come up heads|given that the coin has come up heads}. Not sure what the correct tense is, here.
I believe that I would keep the $100 in your problem, but give it up in mine.
ETA: Can you clarify your postscript? Presumably you don't want the knowledge about the distribution of coin-flip states across future Everett branches to be available for the purposes of the expected utility calculation?
I think that this is a critical point, worthy of a blog post of its own.
Impossible possible worlds are a confusion.
The inclination to trade with fiction seems like a serious problem within this community.
Ok, so there's a good chance I'm just being an idiot here, but I feel like a multiple-worlds kind of interpretation serves well here. If, as you say, "the coin is deterministic, [and] in the overwhelming measure of the MWI worlds it gives the same outcome," then I don't believe the coin is fair. And if the coin isn't fair, then of course I'm not giving Omega any money. If, on the other hand, the coin is fair, and so I have reason to believe that in roughly half of the worlds the coin landed on the other side and Omega posed the opposite question, then by giving Omega the $100 I'm giving the me in those other worlds $10000, and I'm perfectly happy to do that.
Not sure how to delete, but this was meant to be a reply.
I think that what really does my head in about this problem is, although I may right now be motivated to make a commitment, because of the hope of winning the 10K, nonetheless my commitment cannot rely on that motivation, because when it comes to the crunch, that possibility has evaporated and the associated motivation is gone. I can only make an effective commitment if I have something more persistent - like the suggested $1000 contract with a third party. Without that, I cannot trust my future self to follow through, because the reasons that I would curr...
Does this particular thought experiment really have any practical application?
I can think of plenty of similar scenarios that are genuinely useful and worth considering, but all of them can be expressed with much simpler and more intuitive scenarios - eg when the offer will/might be repeated, or when you get to choose in advance whether to flip the coin and win 10000/lose 100. But with the scenario as stated - what real phenomenon is there that would reward you for being willing to counterfactually take an otherwise-detrimental action for no reason other than qualifying for the counterfactual reward? Even if we decide the best course of action in this contrived scenario - therefore what?
The only mechanism I know of by which Omega can accurately predict me without introducing paradoxes is running something like a simulation, as others have suggested. But I really, truly, only care about the universe I happen to know about, and for the life of me, I can't figure out why I should care about any other. So even if the universe I perceive really is just simulated so that Omega can figure out what I would do in this situation, I don't understand why I should care about "my" utility in some other universe. So: two-box, keep my $100.
Edit: I should add that my not caring about other universes is conditional on my having no reason to believe they exist.
I have one minor question about this problem: would I be allowed to, say, offer Omega $50 instead of the $100 it asked for, in exchange for $5000 and the promise that, had the coin landed heads, it would give me $5000 and ask me for $50, which he (going to refer to all sentients as he, that way I don't have to waste time figuring out whether the person I'm talking about is he, she, or it) would know to do, since Omega would simulate the me from when the coin landed tails, and thus the simulated me would offer him this proposition. Which s...
I am unable to see how this boils down to anything but a moral problem (and therefore with no objective solution).
Compare this to a simple lost bet. Omega tells you about the deal, you agree, and then he flips a coin, which comes out tails. Why exactly would you pay the $100 in this example?
Because someone will punish/ostracise me if I renege (or other external consequences)? Then in the CM case all that matters is what the consequences are for your payment/refusal.
Because I have an absolute/irrational/moral desire to hold to my word? Then the only questio...
Uniqueness raises all sorts of problems for decision theory, because expected utility implicitly assumes many trials. This may just be another example of that general phenomenon.
Omega is also known to be absolutely honest and trustworthy, no word-twisting, so the facts are really as it says: it really tossed a coin and really would've given you $10000.
How do I know that? I would assign a lower prior probability to that than to me waking up tomorrow with a blue tentacle instead of my right arm; so, in such a situation, I would just believe Omega is bullshitting me.
See Least convenient possible world. These technical difficulties are irrelevant to the problem itself.
Precommitting should be, as someone already said, signing a paper with a third party agreeing to give them $1000 in case you fail to give the $100 to Omega. Precommitment means you have no other option. You can't say that you both precommitted to give the $100 AND refused to do it when presented with the case.
Which means, if Omega presents you with the scenario before the coin toss, you precommit (by signing the contract with the third party). If Omega presents you with the scenario after the coin toss AND also tells you it has already come up tails - you ...
How do you verify that "Omega" really is Omega and not a drunk in a bar? I can't think of a way of doing it - so it sounds like a fraud to me.
Why is Omega asking me, when it already knows my answer? So what happens to Omega/the universe when I say no?
If he asks me the question, I have already answered the question, so I don't need to post this comment. I acted as I did. But I didn't act as I did (Omega hasn't shown up in my part of the universe), so we all know my answer.
Wow, this reddit software is pretty neat for a blog.
I'd love to see a post on the best introductory books to logic, and also epistemology. Epistemology, especially, seems to lack good introductory texts.
Related to: Can Counterfactuals Be True?, Newcomb's Problem and Regret of Rationality.
Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.
Omega can predict your decision in the case where it asks you to give it $100, even if that hasn't actually happened; it can compute the counterfactual truth. Omega is also known to be absolutely honest and trustworthy, no word-twisting, so the facts are really as it says: it really tossed a coin and really would've given you $10000.
From your current position, it seems absurd to give up your $100. Nothing good happens if you do that, the coin has already landed tails up, you'll never see the counterfactual $10000. But look at this situation from your point of view before Omega tossed the coin. There, you have two possible branches ahead of you, of equal probability. On one branch, you are asked to part with $100, and on the other branch, you are conditionally given $10000. If you decide to keep $100, the expected gain from this decision is $0: there is no exchange of money, you don't give Omega anything on the first branch, and as a result Omega doesn't give you anything on the second branch. If you decide to give $100 on the first branch, then Omega gives you $10000 on the second branch, so the expected gain from this decision is
-$100 * 0.5 + $10000 * 0.5 = $4950
So, this straightforward calculation tells that you ought to give up your $100. It looks like a good idea before the coin toss, but it starts to look like a bad idea after the coin came up tails. Had you known about the deal in advance, one possible course of action would be to set up a precommitment. You contract a third party, agreeing that you'll lose $1000 if you don't give $100 to Omega, in case it asks for that. In this case, you leave yourself no other choice.
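A minimal sketch of how such a contract reshapes the payoffs once the coin has come up tails (the $1000 penalty is the figure from the paragraph above):

```python
PENALTY = 1_000  # owed to the third party if you fail to pay Omega

def payoff_after_tails(pay_omega, contract_signed):
    loss = -100 if pay_omega else 0
    if contract_signed and not pay_omega:
        loss -= PENALTY
    return loss

print(payoff_after_tails(True, True))    # -100: with the contract, paying dominates
print(payoff_after_tails(False, True))   # -1000: reneging is worse than paying
print(payoff_after_tails(False, False))  # 0: without the contract, keeping the $100 wins
```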
But in this game, explicit precommitment is not an option: you didn't know about Omega's little game until the coin was already tossed and the outcome of the toss was given to you. The only thing that stands between Omega and your $100 is your ritual of cognition. And so I ask you all: is the decision to give up $100 when you have no real benefit from it, only counterfactual benefit, an example of winning?
P.S. Let's assume that the coin is deterministic, that in the overwhelming measure of the MWI worlds it gives the same outcome. You don't care about a fraction that sees a different result, in all reality the result is that Omega won't even consider giving you $10000, it only asks for your $100. Also, the deal is unique, you won't see Omega ever again.