All of brianm's Comments + Replies

brianm00

I would expect the result to be a more accurate estimate of the chance of success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.

I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined, through wishful thinking or susceptibility to hype, to inflate our likelihood estimate above the conclusion we'd reach if we invested the time to consider the issue in more depth, ... (read more)

brianm60

The Humans are Special trope here gives a lot of examples of this. Reputedly, it was a premise that John Campbell, editor of Astounding Science Fiction, was very fond of, accounting for its prevalence.

brianm30

and as such makes bullets an appropriate response to such acts, whereas they were not before.

Ah, I think I've misunderstood you - I thought you were talking about the initiating act (ie. that it was as appropriate to initiate shooting someone as to insult them), whereas you're talking about the response to the act: that bullets are an appropriate response to bullets, therefore if interchangeable, they're an appropriate response to speech too. However, I don't think you can take the first part of that as given - many (including me) would disagree that bu... (read more)

0CuSithBell
I'm only interjecting - if there is a misunderstanding, it's probably with jtk3. For my part I think the positions being argued are much clearer now, thank you!
brianm40

If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution.

That's clearly not the case. If they're interchangeable, it merely means they'd be equally appropriate, but that doesn't say anything about their absolute appropriateness level. If neither is an appropriate response, that's just as interchangeable as both being appropriate - and it's clearly that more restrictive route being advocated here (ie. moving such speech into the bullet category, rather than moving the bullet category into the region of ... (read more)

3CuSithBell
I don't understand this... the notion of a "more restrictive route" doesn't seem to make much of a difference to the objection - the suggested move involves placing a certain type of speech act into the realm of "bullets", and as such makes bullets an appropriate response to such acts, whereas they were not before. Is that right? Edit: That is, if speech B is now equivalent to shooting someone, it's not a case of "harmless speech A can now be responded to with bullets or B," but of "speech B can now be responded to with bullets."
brianm60

Is that justified though? Suppose a subset of the British go about demanding restrictions on salmon image production. Would that justify you going out of your way to promote the production of such images, making them more likely to be seen by the subset not making such demands?

0ANTIcarrot
We don't have to suppose. This has happened in recent history. When a small group of British people turn hostile and violent for a specific cause, the media services and the population decry their actions, and the British government invariably arrests them. Thanks to football hooligans, riots, the IRA, 7/7, and its nanny-state system of CCTV cameras, the UK is actually quite good at this sort of thing. In comparison, the Islamic world tends to take a 'boys will be boys' attitude to this kind of thing. While I appreciate the utility of avoiding words like 'blame' and 'fault' it's kinda hard when the 'victims' are not only indirectly supporting terrorism but actively egging them on.
2Desrtopa
That might depend on whether it discouraged the salmon extremists from making such demands.
5khafra
The above looks like a standard least convenient possible world adjustment; and the original post was already trying for a scenario like that, so I'm not sure why it was downvoted. The question of why we experience that visceral revulsion at attempted control of our private thoughts and expressions is a fascinating one. I could try to attack it with introspection, but I'd like to see some experiments if anybody knows of relevant studies.
brianm90

But the argument here is going the other way - less permissive, not more. The equivalent analogy would be:

To hold that speech is interchangeable with violence is to hold that certain forms of speech are no more an appropriate answer than a bullet.

The issue at stake is why. Why is speech OK, but a punch not? Presumably because one causes physical pain and the other does not. So, in Yvain's salmon situation, when such speech does now cause pain, should we treat it the same as violence or differently? Why or why not? What then about other forms of mental to... (read more)

4jtk3
No, I'm defending a bright line which Yvain would obliterate. If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution. So to which argument would you prefer a bullet? The Brits are feeling the pain of a real physical assault, under the skin. That's not mental torment, it's electrodes. A crucial difference is that we can change our minds about what offends us but we cannot choose not to respond to electrodes in the brain and we cannot choose not to bleed when pierced by a bullet. It is not my comprehensive answer but I think it is a sufficient answer. They are not interchangeable. Many words would have hurt me deeply 15 years ago but hardly any can now because I've changed my mind about them. It is within my power to feel zero pain from anything you might say. People really can change their minds to take less offense if they want to. They can't choose to not be harmed by a punch or a bullet. Different.
brianm20

Newcomb's scenario has the added wrinkle that event B also causes event A

I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decis... (read more)

brianm10

I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.

0Tyrrell_McAllister
It's commonplace for an event A to cause an event B, with both sharing a third antecedent cause C. (The bullet's firing causes the prisoner to die, but the finger's pulling of the trigger causes both.) Newcomb's scenario has the added wrinkle that event B also causes event A. Nonetheless, both still have the antecedent cause C that you describe. All of this only makes sense under the right analysis of causation. In this case, the right analysis is a manipulationist one, such as that given by Judea Pearl.
brianm10

It's not actually putting it forth as a conclusion though - it's just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.

brianm20

Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)

brianm10

The doomsday argument makes the assumptions that:

  1. We are randomly selected from all the observers who will ever exist.
  2. The number of observers increases exponentially, such that those alive in any particular generation make up about 2/3 of all who have ever lived up to that point
  3. They are wiped out by a catastrophic event, rather than slowly dwindling or declining in some other way

(Now those assumptions are a bit dubious - things change if, for instance, we develop life extension tech or otherwise increase the rate of growth, and a higher than 2/3 proportion will live in future generations (eg. if the next generation is... (read more)
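
As a rough illustration of how assumptions 1 and 2 combine, here is a minimal sketch (the numbers are made up; a growth factor of 3 is just what makes each generation roughly 2/3 of everyone who has lived so far):

    # Hypothetical numbers: population triples each generation, so each
    # generation holds ~2/3 of all observers who have ever lived.
    def observers_per_generation(n_generations, growth=3.0, first_gen=1.0):
        return [first_gen * growth**i for i in range(n_generations)]

    def p_birth_rank_given_doom(my_generation, doom_generation):
        # P(being born in my_generation | history ends at doom_generation),
        # treating ourselves as a uniform random draw from all observers ever.
        gens = observers_per_generation(doom_generation + 1)
        return gens[my_generation] / sum(gens)

    # Equal priors on "doom right after us" vs "doom five generations later":
    me = 10
    early = p_birth_rank_given_doom(me, doom_generation=me)
    late = p_birth_rank_given_doom(me, doom_generation=me + 5)
    print(early / (early + late))   # ~0.996 - our birth rank strongly favours "doom soon"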

1SilasBarta
Actually, it requires that we be selected from a small subset of these observers, such as "humans" or "conscious entities" or, perhaps most appropriate, "beings capable of reflecting on this problem". Well, for the numbers to work out, there would have to be a sharp drop-off before the slow-dwindling, which is roughly as worrisome as a "pure doomsday".
1Stuart_Armstrong
Then what about introducing a C' between C and D: You are told the initial rules. Then, later you are told about the killing, and then, even later, that the killing had already happened and that you were spared. What would you say the odds were there?
brianm30

The various Newcomb situations have fairly direct analogues in everyday things like ultimatum situations, or promise keeping. They alter it to reduce the number of variables, so the "certainty of trusting the other party" dial gets turned up to 100% for Omega, "expectation of repeat" to 0, etc., in order to evaluate how to think of such problems when we cut out certain factors.

That said, I'm not actually sure what this question has to do with Newcomb's paradox / counterfactual mugging, or what exactly is interesting about it. If it's just ... (read more)

brianm290

I think the problem is that people tend to conflate intention with effect, often with dire results (eg. "Banning drugs == reducing harm from drug use"). Thus when they see a mechanism in place that seems intended to penalise guessing, they assume that it's the same as actually penalising guessing, and that anything that shows otherwise must be a mistake.

This may explain the "moral" objection of the one student: The test attempts to penalise guessing, so working against this intention is "cheating" by exploiting a flaw in the t... (read more)
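
To make the intention/effect gap concrete, here is a minimal sketch, assuming a scoring rule like the old SAT's (+1 for a correct answer, -1/(k-1) for a wrong one, 0 for a blank) - illustrative numbers, not taken from the test under discussion:

    # Expected score from guessing uniformly among the options left after
    # eliminating n_eliminated of the k choices.
    def expected_score_of_guessing(k, n_eliminated=0):
        remaining = k - n_eliminated
        p_correct = 1.0 / remaining
        return p_correct * 1.0 + (1 - p_correct) * (-1.0 / (k - 1))

    print(expected_score_of_guessing(5))                  # 0.0    - blind guessing is merely neutral
    print(expected_score_of_guessing(5, n_eliminated=1))  # 0.0625 - ruling out one option makes guessing pay

So the mechanism "penalises guessing" only in the sense of making blind guessing worthless in expectation; it does not actually penalise an informed guess.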

brianm30

I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.

Reality is messy, and while we have to deal with it eventually, it's useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily s... (read more)

0thomblake
You seem to misunderstand what models are for. A model is not the actual thing - thus, we do not say, "Why did you build a scale model of the solar system - we have the actual solar system for that!". Instead, models always leave something out - they abstract away the details we don't think are important to simplify thinking about the problem. Other than that, I agree.
4Cosmos
Models are also dangerously seductive. You're gaining precision at the expense of correspondence to reality, which can only be a temporary trade off if you're ever going to put your knowledge to work. I most strongly object to modeling as used in economics. Modeling is no longer about getting traction on difficult concepts - building these stylized models has become a goal in and of itself, and mathematical formalization is almost a prerequisite for getting published in a major journal.
brianm10

Ah sorry, I'd thought this was in relation to the source-available situation. I think this may still be wrong, however. Consider the pair of programs below:

A:
    // Always defect.
    return Strategy.Defect;

B:
    // Cooperate outright half the time; otherwise keep simulating the
    // opponent playing against us, and cooperate only if that simulation cooperates.
    if(random(0, 1.0) < 0.5) { return Strategy.Cooperate; }

    while(true)
    {
        if(simulate(other, self) == Strategy.Cooperate) { return Strategy.Cooperate; }
    }

simulate(A,A) terminates immediately. simulate(B,B) eventually terminates. simulate(B,A) will not terminate 50% of the time.

1gwern
As a functional programmer, I can't help but notice that invisible global state like random destroys the referential transparency of these programs and that they cease to be functions. If the random number generator's seed were passed in as an argument, restoring purity, would termination be restored? (Offhand, I don't think it would. Even if program A used each digit to decide whether to defect or cooperate, program B could just follow the same strategy but choose the reverse and simulate one more step.)
0cousin_it
Yes, you're right. Thanks!
brianm00

I don't think this holds. It's clearly possible to construct code like:

    // Cooperate with an exact copy of ourselves; otherwise mirror whatever
    // the other program would do when simulated against our own source.
    if(other_src == my_sourcecode) { return Strategy.COOPERATE; }
    if(simulate(other_src, my_sourcecode) == Strategy.COOPERATE)
    {
        return Strategy.COOPERATE;
    }
    else
    {
        return Strategy.DEFECT;
    }

B is similar, with slightly different logic in the second part (even a comment difference would suffice).

simulate(A,A) and simulate(B,B) clearly terminate, but simulate(A,B) still calls simulate(B,A) which calls simulate(A,B) ...

0cousin_it
No, reread the post - in the second scenario programs can't read or compare each other's source code. You're given two ObjectCode instances that are totally opaque except you can pass them to the simulator. If you still succeed in constructing a counterexample for my hypothesis, do let me know.
brianm60

Type 3 is just impossible.

No - it just means it can't be perfect. A scanner that works 99.9999999% of the time is effectively indistinguishable from a 100% one for the purpose of the problem. One that is 100% except in the presence of recursion is completely identical if we can't construct such a scanner.

My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?

I would one-box, but I'd do so regardless of the method being used, unless I was confident I could bl... (read more)

brianm70

Aren't these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:

3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).

5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know he could be reading the answers we gave on this blog post, and is just really good at guessing who will stic... (read more)

1cousin_it
3b) Our ignorance doesn't change the fact that, if the scanner is in principle repeatable, reality contains a contradiction. Type 3 is just impossible. 5) If I were in this situation, I'd assume a prior over possible Omegas that gave large weight to types 1 and 2, which means I would one-box. My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?
brianm20

It doesn't seem at all sensible to me that the principle of "acting as one would formerly have liked to have precommitted to acting" should have unbounded utility.

Mostly agreed, though I'd quibble that it does have unbounded utility, but that I probably don't have unbounded capability to enact the strategy. If I were capable of (cheaply) compelling my future self to murder in situations where it would be a general advantage to precommit, I would.

brianm50

From my perspective now, I expect the reality to be the winning case 50% of the time because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. In the possible futures where such an event could happen, 50% of the time my strategy would have paid off to a greater degree than it would lose the other 50% of the time. If Omega did not toss a fair coin, then the situation is different, and my choice would be too.

There is no value in being the kind of person who globally optimizes because of the expectation to win

... (read more)
brianm20

Rationality can be life and death, but that applies to collective and institutional decisions just as much as to our individual ones. Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make. Investment in improving my individual rationality is more valuable purely due to self-interest - we may invest more in providing a 1% improvement to our own lives than we do in reducing collective decision-making mistakes that cost thousands of lives a year. But survival isn't the only goal we have! Even if it were, there are good reasons to put more emphasis on collective rational decision making - the decisions of others can also affect us.

5patrissimo
Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make. And you have far less impact on them. None, in most cases. When it comes to the transformation of effort applied to impact on your life, developing individual skills has vastly more effect - by orders of magnitude, I would say.
brianm50

I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.

8Paul Crowley
But because people aren't so invested in it, they mostly won't work so hard to rationalise it; mostly people who are really trying to be rational will simply drop it, and you're left with a fairly flabby opposition. Whereas lots of smart people who really wanted to be clear-thinking have fought to hang onto religion, and built huge castles of error to defend it.
brianm20

It all depends on how the hack is administered. If future-me does think rationally, he will indeed come to the conclusion that he should not pay. Any brain-hack that will actually be successful must then be tied to a superseding rational decision or to something other than rationality. If not tied to rationality, it needs to be a hardcoded response, immediately implemented, rather than one that is thought about.

There are obvious ways to set up a superseding condition: put $101 in escrow, hire an assassin to kill you if you renege, but obviously the cos... (read more)

2topynate
This is precisely my reasoning too. It doesn't seem at all sensible to me that the principle of "acting as one would formerly have liked to have precommitted to acting" should have unbounded utility. ETA: When you say: Now this seems a very good point to me indeed. If we have evolved machinery present in our brains that predictably and unavoidably makes us feel good about following through on a threat and bad about not doing so - and I think that we do have that machinery - then this comes close to resolving the problem. But the point about such a mechanism is that it is tuned to have a limited effect - an effect that I am pretty sure would be insufficient to cause me to murder 15 people in the vast majority of circumstances.
brianm20

Yes, exactly. I think this post by MBlume gives the best description of the most general such hack needed:

If there is an action to which my past self would have precommited, given perfect knowledge, and my current preferences, I will take that action.

By adopting and sticking to such a strategy, I will on average come out ahead in a wide variety of Newcomblike situations. Obviously the actual benefit of such a hack is marginal, given the unlikeliness of an Omega-like being appearing, and me believing it. Since I've already invested the effort through... (read more)

3topynate
Definitely. Here lies my problem. I would like to adopt such a strategy (or a better one if any exists), and not alter my strategy when I actually encounter a Newcomblike situation. Now in the original Newcomb problem, I have no reason to do so: if I alter my strategy so as to two-box, then I will end up with less money (although I would have difficulties proving this in the formalism I use in the article). But in the mugging problem, altering my strategy to "keep $100 in this instance only" will, in an (Omega appears, coin is tails) state, net me more money. Therefore I believe that keeping to my strategy must have intrinsic value to me, greater than that of the $100 I would lose, in order for me to keep it. Now I can answer your question about how the MAD brain-hack and the mugging brain-hack are related. In the MAD situation, the institutions actions are "hardcoded" to occur. In the case of the mugging brain-hack, this would count as, say, wiring a device to one's brain that takes over in Omega situations. This may well be possible in some situations, but I wanted to deal with the harder problem of how to fashion the brain that, on learning it is in a "tails" state, does not then want to remove such a hack. Now if I expect to be faced with many Omega mugging problems in the future, then a glimmer of hope appears; although "keep $100 in this instance only" may then seem to be an improved strategy, I know that this conclusion must in fact be incorrect, as whatever process I use to arrive at it is, if allowed to operate, highly likely to lose money for me in the future. In other words, this makes the problem more similar to Newcomb's problem: in the states of the world in which I make the modification, I lose money <-> in the states of the world in which I two-box, I make less money. But the problem as posed involves an Omega turning up and convincing you that this problem is the last Newcomblike problem you will ever face. ETA: In case it wasn't clear, if I ass
brianm20

If you think that through and decide that way, then your precommitting method didn't work. The idea is that you must somehow now prevent your future self from behaving rationally in that situation - if they do, they will perform exactly the thought process you describe. The method of doing so, whether making a public promise (and valuing your spoken word more than $100), hiring a hitman to kill you if you renege or just having the capability of reliably convincing yourself to do so (effectively valuing keeping faith with your self-promise more than $100)... (read more)

brianm10

Yes - it is effectively the organisational level of such a brain hack (though it would be advantageous if the officers were performing such a hack on their own brains, rather than being irrational in general - rationality in other situations is a valuable property in those with their fingers on the button.)

In the MAD case, it is deliberately arranged that retaliation is immediate and automatic

Isn't that exactly the same as the desired effect of your brain-hack in the mugging situation? Instead of removing the ability to not retaliate, we want to remov... (read more)

2topynate
OK, so to clarify, the problem you're considering is the one where, with no preparation on your part, Omega appears and announces tails? EDIT: Oops. Clearly you don't mean that. Do you want me to imagine a general hack we can make that increases our expected utility conditional on Omega appearing, but that we can profitably make even without having proof or prior evidence of Omega's existence? EDIT 2: I do want to answer your question "Isn't that exactly the same as the desired effect of your brain-hack in the mugging situation?", but I'd rather wait on your reply to mine before I formulate it.
brianm20

But that fooling can only go so far. The better your opponent is at testing your irrational mask, the higher the risk of them spotting a bluff, and thus the narrower the gap between acting irrational and being irrational. Only by being irrational can you be sure they won't spot the lie.

Beyond a certain payoff ratio, the risk from being caught out lying is bigger than the chance of having to carry through. For that reason, you end up actually appointing officers who will actually carry through - even to the point of blind testing them with simulated te... (read more)

2topynate
If I can take this back to the "agents maximising their utility" interpretation: this is then a genuine example of a brain hack, the brain in this case being the institutional decision structure of a Cold War government (let's say the Soviets). Having decided that only by massively retaliating in the possible world where America has attacked is there a win, and having realised that as currently constituted the institution would not retaliate under those circumstances, the institution modified itself so that it would retaliate under those circumstances. I find it interesting that it would have to use irrational agents (the retaliatory officers) as part of its decision structure in order to achieve this. This points to another difference between Omega mugging and MAD: whereas in the former, it's assumed you have the chance to modify yourself in between Omega appearing and your making the decision, in the MAD case, it is deliberately arranged that retaliation is immediate and automatic (corresponding to removing the ability not to retaliate from the Soviet command structure).
brianm30

That would seem to be a very easy thing for them to test. Unless we keep committing atrocities every now and again to fool them, they're going to work out that it's false. Even if they do believe us (or it's true), that would itself be a good argument why our leaders would want to start the war - leading them to the conclusion that they should strike first to get the advantage, maximising their chances.

It would seem better to convince them in some way that doesn't require us to pay such a cost if possible: and to convince the enemy that we're generally rational, reasonable people except in such circumstances where they attack us.

3topynate
Many countries involved in protracted disputes do commit atrocities against third parties every now and again; perhaps not for this reason, though. The problem is that "generally rational, reasonable people" will generally remain so even if attacked. It's much easier to convince an enemy that you are irrational, to some extent. If you can hide your level of rationality, then in a game like MAD you increase your expected score and reduce your opponent's by reducing the information available to them. One difference between MAD and the Omega mugging is that Omega is defined so as to make any such concealment useless. ETA: This (short and very good) paper by Yamin Htun discusses the kind of irrationality I mean. Quote: Substitute "anti-altruistic" for "altruistic" and this is what I was aiming at.
brianm30

I don't think that's true. I mentioned one real-world case that is very close to the hypothesised game in the other post: the Mutually Assured Destruction policy, or ultimatums in general.

First note that Omega's perfection as a predictor is not necessary. With an appropriate payoff matrix even a 50.1% accurate Omega doesn't change the optimal strategy. (One proviso on this is that the method of prediction must be such that it is non-spoofable. For example, I could perhaps play Omega with a 90% success rate, but knowing that I don't have access to brai... (read more)
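
To make that concrete, here is a minimal sketch assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 always in the transparent box - the comment itself doesn't spell out a matrix):

    # Expected values as a function of Omega's accuracy p.
    def ev_one_box(p):
        return p * 1_000_000                      # opaque box is full iff the prediction was right

    def ev_two_box(p):
        return p * 1_000 + (1 - p) * 1_001_000    # opaque box is full iff the prediction was wrong

    for p in (0.5, 0.5005, 0.501, 0.9):
        print(p, ev_one_box(p), ev_two_box(p))
    # One-boxing pulls ahead once p > 1_001_000 / 2_000_000 = 0.5005,
    # so even a 50.1%-accurate Omega is enough to favour one-boxing.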

3topynate
It follows that you should convince an enemy you actually find killing innocent civilians pleasurable, and are looking for an excuse to do so.
brianm50

Then take my bet situation. I announce your attendance, and cut you in with a $25 stake in attendance. I don't think it would be unusual to find someone who would indeed appear 99.99% of the time - does that mean that person has no free will?

People are highly, though not perfectly, predictable in a large number of situations. Revealing knowledge about the prediction complicates things by adding feedback to the system, but there are lots of cases where it still doesn't change matters much (or even increases predictability). There are obviously some s... (read more)

brianm40

To make that claim, you do need to first establish that he would accept a bet of 15 lives vs some reward in the first place, which I think is what he is claiming he would not do. There's a difference between making a bet and then reneging, and not accepting the bet at all. If you would not commit murder to save a million lives in the first place, then the refusal is for a different reason than just the fact that the stakes are raised.

brianm40

At that point, it's no longer a precommittal - it's how you face the consequences of your decision whether to precommit or not.
Note that the hypothetical loss case presented in the post is not in fact the decision point - that point is when you first consider the matter, which is exactly what you are doing right now. If you would really change your answer after considering the matter, then having now done so, have you changed it?

If you want to obtain the advantage of someone who makes such a precommittal (and sticks to it), you must be someone who would d... (read more)

brianm40

The problem only asks about what you would do in the failure case, and I think this obscures the fact that the relevant decision point is right now. If you would refuse to pay, that means that you are the type of person who would not have won had the coin flip turned out differently, either because you haven't considered the matter (and luckily turn out to be in the situation where your choice worked out better), or because you would renege on such a commitment when it occurred in reality.

However, at this point the coin hasn't yet been flipped. The globally... (read more)

4Vladimir_Nesov
What if there is no "on average", if the choice to give away the $100 is the only choice you are given in your life? There is no value in being the kind of person who globally optimizes because of the expectation to win on average. You only make this choice because it's what you are, not because you expect the reality on average to be the way you want it to be.
brianm80

Sure - all bets are off if you aren't absolutely sure Omega is trustworthy.

I think this is a large part of the reason why the intuitive answer we jump to is rejection. Being told we believe a being making such extraordinary claims is different from actually believing them (especially when the claims may have unpleasant implications for our beliefs about ourselves), so we have a tendency to consider the problem with the implicit doubt we have for everyday interactions lurking in our minds.

brianm80

That level of precommitting is only necessary if you are unable to trust yourself to carry through with a self-imposed precommitment. If you are capable of this, you can decide now to act irrationally in certain future decisions in order to benefit to a greater degree than someone who can't. If the temptation to go back on your self-promise is too great in the failure case, then you would have lost in the win case - you are simply a fortunate loser who found out the flaw in his promise in the case where being flawed was beneficial. It doesn't change the... (read more)

4Lightwave
Okay, I agree that this level of precomitting is not necessary. But if the deal is really a one-time offer, then, when presented with the case of the coin already having come up tails, you can no longer ever benefit from being the sort of person who would precommit. Since you will never again be presented with a newcomb-like scenario, then you will have no benefit from being the precommiting type. Therefore you shouldn't give the $100. If, on the other hand, you still expect that you can encounter some other Omega-like thing which will present you with such a scenario, doesn't this make the deal repeatable, which is not how the question was formulated?
brianm70

Yes, then, following the utility function you specified, I would gladly risk $100 for an even chance at $10000. Since Omega's omniscient, I'd be honest about it, too, and cough up the money if I lost.

If it's rational to do this when Omega asks you in advance, isn't it also rational to make such a commitment right now? Whether you make the commitment in response to Omega's notification, or on a whim when considering the thought experiment in response to a blog post, makes no difference to the payoff. If you now commit to an "if this exact situation c... (read more)
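
For concreteness, a small sketch of that payoff using the thread's stated stakes (lose $100 on tails, gain $10,000 on heads, fair coin, and Omega rewarding only those it predicts would pay - all as assumed in the discussion above):

    # Expected value of the policy, evaluated before any coin is flipped.
    def ev(policy_pays_on_tails):
        heads = 10_000 if policy_pays_on_tails else 0
        tails = -100 if policy_pays_on_tails else 0
        return 0.5 * heads + 0.5 * tails

    print(ev(True))    # 4950.0 - committing to pay is worth $4,950 in expectation
    print(ev(False))   # 0.0

The timing of the commitment doesn't enter the calculation, which is the point: deciding now, in response to a blog post, buys the same expected payoff as deciding when Omega first announces the game.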

-1findis
Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x)*$4950. If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x)*$4950 - $100, so I would not pay unless I thought there was more than a 2% chance this would happen again.
4AndySimpson
No, I will not precommit to giving up my $100 for cases where Omega demands the money after the coin flip has occurred. There is no incentive to precommit in those cases, because the outcome is already against me and there's not a chance that it "would" go in my favour.
2thomblake
Maybe in thought-experiment-world. But if there's a significant chance that you'll misidentify a con man as Omega, then this tendency makes you lose on average.
1[anonymous]
Brianm understands reflective consistency!
brianm90

Chances are I can predict such a response too, and so won't tell you of my prediction (or will tell you in such a way that you will be more likely to attend: eg. "I've a $50 bet you'll attend tomorrow. Be there and I'll split it 50:50"). It doesn't change the fact that in this particular instance I can foretell the future with a high degree of accuracy. Why then would it violate free will if Omega could predict your actions in this different situation (one where he's also able to predict the effects of telling you) to a similar precision?

2Roko
Because that's pretty much our intuitive definition of free will; that it is not possible for someone to predict your actions, announce it publicly, and still be correct. If you disagree, we are disagreeing about the intuitive definition of "free will" that most people carry around in their heads. At least admit that most people would be unsurprised if a person predicted that they would (e.g.) brush their teeth in the morning (without telling them in advance that it had predicted that), versus predicting that they would knock a vase over, and then as a result of that prediction, the vase actually getting knocked over.
brianm30

I would one-box on Newcomb's problem, and I believe I would give the $100 here as well (assuming I believed Omega).

With Newcomb's problem, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box. However, I have no way of knowing what would fool Omega: indeed if it is a sufficiently good predictor there may be no such way. Clearly then the way to be "as close as possible" to a one-boxer is to be a one-boxer. A person seeking to optimise their returns will be a person who wants their resp... (read more)

5Nebu
Well, the other way to look at it is "What action leads me to win?" in the Newcomb problem, one-boxing wins, so you and I are in agreement there. But in this problem, not-giving-away-$100 wins. Sure, I want to be the "type of person who one boxes", but why do I want to be that person? Because I want to win. Being that type of person in this problem actually makes you lose. The problem states that this is a one-shot bet, and that after you do or don't give Omega the $100, he flies away from this galaxy and will never interact with you again. So why give him the $100? It won't make you win in the long term.
brianm110

Not really - all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need to be a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then it's clearly a lost cause anyway (I can predict many behaviours of people with similar precision based on no more evidence than their behaviour and speech, never mind godlike brain introspection). Do you have no "choice" in deciding to come to work tomorrow, if I predict based on your record that you're 99.99% reliable? Where is the cut-off at which free will gets lost?

5Roko
Humans are subtle beasts. If you tell me that you have predicted that I will go to work based upon my 99.99% attendance record, the probability that I will go to work drops dramatically upon me receiving that information, because there is a good chance that I'll not go just to be awkward. This option of "taking your prediction into account, I'll do the opposite to be awkward" is why it feels like you have free will.