The Humans Are Special trope here gives a lot of examples of this. Reputedly, it was a premise that John W. Campbell, editor of Astounding Science Fiction, was very fond of, which accounts for its prevalence.
and as such makes bullets an appropriate response to such acts, whereas they were not before.
Ah, I think I've misunderstood you - I thought you were talking about the initiating act (i.e. that it was as appropriate to initiate shooting someone as to insult them), whereas you're talking about the response to the act: that bullets are an appropriate response to bullets, therefore if interchangeable, they're an appropriate response to speech too. However, I don't think you can take the first part of that as given - many (including me) would disagree that bu...
If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution.
That's clearly not the case. If they're interchangeable, it merely means they'd be equally appropriate, but that doesn't say anything about their absolute level of appropriateness. If neither is an appropriate response, that's just as interchangeable as both being appropriate - and it's clearly the more restrictive route being advocated here (i.e. moving such speech into the bullet category, rather than moving the bullet category into the region of ...
Is that justified though? Suppose a subset of the British go about demanding restrictions on the production of salmon images. Would that justify you going out of your way to promote the production of such images, making them more likely to be seen by the subset not making such demands?
But the argument here is going the other way - less permissive, not more. The equivalent analogy would be:
To hold that speech is interchangeable with violence is to hold that certain forms of speech are no more an appropriate answer than a bullet.
The issue at stake is why. Why is speech OK, but a punch not? Presumably because one causes physical pain and the other does not. So, in Yvain's salmon situation, where such speech now does cause pain, should we treat it the same as violence or differently? Why or why not? What then about other forms of mental to...
Newcomb's scenario has the added wrinkle that event B also causes event A
I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but I might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decis...
I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.
It's not actually putting it forth as a conclusion though - it's just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
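(As a toy sketch in Python, with entirely made-up numbers: a Bayesian just scores every suspect against the evidence, and the posterior comes out the same regardless of which hypothesis happened to be "planted" first.)

# Toy sketch: the posterior over suspects doesn't depend on the order
# in which we happen to consider them. All numbers are invented.
priors = {name: 1.0 / 5 for name in ["A", "B", "C", "D", "E"]}

# Likelihood of the observed evidence given each suspect is the killer
# (assumed values, purely for illustration).
likelihood = {"A": 0.9, "B": 0.1, "C": 0.1, "D": 0.1, "E": 0.1}

def posterior(priors, likelihood):
    unnorm = {s: priors[s] * likelihood[s] for s in priors}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

print(posterior(priors, likelihood))   # same answer whatever order the ideas arrived in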
Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)
The doomsday argument makes the assumptions that:
(Now those assumptions are a bit dubious - things change if, for instance, we develop life-extension tech or otherwise increase the rate of growth, and a higher-than-2/3 proportion will live in future generations (eg if the next generation is...
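(To illustrate that sensitivity, here's a rough Python sketch; the growth rates are arbitrary numbers of my own choosing, not anything from the original argument.)

# If each generation is `growth` times the size of the previous one,
# what fraction of everyone born so far belongs to the latest generation?
def fraction_in_latest_generation(growth, n_generations=30):
    sizes = [growth ** g for g in range(n_generations)]
    return sizes[-1] / sum(sizes)

for growth in (1.5, 2.0, 3.0):
    print(growth, round(fraction_in_latest_generation(growth), 3))
# Roughly 1/3 for 1.5x growth, 1/2 for doubling, 2/3 for tripling:
# the faster the growth, the larger the share of observers who are "late".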
The various Newcomb situations have fairly direct analogues in everyday things like ultimatum situations or promise keeping. They alter the setup to reduce the number of variables, so the "certainty of trusting the other party" dial gets turned up to 100% for Omega, "expectation of repeat" down to 0, etc., in order to evaluate how to think about such problems when we cut out certain factors.
That said, I'm not actually sure what this question has to do with Newcomb's paradox / counterfactual mugging, or what exactly is interesting about it. If it's just ...
I think the problem is that people tend to conflate intention with effect, often with dire results (eg. "Banning drugs == reducing harm from drug use"). Thus when they see a mechanism in place that seems intended to penalise guessing, they assume that it's the same as actually penalising guessing, and that anything that shows otherwise must be a mistake.
This may explain the "moral" objection of the one student: the test attempts to penalise guessing, so working against this intention is "cheating" by exploiting a flaw in the t...
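(For instance - a quick sketch assuming the common "lose 1/(k-1) marks per wrong answer" rule, which is my assumption about the test rather than something stated in the post - the penalty only pushes blind guessing back to zero expected marks; any partial knowledge makes guessing strictly better than leaving a blank.)

# Expected score per question under a typical formula-scoring rule:
# +1 for a correct answer, -1/(k-1) for a wrong one, 0 for a blank.
def expected_score(k_options, n_eliminated=0):
    k = k_options - n_eliminated            # options left after elimination
    p_correct = 1.0 / k
    penalty = 1.0 / (k_options - 1)
    return p_correct * 1 + (1 - p_correct) * (-penalty)

print(expected_score(4))                    # blind guess on 4 options: 0.0
print(expected_score(4, n_eliminated=1))    # one option ruled out: ~0.11
print(expected_score(4, n_eliminated=2))    # two ruled out: ~0.33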
I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.
Reality is messy, and while we have to deal with it eventually, it's useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily s...
Ah sorry, I'd thought this was in relation to the source-available situation. I think this may still be wrong, however. Consider the pair of programs below:
A:
    return Strategy.Defect;

B:
    // Cooperate on a coin flip; otherwise keep simulating the other player
    // against us, and cooperate only if that simulation ever cooperates.
    if (random(0, 1.0) < 0.5) { return Strategy.Cooperate; }
    while (true)
    {
        if (simulate(other, self) == Strategy.Cooperate) { return Strategy.Cooperate; }
    }
simulate(A,A) terminates immediately. simulate(B,B) eventually terminates. simulate(B,A) will not terminate 50% of the time.
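(For anyone who wants to run it, here's roughly the same pair in Python; the step budget is my own addition, standing in for "does not terminate".)

import random

# "C" = cooperate, "D" = defect. Running out of the step budget is our
# stand-in for a simulation that never terminates.
class OutOfSteps(Exception):
    pass

def simulate(prog, self_src, other_src, steps):
    steps[0] -= 1
    if steps[0] <= 0:
        raise OutOfSteps()
    return prog(self_src, other_src, steps)

def A(self_src, other_src, steps):
    return "D"                    # A: always defect

def B(self_src, other_src, steps):
    if random.random() < 0.5:
        return "C"                # cooperate on a coin flip
    while True:                   # otherwise keep simulating the other against us
        if simulate(other_src, other_src, self_src, steps) == "C":
            return "C"

def run(prog, opponent, budget=10000):
    steps = [budget]
    try:
        return simulate(prog, prog, opponent, steps)
    except OutOfSteps:
        return "no answer within budget"

print(run(A, A))    # terminates immediately: D
print(run(B, B))    # terminates with C
print(run(B, A))    # about half of runs exhaust the budget: B loops on A forever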
I don't think this holds. It's clearly possible to construct code like:
A:
    if (other_src == my_sourcecode) { return Strategy.COOPERATE; }
    if (simulate(other_src, my_sourcecode) == Strategy.COOPERATE)
    {
        return Strategy.COOPERATE;
    }
    else
    {
        return Strategy.DEFECT;
    }
B is similar, with slightly different logic in the second part (even a comment difference would suffice).
simulate(A,A) and simulate(B,B) clearly terminate, but simulate(A,B) still calls simulate(B,A) which calls simulate(A,B) ...
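(A minimal runnable sketch of that in Python, if it helps - comparing function objects is just my stand-in for the source-code comparison, and B differs from A only by a comment, as above.)

def simulate(prog, opponent):
    return prog(opponent)

def A(other):
    if other is A:                      # "other_src == my_sourcecode"
        return "C"
    if simulate(other, A) == "C":       # cooperate iff they'd cooperate against me
        return "C"
    return "D"

def B(other):
    # Same logic as A; this comment is the only difference, which is
    # enough to make the source-equality check fail between A and B.
    if other is B:
        return "C"
    if simulate(other, B) == "C":
        return "C"
    return "D"

print(simulate(A, A))    # C
print(simulate(B, B))    # C
try:
    simulate(A, B)       # A simulates B against A, which simulates A against B, ...
except RecursionError:
    print("simulate(A, B) never terminates (blows the recursion limit)")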
Type 3 is just impossible.
No - it just means it can't be perfect. A scanner that works 99.9999999% of the time is effectively indistinguishable from a 100% one for the purposes of the problem. One that is 100% accurate except in the presence of recursion is completely identical if we can't construct such a scanner.
My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?
I would one-box, but I'd do so regardless of the method being used, unless I was confident I could bl...
Aren't these rather ducking the point? The situations all seem to be assuming that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:
3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).
5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know, he could be reading the answers we gave on this blog post and just be really good at guessing who will stic...
It doesn't seem at all sensible to me that the principle of "acting as one would formerly have liked to have precommitted to acting" should have unbounded utility.
Mostly agreed, though I'd quibble that it does have unbounded utility, but that I probably don't have unbounded capability to enact the strategy. If I were capable of (cheaply) compelling my future self to murder in situations where it would be a general advantage to precommit, I would.
From my perspective now, I expect the reality to be the winning case 50% of the time, because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. In the possible futures where such an event could happen, 50% of the time my strategy would have paid off, and to a greater degree than it would lose the other 50% of the time. If Omega did not toss a fair coin, then the situation would be different, and my choice would be too.
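(Concretely, using the $100 / $10000 figures from the problem as quoted elsewhere in this thread:)

# Expected value, evaluated before the coin flip, of being an agent who
# pays the $100 on a loss versus one who refuses.
p_win = 0.5
payer   = p_win * 10000 + (1 - p_win) * (-100)   # 4950.0
refuser = p_win * 0     + (1 - p_win) * 0        # 0.0: Omega predicts the refusal
print(payer, refuser)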
...There is no value in being the kind of person who globally optimizes because of the expectation to win
Rationality can be life and death, but that applies to collective and institutional decisions just as much as to our individual ones. Arguably more so: the decisions made by governments, cultures and large institutions have far larger effects than any decision I'll ever make. Investment in improving my individual rationality is more valuable purely due to self-interest - we may invest more in providing a 1% improvement to our own lives than we do in reducing collective decision-making mistakes that cost thousands of lives a year. But survival isn't the only goal we have! Even if it were, there are good reasons to put more emphasis on collective rational decision making - the decisions of others can also affect us.
I think there are other examples with just as much agreement on their wrongness, many of which have a much lower degree of investment even for their believers. Astrology for instance has many believers, but they tend to be fairly weak beliefs, and don't produce such a defensive reaction when criticized. Lots of other superstitions also exist, so sadly I don't think we'll run out of examples any time soon.
It all depends on how the hack is administered. If future-me does think rationally, he will indeed come to the conclusion that he should not pay. Any brain-hack that will actually be successful must then be tied to a superseding rational decision or to something other than rationality. If not tied to rationality, it needs to be a hardcoded response, immediately implemented, rather than one that is thought about.
There are obvious ways to set up a superseding condition: put $101 in escrow, hire an assassin to kill you if you renege, but obviously the cos...
Yes, exactly. I think this post by MBlume gives the best description of the most general such hack needed:
If there is an action to which my past self would have precommitted, given perfect knowledge, and my current preferences, I will take that action.
By adopting and sticking to such a strategy, I will on average come out ahead in a wide variety of Newcomblike situations. Obviously the actual benefit of such a hack is marginal, given the unlikeliness of an Omega-like being appearing, and me believing it. Since I've already invested the effort through...
If you think that through and decide that way, then your precommitting method didn't work. The idea is that you must somehow now prevent your future self from behaving rationally in that situation - if they do, they will perform exactly the thought process you describe. The method of doing so, whether making a public promise (and valuing your spoken word more than $100), hiring a hitman to kill you if you renege or just having the capability of reliably convincing yourself to do so (effectively valuing keeping faith with your self-promise more than $100)...
Yes - it is effectively the organisational level of such a brain hack (though it would be advantageous if the officers were performing such a hack on their own brains, rather than being irrational in general - rationality in other situations is a valuable property in those with their fingers on the button.)
In the MAD case, it is deliberately arranged that retaliation is immediate and automatic
Isn't that exactly the same as the desired effect of your brain-hack in the mugging situation? Instead of removing the ability to not retaliate, we want to remov...
But that fooling can only go so far. The better your opponent is at testing your irrational mask, the higher the risk of them spotting a bluff, and thus the closer the gap between acting irrational and being irrational. Only by being irrational can you be sure they won't spot the lie.
Beyond a certain payoff ratio, the risk from being caught out lying is bigger than the chance of having to carry through. For that reason, you end up appointing officers who really will carry through - even to the point of blind testing them with simulated te...
That would seem to be a very easy thing for them to test. Unless we keep committing atrocities every now and again to fool them, they're going to work out that it's false. Even if they do believe us (or it's true), that would itself be a good argument why our leaders would want to start the war - leading to the conclusion that they should do so to get the first strike advantage, maximising their chances.
It would seem better to convince them in some way that doesn't require us to pay such a cost if possible: and to convince the enemy that we're generally rational, reasonable people except in such circumstances where they attack us.
I don't think that's true. I mentioned one real-world case that is very close to the hypothesised game in the other post: the Mutually Assured Destruction policy, or ultimatums in general.
First note that Omega's perfection as a predictor is not necessary. With an appropriate payoff matrix, even a 50.1% accurate Omega doesn't change the optimal strategy. (One proviso on this is that the method of prediction must be such that it is non-spoofable. For example, I could perhaps play Omega with a 90% success rate, but knowing that I don't have access to brai...
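(To be concrete - a quick sketch assuming the standard $1,000 / $1,000,000 Newcomb payoffs, which I'm supplying rather than taking from the comment above:)

# Expected value of each choice against a predictor with accuracy p.
def ev_one_box(p):
    return p * 1000000                      # opaque box filled iff prediction was right

def ev_two_box(p):
    return p * 1000 + (1 - p) * 1001000     # opaque box filled iff prediction was wrong

for p in (0.5, 0.501, 0.9):
    print(p, ev_one_box(p) > ev_two_box(p))
# Break-even accuracy is 0.5005 with these payoffs, so a 50.1% accurate
# Omega already makes one-boxing the better bet.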
Then take my bet situation. I announce your attendance, and cut you in with a $25 stake on your attendance. I don't think it would be unusual to find someone who would indeed appear 99.99% of the time - does that mean that person has no free will?
People are highly, though not perfectly, predictable under a large number of situations. Revealing knowledge about the prediction complicates things by adding feedback to the system, but there are lots of cases where it still doesn't change matters much (or even increases predictability). There are obviously some s...
To make that claim, you do need to first establish that he would accept a bet of 15 lives vs some reward in the first place, which I think is what he is claiming he would not do. There's a difference between making a bet and reneging, and not accepting the bet. If you would not commit murder to save a million lives in the first place, then the refusal is for a different reason than just the fact that the stakes are raised.
At that point, it's no longer a precommittal - it's how you face the consequences of your decision whether to precommit or not.
Note that the hypothetical loss case presented in the post is not in fact the decision point - that point is when you first consider the matter, which is exactly what you are doing right now. If you would really change your answer after considering the matter, then having now done so, have you changed it?
If you want to obtain the advantage of someone who makes such a precommittal (and sticks to it), you must be someone who would d...
The problem only asks about what you would do in the failure case, and I think this obscures the fact that the relevant decision point is right now. If you would refuse to pay, that means that you are the type of person who would not have won had the coin flip turned out differently, either because you haven't considered the matter (and luckily turn out to be in the situation where your choice worked out better), or because you would renege on such a commitment when it occurred in reality.
However at this point, the coin flip hasn't been made. The globally...
Sure - all bets are off if you aren't absolutely sure Omega is trustworthy.
I think this is a large part of the reason why the intuitive answer we jump to is rejection. Being told we believe a being making such extraordinary claims is different from actually believing them (especially when the claims may have unpleasant implications for our beliefs about ourselves), so we have a tendency to consider the problem with the implicit doubt we have for everyday interactions lurking in our minds.
That level of precommitting is only necessary if you are unable to trust yourself to carry through with a self-imposed precommitment. If you are capable of this, you can decide now to act irrationally in certain future decisions in order to benefit to a greater degree than someone who can't. If the temptation to go back on your self-promise is too great in the failure case, then you would have lost in the win case - you are simply a fortunate loser who found out the flaw in his promise in the case where being flawed was beneficial. It doesn't change the...
Yes, then, following the utility function you specified, I would gladly risk $100 for an even chance at $10000. Since Omega's omniscient, I'd be honest about it, too, and cough up the money if I lost.
If it's rational to do this when Omega asks you in advance, isn't it also rational to make such a commitment right now? Whether you make the commitment in response to Omega's notification, or on a whim when considering the thought experiment in response to a blog post, makes no difference to the payoff. If you now commit to an "if this exact situation c...
Chances are I can predict such a response too, and so won't tell you of my prediction (or will tell you in such a way that you will be more likely to attend: eg. "I've a $50 bet you'll attend tomorrow. Be there and I'll split it 50:50"). It doesn't change the fact that in this particular instance I can foretell the future with a high degree of accuracy. Why then would it violate free will if Omega could predict your actions in this different situation (one where he's also able to predict the effects of him telling you) to a similar precision?
I would one-box on Newcomb's problem, and I believe I would give the $100 here as well (assuming I believed Omega).
With Newcomb's problem, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box. However, I have no way of knowing what would fool Omega: indeed, if it is a sufficiently good predictor, there may be no such way. Clearly then the way to be "as close as possible" to a one-boxer is to be a one-boxer. A person seeking to optimise their returns will be a person who wants their resp...
Not really - all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need for a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then it's clearly a lost cause anyway (I can predict many behaviours of people to similar precision based on no more evidence than their past behaviour and speech, never mind godlike brain introspection). Do you have no "choice" in deciding to come to work tomorrow if I predict, based on your record, that you're 99.99% reliable? Where is the cut-off beyond which free will gets lost?
I would expect the result to be a more accurate estimation of the chance of success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.
I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined by some wishful thinking or susceptibility to hype to inflate our likelihood estimate above the conclusion we'd reach if we invested the time to consider the issue in more depth, ...