The Humans Are Special trope here gives a lot of examples of this. Reputedly, it was a premise that John W. Campbell, editor of Astounding Science Fiction, was very fond of, which accounts for its prevalence.
and as such makes bullets an appropriate response to such acts, whereas they were not before.
Ah, I think I've misunderstood you - I thought you were talking about the initiating act (i.e. that initiating shooting someone would be as appropriate as insulting them), whereas you're talking about the response to the act: that bullets are an appropriate response to bullets, therefore if interchangeable, they're an appropriate response to speech too. However, I don't think you can take the first part of that as given - many (including me) would disagree that bu...
If they are interchangeable it follows that answering an argument with a bullet may be the efficient solution.
That's clearly not the case. If they're interchangeable, it merely means they'd be equally appropriate, but that doesn't say anything about their absolute level of appropriateness. If neither is an appropriate response, that's just as interchangeable as both being appropriate - and it's clearly the more restrictive route being advocated here (i.e. moving such speech into the bullet category, rather than moving the bullet category into the region of ...
Is that justified, though? Suppose a subset of the British go about demanding restrictions on the production of salmon images. Would that justify you going out of your way to promote the production of such images, making them more likely to be seen by the subset not making such demands?
But the argument here is going the other way - less permissive, not more. The equivalent analogy would be:
To hold that speech is interchangeable with violence is to hold that certain forms of speech are no more an appropriate answer than a bullet.
The issue at stake is why. Why is speech OK, but a punch not? Presumably because one causes physical pain and the other does not. So, in Yvain's salmon situation, when such speech does now cause pain, should we treat it the same as violence, or differently? Why or why not? What then about other forms of mental to...
Newcomb's scenario has the added wrinkle that event B also causes event A
I don't see how. Omega doesn't make the prediction because you made the action - he makes it because he can predict that a person of a particular mental configuration at time T will make decision A at time T+1. If I were to play the part of Omega, I couldn't achieve perfect prediction, but I might be able to achieve, say, 90% by studying what people say they will do on blogs about Newcomb's paradox, and observing what such people actually do (so long as my decis...
I don't see why Newcomb's paradox breaks causality - it seems more accurate to say that both events are caused by an earlier cause: your predisposition to choose a particular way. Both Omega's prediction and your action are caused by this predisposition, meaning Omega's prediction is merely correlated with, not a cause of, your choice.
It's not actually putting it forth as a conclusion though - it's just a flaw in our wetware that makes us interpret it as such. We could imagine a perfectly rational being who could accurately work out the probability of a particular person having done it, then randomly sample the population (or even work through each one in turn) looking for the killer. Our problem as humans is that once the idea is planted, we overreact to confirming evidence.
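As a toy version of that sampling process (the population size and likelihood ratio below are invented purely for illustration): even evidence a thousand times more likely if a given person is guilty shouldn't, on its own, convict anyone out of a large population.

# Uniform prior over N people; evidence with likelihood ratio L in favour of
# guilt for the person it points at (both numbers assumed for illustration).
N, L = 1_000_000, 1000
prior = 1 / N
posterior = (L * prior) / (L * prior + (1 - prior))
print(posterior)   # ~0.001: still almost certainly not the killer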
Thinking this through a bit more, you're right - this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)
The doomsday argument makes the assumptions that:
1) you can treat yourself as a random sample from all the humans who will ever live; and
2) growth continues at roughly its present rate, so that no more than a 2/3 proportion of all humans will live in future generations.
(Now those assumptions are a bit dubious - things change if, for instance, we develop life-extension tech or otherwise increase the rate of growth, and a higher than 2/3 proportion will live in future generations (e.g. if the next generation is...
The various Newcomb situations have fairly direct analogues in everyday things like ultimatum situations or promise keeping. They alter these to reduce the number of variables: the "certainty of trusting the other party" dial gets turned up to 100% for Omega, the "expectation of repeat" dial down to 0, and so on, in order to evaluate how to think about such problems when we cut out certain factors.
That said, I'm not actually sure what this question has to do with Newcomb's paradox / counterfactual mugging, or what exactly is interesting about it. If it's just ...
I think the problem is that people tend to conflate intention with effect, often with dire results (e.g. "banning drugs == reducing harm from drug use"). Thus when they see a mechanism in place that seems intended to penalise guessing, they assume that it's the same as actually penalising guessing, and that anything that shows otherwise must be a mistake.
This may explain the "moral" objection of the one student: the test attempts to penalise guessing, so working against this intention is "cheating" by exploiting a flaw in the t...
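For what it's worth, standard negative marking (assumed here, since the test's actual scheme isn't given: +1 for a right answer, -1/(k-1) for a wrong one, 0 for a blank) doesn't penalise guessing in expectation - it only neutralises blind guessing, and guessing with partial knowledge still pays:

k = 5                                                 # options per question (assumed)
blind = (1 / k) * 1 + (1 - 1 / k) * (-1 / (k - 1))    # = 0.0: break-even on average
narrowed = 0.5 * 1 + 0.5 * (-1 / (k - 1))             # two options left: +0.375
print(blind, narrowed)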
I don't see the purpose of such thought experiments as being to model reality (we've already got a perfectly good actual reality for that), but to simplify it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key aspect: it reduces the number of variables.
Reality is messy, and while we have to deal with it eventually, it's useful to consider simpler, more comprehensible models, and then gradually introduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily s...
Ah sorry, I'd thought this was in relation to the source-available situation. I think this may still be wrong, however. Consider the pair of programs below:
A:
    return Strategy.Defect;    // A: unconditional defection

B:
    // cooperate outright half the time
    if (random(0, 1.0) < 0.5) { return Strategy.Cooperate; }
    // otherwise, cooperate only once a simulation of the other player
    // (playing against us) cooperates; loop forever if it never does
    while (true)
    {
        if (simulate(other, self) == Strategy.Cooperate) { return Strategy.Cooperate; }
    }
simulate(A,A) terminates immediately. simulate(B,B) eventually terminates (with probability 1). simulate(B,A) will not terminate 50% of the time.
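For concreteness, here's a rough runnable sketch of that pair (Python; the explicit simulate parameter and the step budget standing in for non-termination are my own devices, not part of the original setup):

import random

COOPERATE, DEFECT = "Cooperate", "Defect"

class NonTermination(Exception):
    pass

def make_simulate(budget):
    # A simulate() that gives up after `budget` calls, standing in for
    # "does not terminate".
    calls = 0
    def simulate(prog, opponent):
        nonlocal calls
        calls += 1
        if calls > budget:
            raise NonTermination
        return prog(opponent, prog, simulate)
    return simulate

def A(other, self_prog, simulate):
    return DEFECT                        # A: unconditional defection

def B(other, self_prog, simulate):
    if random.random() < 0.5:            # cooperate outright half the time
        return COOPERATE
    while True:                          # otherwise wait for the other to cooperate
        if simulate(other, self_prog) == COOPERATE:
            return COOPERATE

# simulate(A,A) returns immediately; simulate(B,B) halts with probability 1;
# simulate(B,A) exhausts the budget ~50% of the time (the non-terminating case).
for pair in [(A, A), (B, B), (B, A)]:
    try:
        print(make_simulate(budget=1000)(*pair))
    except NonTermination:
        print("did not terminate within budget")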
I don't think this holds. It's clearly possible to construct code like:
A:
    // cooperate with an exact copy of ourselves (avoids self-simulation)
    if (other_src == my_sourcecode) { return Strategy.COOPERATE; }
    // otherwise cooperate iff a simulation of them (against us) cooperates
    if (simulate(other_src, my_sourcecode) == Strategy.COOPERATE)
    {
        return Strategy.COOPERATE;
    }
    else
    {
        return Strategy.DEFECT;
    }
B is similar, with slightly different logic in the second part (even a comment difference would suffice).
simulate(A,A) and simulate(B,B) clearly terminate, but simulate(A,B) still calls simulate(B,A), which calls simulate(A,B) ...
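Here's the same pair as a rough Python sketch (object identity stands in for the source-code comparison; the names are mine). Running it shows exactly the mutual recursion described:

COOPERATE, DEFECT = "Cooperate", "Defect"

def simulate(prog, opponent):
    return prog(opponent, prog)

def A(other, self_prog):
    if other is self_prog:               # stands in for other_src == my_sourcecode
        return COOPERATE
    return simulate(other, self_prog)    # cooperate iff they cooperate against us

def B(other, self_prog):                 # identical logic; only the name differs
    if other is self_prog:
        return COOPERATE
    return simulate(other, self_prog)

print(simulate(A, A))                    # Cooperate: terminates
print(simulate(B, B))                    # Cooperate: terminates
try:
    simulate(A, B)                       # A simulates B, which simulates A, which ...
except RecursionError:
    print("simulate(A,B) recurses without terminating")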
Type 3 is just impossible.
No - it just means it can't be perfect. A scanner that works 99.9999999% of the time is effectively indistinguishable from a 100% one for the purposes of the problem. One that is 100% except in the presence of recursion is completely identical if we can't construct such a scanner.
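A quick expected-value check (assuming the standard Newcomb payoffs of $1,000,000 and $1,000, which aren't restated here) makes the point: one-boxing comes out ahead for any scanner accuracy above about 50.05%, so the ninth decimal place changes nothing.

# Standard payoffs assumed: $1M in the opaque box, $1k in the transparent one.
for p in (1.0, 0.999999999, 0.9):        # scanner accuracy
    one_box = p * 1_000_000
    two_box = 1_000 + (1 - p) * 1_000_000
    print(f"p={p}: one-box ${one_box:,.2f} vs two-box ${two_box:,.2f}")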
My prior is justified because a workable Omega of type 3 or 4 is harder for me to imagine than 1 or 2. Disagree? What would you do as a good Bayesian?
I would one-box, but I'd do so regardless of the method being used, unless I was confident I could bl...
Aren't these rather ducking the point? The situations all seem to assume that we ourselves have Omega-level information and resources, in which case why do we care about the money anyway? I'd say the relevant cases are:
3b) Omega uses a scanner, but we don't know how the scanner works (or we'd be Omega-level entities ourselves).
5) Omega is using one of the above methods, or one we haven't thought of, but we don't know which. For all we know he could be reading the answers we gave on this blog post, and is just really good at guessing who will stic...
It doesn't seem at all sensible to me that the principle of "acting as one would formerly have liked to have precommitted to acting" should have unbounded utility.
Mostly agreed, though I'd quibble that it does have unbounded utility; it's just that I probably don't have unbounded capability to enact the strategy. If I were capable of (cheaply) compelling my future self to murder in situations where it would be a general advantage to precommit, I would.
From my perspective now, I expect the winning case 50% of the time because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. In the possible futures where such an event could happen, 50% of the time my strategy would pay off to a greater degree than it would lose the other 50% of the time. If Omega did not toss a fair coin, then the situation is different, and my choice would be too.
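With the usual counterfactual-mugging stakes (assumed here, since they aren't restated: pay $100 on a losing toss, receive $10,000 on a winning one), the arithmetic behind this is simply:

# Fair coin: half the futures pay out, half cost the $100.
ev = 0.5 * 10_000 - 0.5 * 100
print(ev)   # 4950.0: positive, so the paying strategy wins on average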
...There is no value in being the kind of person who globally optimizes because of the expectation to win
I would expect the result to be a more accurate estimation of success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information.
I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined to some wishful thinking, or susceptibility to hype, inflating our likelihood above the conclusion we'd reach if we invested the time to consider the issue in more depth, ...