I'm pretty sure that decision theories are not designed on that basis.
You are wrong. In fact, this is a totally standard thing to consider, and "avoid back-chaining defection in games of fixed length" is a known problem, with various known strategies.
Yes, that is the problem in question!
If you want the payoff, you have to be the kind of person who will pay the counterfactual mugger, even once you can no longer benefit from doing so. Is that a reasonable feature for a decision theory to have? It's not clear that it is; it seems strange to pay out, even though the expected value of becoming that kind of person is clearly positive before you see the coin. That's what the counterfactual mugging is about.
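Concretely, under the usual formulation (a fair coin; $10,000 on heads if Omega predicts you would pay on tails; $100 demanded on tails), someone who pays has an expected value of 0.5 × $10,000 − 0.5 × $100 = $4,950 before the flip, while a refuser's expected value is $0.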
If you're asking "why care" rhetorically, and you believe the answer is "you shouldn't be...
Your decision is a result of your decision theory, and your decision theory is a fact about you, not just something that happens in that moment.
You can say: I'm not making the decision ahead of time, I'm waiting until after I see that Omega has flipped tails. In that case, when Omega predicts your behavior ahead of time, he predicts that you won't decide until after the coin flip, and that you would then refuse to pay given tails. So, although the coin flip hasn't happened yet and could still come up heads, your yet-unmade decision has the same effect as if you had loudly precommitted to it.
You're trying to reason in temporal order, but that doesn't work in the presence of predictors.
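Here's a minimal sketch of that point (payoffs from the usual $10,000/$100 telling; the policy names are mine, not from the problem statement): Omega's payout is a function of what your decision procedure outputs, not of when you get around to running it.

```python
import random

def omega(policy):
    # Omega predicts by inspecting what your policy outputs; *when* you
    # run the deliberation makes no difference to what it returns.
    would_pay = policy()
    if random.random() < 0.5:             # heads
        return 10_000 if would_pay else 0
    return -100 if would_pay else 0       # tails: pay up, or don't

policies = {
    "precommitted payer": lambda: True,
    "refuser": lambda: False,
    "decides only after seeing tails": lambda: False,  # identical to refusing
}

for name, policy in policies.items():
    avg = sum(omega(policy) for _ in range(100_000)) / 100_000
    print(f"{name}: ${avg:,.0f} per game on average")
# The precommitted payer averages around $4,950; the other two around $0.
```

"Waiting to decide" just is one of these policies, and it cashes out as the refuser.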
You're fundamentally failing to address the problem.
For one, your examples just plain omit the "Omega is a predictor" part, which is key to the situation. Since Omega is a predictor, there is no distinction between making the decision ahead of time or not.
For another, unless you can prove that your proposed alternative doesn't have pathologies just as bad as the Counterfactual Mugging, you're at best back to square one.
It's very easy to say "look, just don't do the pathological thing". It's very hard to formalize that into an actual dec...
But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?
That's what the problem is asking!
This is a decision-theoretic problem. Nobody cares about it for immediate practical purposes. "Stick to your decision theory, except when you non-rigorously decide not to" isn't a resolution to the problem, any more than "ignore the calculations since they're wrong" was a resolution to the ultraviolet catastrophe.
Again, the point of this experiment is that we w...
There will never be any more $10K; there is no longer any motivation to give the $100. Following my precommitment, unless it is externally enforced, no longer makes any sense.
This is the point of the thought experiment.
Omega is a predictor. His actions aren't just based on what you decide, but on what he predicts that you will decide.
If your decision theory says "nah, I'm not paying you" when you aren't given advance warning or repeated trials, then that is a fact about your decision theory even before Omega flips his coin. He flips his coin, gets hea...
Decision theory is an attempt to formalize the human decision process. The point isn't that we really are unsure whether you should leave people to die of thirst, but how we can encode that in an actual decision theory. Like so many discussions on Less Wrong, this implicitly comes back to AI design: an AI needs a decision theory, and that decision theory needs to not have major failure modes, or at least the failure modes should be well-understood.
If your AI somehow assigns a nonzero probability to "I will face a massive penalty unless I do this reall...
Precommitments are used in decision-theoretic problems. Some people have proposed that a good decision theory should take the action that it would have precommitted to, if it had known in advance to do such a thing. This is an attempt to examine the consequences of that.
I'm not sure you've described a different mistake than Eliezer has?
Certainly, a student with a sufficiently incomplete understanding of heat conduction is going to have lots of lines of thought that terminate in question marks. The thesis of the post, as I read it, is that we want to be able to recognize when our thoughts terminate in question marks, rather than assuming we're doing something valid because our words sound like things the professor might say.
No part of his objection hinged on reversibility, only the same linearity assumption you rely on to get a result at all.
OK. I think I see what you are getting at.
First, one could simply reject your conclusion:
However, at no point did I do anything that could be described as "simulating you".
The argument here is something like "just because you did the calculations differently doesn't mean your calculations failed to simulate a consciousness". Without a real model of how computation gives rise to consciousness (assuming it does), this is hard to resolve.
Second, one could simply accept it: there are some ways to do a given calculation which are ethical,...
From the point of view of physics, it contains garbage,
But a miracle occurs, and your physics simulation still works accurately for the individual components...?
I get that your assumption of "linear physics" gives you this. But I don't see any reason to believe that physics is "linear" in this very weird sense. In general, when you do calculations with garbage, you get garbage. If I time-evolve a simulation of (my house plus a bomb) for an hour, then remove all the bomb components at the end, I definitely do not get the same result as running a simulation with no bomb.
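A toy demonstration of that point (the dynamics here are invented for the example; nothing about them is a claim about real physics): once components interact under a nonlinear update rule, deleting some of them at the end is not the same as never having simulated them.

```python
def step(state, dt=0.001):
    # Toy nonlinear dynamics: every component interacts through the total,
    # so no part of the state can be cleanly factored out afterwards.
    total = sum(state)
    return [x + dt * x * total for x in state]

def evolve(state, steps=100):
    for _ in range(steps):
        state = step(state)
    return state

house = [1.0, 2.0]
bomb = [5.0]

with_bomb = evolve(house + bomb)
house_after = with_bomb[:len(house)]  # "remove the bomb components at the end"
house_alone = evolve(house)           # simulate with no bomb at all

print(house_after)  # clearly diverges from...
print(house_alone)  # ...the bomb-free run, because the bomb interacted.
```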
And apparently insurance companies can make money because the expected utility of buying insurance is lower than its price.
No, the expected monetary value of insurance is lower than its price. (Assuming that the insurance company's assessment of your risk level is accurate.) You're equivocating between money and utility, which is the source of your confusion.
Suppose I offered a simple wager: we flip a coin, and if it comes up heads, I give you a million dollars. But if it comes up tails, you owe me a million dollars, and I get every cent you earn until...
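To put numbers on the money/utility distinction (the figures below are invented for illustration, and log utility is just the standard toy model of risk aversion):

```python
from math import log

wealth = 100_000
loss, p_loss = 90_000, 0.01   # rare, catastrophic
premium = 1_200               # more than the "fair" price of $900

# Expected *money*: insuring loses you $300 on average...
emv_of_insuring = -premium + p_loss * loss           # -300.0

# ...but expected *utility* under a risk-averse (log) utility function:
eu_uninsured = (1 - p_loss) * log(wealth) + p_loss * log(wealth - loss)
eu_insured = log(wealth - premium)                   # outcome is certain

print(emv_of_insuring)            # negative: the insurer profits
print(eu_insured > eu_uninsured)  # True: the buyer still comes out ahead
```

Both sides win at once, which is only a paradox if you conflate dollars with utility.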
Non-locality, surely? Or "would violate locality"?
Because we can't actually get infinite information, but we still want to calculate things.
And in practice, we can in fact calculate things to some level of precision, using a less-than-infinite amount of information.
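As a concrete instance of getting "some level of precision" from finite information (the series and tolerance here are just an illustration): truncate an infinite sum once the remaining terms can no longer affect the digits you care about.

```python
from math import e, factorial

# Approximate e = sum of 1/n!, stopping once the next term is
# below the precision we actually need.
tolerance = 1e-10
approx, n, term = 0.0, 0, 1.0
while term > tolerance:
    approx += term
    n += 1
    term = 1 / factorial(n)

print(approx, abs(approx - e) < 1e-9)  # finite information, ~10 good digits
```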
You're right, I missed that line.
If I were making music in the style of someone who died six years before I was born, people would probably think I was out of style. I'm not sure if this is the historical fallacy I don't have a name for, where we gloss over differences of a few decades because they're less salient to us than the differences between the 1990s and the 1960s, or if musical styles just change more quickly now.
I spent a long time associating Amazon with "something in South America, so it's probably not accessible to me" before the company was as ultra-famous as it is now.
On the other hand, asteroid mining technologies have some risks of their own, although this only reaches "existential" if somebody starts mining the big ones.
The largest nuclear weapon was the Tsar Bomba: 50 megatonnes of TNT, roughly equivalent to a 3.3-million-tonne impactor. Asteroids larger than this are thought to number in the tens of millions, and at the time of writing only 1.1 million had been provisionally identified. Asteroid shunting at or beyond this scale is by definition a trans-nuclear technology, which means there comes a point where the necessary level of trust is unprecedented.
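For what it's worth, the 3.3-million-tonne figure checks out if you assume the impactor arrives at Earth's escape velocity (~11.2 km/s, the minimum speed for an object falling in from far away); a faster impactor would need proportionally less mass.

```python
TNT_JOULES_PER_TONNE = 4.184e9                    # standard conversion
tsar_bomba_joules = 50e6 * TNT_JOULES_PER_TONNE   # 50 megatonnes of TNT

v = 11_200.0                            # m/s: Earth escape velocity, the
                                        # minimum impact speed from far away
mass_kg = 2 * tsar_bomba_joules / v**2  # invert KE = (1/2) m v^2

print(mass_kg / 1_000)  # ~3.3 million tonnes, matching the figure above
```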
"Willpower is not exhaustible" is not necessarily the same claim as "willpower is infallible". If, for example, you have a flat 75% chance of turning down sweets, then avoiding sweets still makes you more likely to not eat them. You're not spending willpower, it's just inherently unreliable.