I'm putting this in the discussion section because I'm not sure whether something like this has already been thought of, and I don't want to repeat things in a top-level post.
Anyway, consider a Prisoner's-Dilemma-like situation with the following payoff matrix:
You defect, opponent defects: 0 utils
You defect, opponent cooperates: 3 utils
You cooperate, opponent defects: 1 util
You cooperate, opponent cooperates: 2 utils
Assume all players either have full information about their opponents, or are allowed to communicate and will be able to deduce each other's strategies correctly.
Suppose you are a timeless decision theory agent playing this modified Prisoner's Dilemma against an actor that will always pick "defect" no matter what your strategy is. Clearly, your best move is to cooperate, gaining you 1 util instead of none, and giving your opponent his maximum of 3 utils instead of the nothing he would get if you defected. Now suppose you are playing against another timeless decision theory agent. Clearly, the best strategy is to be that actor who defects no matter what. If both agents do this, the worst possible result for both of them occurs.
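To make the structure concrete, here's a minimal sketch in Python (the payoff numbers come from the matrix above; everything else, including the function names, is my own illustration) of why each unconditional strategy invites the opposite response:

```python
# Payoffs from the matrix above, keyed by (your move, opponent's move).
# "D" = defect, "C" = cooperate.
PAYOFF = {
    ("D", "D"): 0,
    ("D", "C"): 3,
    ("C", "D"): 1,
    ("C", "C"): 2,
}

def expected_utils(my_move, p_opponent_defects):
    """Your expected utils if the opponent defects with the given probability."""
    return (p_opponent_defects * PAYOFF[(my_move, "D")]
            + (1 - p_opponent_defects) * PAYOFF[(my_move, "C")])

def best_response(p_opponent_defects):
    """The move that maximizes your expected utils."""
    return max(["D", "C"], key=lambda m: expected_utils(m, p_opponent_defects))

print(best_response(1.0))  # "C": against a sure defector, 1 util beats 0
print(best_response(0.0))  # "D": against a sure cooperator, 3 utils beat 2
```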
This situation can actually happen in the real world. Suppose there are two rival countries, and one demands some tribute or concession from the other, threatening war if the other country does not agree, even though such a war would be very costly for both. The rulers of the threatened country can either pay the less expensive tribute or accept a more expensive war so that the first country will back off, but the rulers of the first country have thought of that and have committed to not backing down anyway. If the tribute is worth 1 util to each side and a war costs 2 utils to each side (taking peace with no tribute as a baseline of 2 utils for each), this is identical to the payoff matrix I described. I'd be pretty surprised if nothing like this has ever happened.
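As a quick sanity check on that mapping, here's a sketch under the assumption that each side starts from a baseline of 2 utils (the baseline is my assumption; the tribute and war costs are the ones stated above):

```python
BASELINE = 2   # assumed starting utils for each side (peace, no tribute)
TRIBUTE = 1    # utils transferred from the payer to the demander
WAR_COST = 2   # utils lost by each side if war breaks out

def payoff(you_defect, opponent_defects):
    """Your utils; "defect" = demand tribute / refuse to pay it."""
    if you_defect and opponent_defects:
        return BASELINE - WAR_COST   # war
    if you_defect:
        return BASELINE + TRIBUTE    # you collect tribute
    if opponent_defects:
        return BASELINE - TRIBUTE    # you pay tribute
    return BASELINE                  # peaceful status quo

# Reproduces the matrix above: DD = 0, DC = 3, CD = 1, CC = 2.
assert (payoff(True, True), payoff(True, False),
        payoff(False, True), payoff(False, False)) == (0, 3, 1, 2)
```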
Fair enough, and thanks for supplying the name.
It does not matter what probability of defecting you precommit to for the case where you expect the other agent to defect, just so long as it is greater than 1/3. This is because if you do precommit to defecting with probability > 1/3 in that situation, the probability of that situation occurring is exactly 0: if you defect with probability q when he defects, his expected payoff for defecting is 3(1 − q), which falls below the 2 utils of mutual cooperation exactly when q > 1/3. Of course, that assumes mutual perfect information about each other's strategies.

If beliefs about each other's strategies are merely very well correlated with reality, it may be better to commit to always defecting anyway. If your strategy is to defect with probability slightly greater than 1/3, and the other agent assigns a high probability to that being your strategy, but also some probability that you will chicken out and cooperate with probability 1, he might decide that defecting is worthwhile. If he does, that indicates that your probability of defecting was too low. Of course, having a higher chance of defecting conditional on him defecting does hurt you if he does defect, so the best strategy will not necessarily be to always defect; it depends on the kind of uncertainty in the information. But the point is, defecting with probability 1/3 + ε is not necessarily always best.
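A sketch of that threshold calculation (the payoffs are from the matrix in the parent comment; the assumption that he gets the mutual-cooperation payoff of 2 utils by not defecting is how I'm reading the setup):

```python
def his_expected_utils_if_he_defects(q):
    """Opponent's expected utils from unconditional defection, given that
    you precommit to defect with probability q whenever he defects."""
    return q * 0 + (1 - q) * 3   # war with probability q, 3 utils otherwise

MUTUAL_COOPERATION = 2  # what he gets by cooperating instead (assumed)

# Defecting stops paying for him exactly when 3 * (1 - q) < 2, i.e. q > 1/3.
for q in (0.30, 1 / 3, 0.34, 1.0):
    deterred = his_expected_utils_if_he_defects(q) < MUTUAL_COOPERATION
    print(f"q = {q:.3f}: opponent deterred -> {deterred}")
```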