You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful but once you have learned your strategy has failed, what's to prevent you from just disregarding the strategy?
Disregarding it once will convince yourself and others that you will disregard it in the future, and remove your ability to make other precommitments.
The nuclear war example is more complicated, because presumably having a nuclear war will be the last thing you ever do. I would credit it to evolved instincts. Evolution "knows" that precommitments are important, so it gives us the desire to follow them even when it is not immediately rational to do so - for example, a lust for revenge that ought to be sufficient to make us retaliate in nuclear war, or a concept of "honor" that does the same.
I'm not saying commitments aren't useful, I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?
Our brain has several mechanisms by which we can make commitments: honor, pride, duty, guilt. You can press any of those emotional mechanisms into the service of enforcing commitments.
Because CDT isn't rational. You don't always have to act only for the sake of things that you can cause. If you're a transparent agent, then you sometimes have to become the kind of agent that will carry out a precommitment. If the deterrent fails, the rational thing to do is still to carry out your threat.
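To sketch the transparency point, here is a toy deterrence game with entirely made-up payoffs (the numbers and names are illustrative assumptions, not part of any standard model). The attacker can read the defender's policy; retaliating scores terribly ex post, yet the policy of retaliating is what keeps the attack from happening at all.

```python
# Toy model: a transparent defender benefits from precommitting to retaliate,
# even though actually retaliating is the worst ex-post outcome.
# All payoffs are hypothetical illustrations.

def attacker_move(defender_retaliates):
    # Transparency: the attacker observes the defender's policy and
    # strikes only when retaliation will not follow.
    return "hold" if defender_retaliates else "strike"

# Defender's payoffs: peace is best, absorbing a strike is bad,
# mutual destruction is worst of all.
PAYOFF = {
    ("hold", False): 0,      # peace
    ("hold", True): 0,       # peace; the commitment is never triggered
    ("strike", False): -10,  # absorb the strike without retaliating
    ("strike", True): -100,  # mutual destruction
}

def outcome(defender_retaliates):
    move = attacker_move(defender_retaliates)
    return PAYOFF[(move, defender_retaliates)]

print(outcome(defender_retaliates=False))  # -10: no commitment invites the strike
print(outcome(defender_retaliates=True))   #   0: the commitment is never tested
```

Ex post, retaliating (-100) is worse than backing down (-10), which is exactly the questioner's worry; but as a policy visible to the attacker, committing to retaliate scores 0 against -10.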
EDIT: No one else in the thread appears to understand that you don't need an additional reason (like a third-party agreement) in order to carry out your threat.
I'm not saying commitments aren't useful, I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?
People don't normally do things because that would be the rational thing to do. They do things because they believe themselves to be the kind of person who does such things. Usually you would have to train to overcome that bias, but in this case you can make it work in your favor. So here is the three-step program for learning to precommit:
That should make precommitments second nature to you.
Put the keys to the nuclear weapons in the hands of people who have been conditioned to retaliate as part of their job.
In terms of general ways of precommitting, there are a few options:
The most obvious solution is to coerce your future self by creating a future downside of not following through that is worse than the future downside of following through. Nuclear deterrence is a tough one, but in principle this is no different from coercing someone else. (I guess one could ask if it's any more ethical, at that...)
By taking the idea of precommitments absolutely seriously. However, I'm not sure it is actually possible in practice, and I doubt that the standard techniques for decompartmentalization are sufficient.
See a lawyer and notary and sign a contract. Be skeptical of precommitments when this isn't a realistic option.
Another way to think about this: modify your utility function to care about your precommitments.
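The utility-modification idea above can be sketched as follows. This is a minimal toy, assuming hypothetical payoff numbers and a made-up penalty term: fold the guilt/honor/reputation cost of breaking a commitment into the agent's utility function, sized so that following through dominates even after deterrence has failed.

```python
# Sketch: a commitment changes which action maximizes utility ex post,
# by attaching a penalty to breaking the commitment.
# All numbers are hypothetical illustrations.

BASE_UTILITY = {"retaliate": -100, "back_down": -10}  # ex-post consequences
COMMITMENT_PENALTY = -200  # guilt/honor/reputation cost of breaking the commitment

def utility(action, committed):
    u = BASE_UTILITY[action]
    if committed and action == "back_down":
        u += COMMITMENT_PENALTY  # backing down now also breaks the commitment
    return u

def best_action(committed):
    # Pick whichever action maximizes the (possibly modified) utility.
    return max(BASE_UTILITY, key=lambda a: utility(a, committed))

print(best_action(committed=False))  # back_down: -10 beats -100
print(best_action(committed=True))   # retaliate: -100 beats -10 + (-200)
```

The design point is that the penalty must be larger than the gap between backing down and following through; otherwise the future self's arithmetic still favors abandoning the commitment.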
To use your example:
For example, in nuclear war why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.
Of course, not retaliating will ensure that the future of humanity is dominated by the evil values (if I didn't consider their values evil, why did I get into a nuclear standoff with them?) of someone who is, furthermore, willing to start a nuclear war.
I personally find that much more terrifying than the deaths of a few of their civilians in this generation.
You seem to misunderstand the purpose of the "least convenient possible world". The idea is that if your interlocutor gives a weak argument and you can think of a way to strengthen it, you should attempt to answer the strengthened version. You should not be invoking "least convenient possible world" to self-sabotage attempts to solve problems in the real world.
No, this is a correct use of LCPW. The question asked how keeping to precommitments is rationally possible, when the effects of carrying out your threat are bad for you. You took one example and explained why, in that case, retaliating wasn't in fact negative utility. But unless you think that this will always be the case (it isn't) the request for you to move to the LCPW is valid.
Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean the end of humanity, so a preference for one set of values over another isn't applicable. This is somewhat explicit in a mutually assured destruction deterrence strategy, but nonetheless, once the other side pushes the button, you have a choice to put an end to humanity or not. It's hard to come up with a utility function that prefers that, even considering a preference for meeting precommitments. It's like the Zeroth Law of Robotics: no utility evaluation can exceed the existence of humanity.
It would seem to make it impossible to commit to blackmail when carrying out the blackmail threat has negative utility. How can you possibly convince your rational future self to carry out a commitment they know will not work?
You put the answer in the title. We are humans, not rational agents. We have built in mechanisms to handle this. Pride, embrace it. This actually becomes easier with experience. I've found that in times when I've tried to be a good little CDT agent and suppress my human instincts it has gone badly for me. My personal psychology doesn't react well to the suppression and I've actually been surprised how often failing to follow through with a threat (or what should be an implied threat) had more negative consequences than I anticipated. On this my instincts and my ethics are aligned.
Ideally, your decision to follow that precommitment should be so strong that you don't really have a choice; retaliating is something you don't even think about but execute by default. With precommitments, you want to restrict your own decision possibilities.
If I hadn't dissolved the question already, I'd probably have come up with something like "by making precommitments, you want to undermine your free will so that once the event (a nuclear strike, etc.) has happened, you no longer have a free choice, because your free will is nonexistent in that situation".
How can you precommit to something where the commitment is carried out only after you know your commitment strategy has failed?
It would seem to make it impossible to commit to blackmail when carrying out the blackmail threat has negative utility. How can you possibly convince your rational future self to carry out a commitment they know will not work?
You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful but once you have learned your strategy has failed, what's to prevent you from just disregarding the strategy?
If a commitment strategy will fail, you don't want to make the commitment; but if you will not follow the commitment even when the strategy fails, then you never made the commitment in the first place.
For example, in nuclear war why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.
I'm not saying commitments aren't useful, I'm just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?
I apologize if reading this makes it harder for any of you to make precommitments. I'm hoping someone has a better solution than simply tricking your future self.