I'm pretty much a novice at decision theory, although I'm competent at game theory (and mechanism design), but some of the arguments used to motivate UDT seem flawed. In particular, the "you play the prisoner's dilemma against a copy of yourself" argument against CDT seems to rely less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However, if you're capable of self-modifying, you're also capable of making arbitrarily strong precommitments, which solves the issue without (really) changing decision theories. For example, you can just precommit to "I will cooperate with everyone who shares this precommitment" (for some well-defined "cooperate"*). Then when you're copied, your copy shares the precommitment, and you're good.
Does that sound about right, or am I missing something?
*Regardless of decision theory, you probably wouldn't want to cooperate with someone who plans to use any resources she obtains to harm you as much as possible, for example.
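To make the precommitment idea concrete, here's a toy sketch (my own illustration, not anything from the literature): each agent optionally carries a precommitment token, cooperates with anyone carrying the identical token, and otherwise defects as a plain CDT agent would in a one-shot prisoner's dilemma. The payoff matrix and agent names are illustrative assumptions.

```python
# Toy model: precommitment-based cooperation in a one-shot PD.
# Standard PD payoffs, (my_move, their_move) -> my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# The well-defined precommitment both parties would need to share.
PRECOMMITMENT = "cooperate-with-anyone-sharing-this-precommitment"

class Agent:
    def __init__(self, precommitment=None):
        self.precommitment = precommitment

    def move(self, opponent):
        # Honour the precommitment only if the opponent shares it;
        # otherwise defect, as a CDT agent would in a one-shot PD.
        if (self.precommitment is not None
                and self.precommitment == opponent.precommitment):
            return "C"
        return "D"

def play(a, b):
    ma, mb = a.move(b), b.move(a)
    return PAYOFFS[(ma, mb)], PAYOFFS[(mb, ma)]

me = Agent(PRECOMMITMENT)
my_copy = Agent(PRECOMMITMENT)   # a copy shares the precommitment
stranger = Agent()               # no precommitment

print(play(me, my_copy))   # -> (3, 3): mutual cooperation
print(play(me, stranger))  # -> (1, 1): mutual defection
```

The point of the sketch is that no change of decision theory is needed: the copy cooperates simply because it inherits the same precommitment, while agents who don't share it get the ordinary defection outcome.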
The literature largely defines CDT as incapable of precommitments. If you want to propose a specific model of how to choose commitments, just do it.