Quantum Immortality/Suicide
This doesn't seem to fit. There isn't anything about Quantum Immortality that requires UDT (except inasmuch as any decision requires at least some kind of decision theory). The difficulty (and common confusion) is around translating primitive preference-intuitions into preference-beliefs about wavefunctions or branches. Once the values are given, both CDT and EDT will arrive at the same decision that UDT would make (unless the specific decision also involves one of the other problems UDT is required for).
I think UDT makes it possible to understand what decisions are, and how wavefunctions can depend on one's decisions. Before I came up with the ideas behind UDT, this was really unclear to me, and I had considered some other decision theory approaches in which Quantum Immortality was sort of baked in. For example, I had the idea that the wavefunction couldn't be changed, but that when you make decisions, you're choosing which branch of the wavefunction your consciousness continues into.
Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.
Look at how robot controllers are implemented, look at real control theories, and observe that treating copies as extra servos is a trivial change that works. It also works when the copies are not exact and can distinguish themselves from each other. Also, re-learn that the values in a theory are theoretical and are not homologous to the underlying physical implementation; it is of no more interest that the action A is present in N physically independent systems than that the action A is a real number while the hardware uses floating-point binary.
Philosophers have a tendency to pick some random minor implementation detail and find a philosophical problem in it. For example, the world may be deterministic, a minor implementation detail, and the philosophers ask "where's my free will?" It's exactly the same with decision theories. The same theoretical action variable can represent several different physical objects: it could be two robot arms wired in parallel, or two controllers with identical state each wired to its own robot arm. Everything works the same, but for the latter case the philosophers ask "where's my causality?" Never mind that physics is reversible at the fundamental level, and the notion of causality is just a cognitive tool for everyone else.
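The two-wirings point can be sketched concretely. In this toy model (all names here are my own illustrative inventions, not from any real robotics library), a single deterministic controller driving two arms in parallel and two identical controller copies each driving one arm produce exactly the same behavior, because both wirings realize the same theoretical action variable:

```python
# Toy sketch (hypothetical names): one theoretical action variable,
# two physical wirings that behave identically.

def controller(sensor_reading):
    """Deterministic policy: maps a sensor reading to an action A."""
    return 2 * sensor_reading + 1

def one_controller_two_arms(reading):
    # Wiring 1: a single controller drives two arms in parallel.
    a = controller(reading)
    return (a, a)  # both arms receive the same action

def two_identical_controllers(reading):
    # Wiring 2: two controllers with identical state, one per arm.
    a1 = controller(reading)  # copy 1
    a2 = controller(reading)  # copy 2
    return (a1, a2)

# Both implementations realize the same abstract decision.
assert one_controller_two_arms(3) == two_identical_controllers(3)
```

At the level of the theory there is only one action variable; how many physical systems instantiate it is an implementation detail.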
I'm not sure if you're aware that my interest in these problems is mostly philosophical to begin with. For example, I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all, but was thinking about how humans would deal with probabilities once mind copying becomes possible. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?
I noticed that I recently wrote several comments of the form "UDT can be seen as a step towards solving X," and thought it might be a good idea to list in one place all of the problems that helped motivate UDT1 (not including problems that came up subsequent to that post).