Look at how robot controllers are implemented, look at real control theories, and observe that treating copies as extra servos is a trivial change, and it works. It also works when the copies are not exact and can distinguish themselves from each other. Also, re-learn that the values in a theory are theoretical and are not homologous to the underlying physical implementation; that the action A is present in N physically independent systems is of no more interest than that the action A is a real number while the hardware uses binary floating point.
Philosophers have a tendency to pick some random minor implementation detail and manufacture some sort of philosophical problem out of it. For example, the world may be deterministic, a minor implementation detail, and the philosophers go "where's my free will?". It's exactly the same with decision theories. The same theoretical action variable can represent several different physical objects: it could be 2 robot arms wired in parallel, or two controllers with identical state each wired to its own arm. Everything works the same, but for the latter the philosophers go "where's my causality?". Never mind that physics is reversible at the fundamental level and that, for everyone else, the notion of causality is just a cognitive tool.
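The two wirings above can be sketched minimally. This is an illustrative toy, not anyone's actual controller code; all function names (`controller`, `drive_parallel`, `drive_copies`) and the policy inside are hypothetical:

```python
# The action variable A is one theoretical value; how many physical
# actuators or controller copies instantiate it is an implementation detail.

def controller(sensor_state):
    """Compute the single theoretical action A from the sensed state."""
    return 2.0 * sensor_state - 1.0  # some fixed deterministic policy

def drive_parallel(sensor_state, n_arms):
    """One controller, n arms wired in parallel to its output."""
    a = controller(sensor_state)
    return [a for _ in range(n_arms)]

def drive_copies(sensor_state, n_arms):
    """n identical controller copies, each wired to its own arm."""
    return [controller(sensor_state) for _ in range(n_arms)]

# Both wirings produce identical arm commands for every input:
assert drive_parallel(0.3, 2) == drive_copies(0.3, 2)
```

The point of the sketch: nothing in the theory's action variable changes between the two configurations, which is why treating copies as extra servos is a trivial change.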
I'm not sure if you're aware that my interest in these problems is mostly philosophical to begin with. For example I wrote the post that is the first link in my list in 1997, when I had no interest in AI at all, but was thinking about how humans would deal with probabilities when mind copying becomes possible in the future. Do you object to philosophers trying to solve philosophical problems in general, or just to AI builders making use of philosophical solutions or thinking like philosophers?
I noticed that I recently wrote several comments of the form "UDT can be seen as a step towards solving X", and thought it might be a good idea to list in one place all of the problems that helped motivate UDT1 (not including problems that came up subsequent to that post).