As argued here, debates about probability can profitably be replaced with decision problems. This often dissolves the debate: there is far more agreement about what decision Sleeping Beauty should take than about what probabilities she should use.
The concept of subjective anticipation, or of subjective probabilities, which causes such difficulty here, can, I argue, be similarly replaced by a simple decision problem.
If you are going to be copied, uncopied, merged, killed, propagated through quantum branches, or have your brain tasered with amnesia pills while your parents are busy flipping coins before deciding to reproduce, and are hence unsure whether you should subjectively anticipate being you at some future point, the relevant question is not whether you feel vaguely connected to that putative future you in some ethereal sense.
Instead, the question should be something akin to: how many chocolate bars would your putative future self have to be offered for you to forgo one now? What is the tradeoff between your utilities?
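To make the tradeoff concrete, here is a minimal sketch, assuming purely for illustration that utility is linear in chocolate bars. If you are indifferent between one bar now and your future self getting N bars, the implicit weight w you put on his utility satisfies

$$u_{\text{now}}(1) = w \cdot u_{\text{future}}(N), \qquad u(x) = x \;\Rightarrow\; w = 1/N.$$

Demanding three bars prices that future self at a third of yourself; demanding just one bar amounts to fully anticipating being him.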
Now, altruism is of course a problem for this approach: you might just be very generous towards copy #17 down the hallway (he's a thoroughly decent chap and all that) rather than anticipating being him. But humans can generally distinguish between selfish and altruistic decisions, and the setup can be tweaked to maximize the urge to win oneself rather than to let others win. For me, a competitive game with chocolate as the reward would do the trick...
Unlike in the Sleeping Beauty problem, this rephrasing does not instantly solve the problems, but it does locate them: subjective anticipation is encoded in the utility function. Indeed, I'd argue that subjective anticipation is the same problem as indexical utility, with a temporal twist thrown in.
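One way to picture this, with notation that is illustrative rather than anything standard: write the selfish part of the utility function as a weighted sum over person-moments,

$$U = \sum_{i,t} w_{i,t} \, u_{i,t},$$

where i ranges over copies and t over times. Fixing t and asking how the weights spread over the copies i is the indexical problem; letting them vary with t is the temporal twist, and "how much I subjectively anticipate being copy i at time t" is then just another name for w_{i,t}.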
With a UDT utility function, you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function. What subjective anticipation is in that context is anyone's guess, but I'd use something like the total measure of the possible worlds that you expect could be controlled by you-that-receives-certain-observations. This quantity can be used to estimate the importance of making optimized decisions from those control sites, as compared to the control sites resulting from receiving alternative observations, which matters for scheduling computational resources in advance to plan for alternative possibilities and coordinate later.
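A rough formalization of that quantity (the symbols here are illustrative, not standard UDT notation): if \mu is the measure over possible worlds and C(o) is the set of worlds that you-having-received-observations-o can control, then the anticipation attached to o would be something like

$$A(o) = \sum_{w \in C(o)} \mu(w),$$

and comparing A(o) across alternative observation sequences tells you how much planning effort to allocate to each in advance.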
This sense of subjective anticipation also has nothing to do with the UDT utility function, although it refers to more than the probability distribution: it also needs to establish which you-with-observations can control which possible worlds.
No: in the formalism of Wei's original post, it's all one giant object, which is not necessarily decomposable in the way you suggest. But this is probably splitting hairs.
Tentatively agree with your last paragraph, but need to understand more.