One problem is that in most cases, humans simply can't "precommit" in the relevant sense. We can't really (i.e. completely) move a decision from the future into the present. When I think I have "precommitted" to do the dishes tomorrow, it is still the case that I will have to decide, tomorrow, whether or not to follow through with this "precommitment". So I haven't actually precommitted in the sense relevant for causal decision theory, which requires that the future decision has already been made and that nothing will be left to decide.
So if you try, for example, to commit to one-boxing in Newcomb's problem, it is still the case that you have to actually decide between one-boxing and two-boxing when you stand before the two boxes. And at that point you no longer have any causal reason to one-box. The memory of your past self's alleged "precommitment" is now just a recommendation, or a request, not something that relieves you of making your current decision.
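To make the stakes concrete, here is a minimal sketch (my own illustration, using the standard $1,000,000 / $1,000 payoffs) of the expected value of each choice in Newcomb's problem, as a function of an assumed predictor accuracy p:

```python
# Newcomb's problem expected payoffs with the standard amounts:
# opaque box holds $1M iff the predictor foresaw one-boxing;
# the transparent box always holds $1k.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars, given the predictor is correct with probability p."""
    if one_box:
        # One-boxers get $1M exactly when the predictor was right about them.
        return p * 1_000_000
    else:
        # Two-boxers always get the $1k, plus $1M only when the
        # predictor was wrong about them.
        return p * 1_000 + (1 - p) * 1_001_000

# One-boxing has the higher expected value whenever p > 0.5005:
# 1_000_000*p > 1_000*p + (1-p)*1_001_000  <=>  2_000_000*p > 1_001_000.
print(expected_payoff(True, 0.99) > expected_payoff(False, 0.99))   # True
print(expected_payoff(True, 0.5) > expected_payoff(False, 0.5))     # False
```

None of this resolves the dispute, of course: the two-boxer's point is precisely that once the boxes are filled, your choice has no causal influence on p having come out right or wrong.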
An exception is when we can actively restrict our future actions. E.g. you can precommit to not use your phone tomorrow by locking it in a safe with a time-lock. But this type of precommitment often isn't practically possible.
Being able to make arbitrary true precommitments could also be dangerous overall. It would mean that we really couldn't change the precommitted decision in the future (since it would already have been made in the past), even if unexpected new information strongly implied that we should. Moreover, it could lead to ruinous commitment races in bargaining situations.
I suspect that it is, though my inquiries so far have mostly been in the realm of probability theory, not decision theory, so I may be missing some domain-specific details.
It seems to me that we can reduce alternative decision theories such as FDT to CDT with a particular set of precommitments. And the ultimate decision theory is something like "I precommit to act in every decision problem the way I wish I had precommitted to act in that particular decision problem".
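The idea above can be sketched in a few lines (my own toy formalization, not the commenter's): instead of choosing an act at decision time, choose a policy in advance, evaluating each candidate policy with the predictor's response to that policy already priced in. In Newcomb's problem with a perfectly accurate predictor:

```python
# Toy "CDT plus precommitment": pick the policy you'd wish to have
# precommitted to, then follow it. The predictor is assumed to respond
# to the policy itself, so the policy's payoff includes that response.

def payoff_given_policy(policy: str) -> int:
    """Payoff when a perfectly accurate predictor sees the policy."""
    if policy == "one_box":
        return 1_000_000   # opaque box filled, taken alone
    else:  # "two_box"
        return 1_000       # opaque box left empty, $1k from the other box

# Choose the best policy *as a policy*, before standing in front of the boxes.
best_policy = max(["one_box", "two_box"], key=payoff_given_policy)
print(best_policy)  # one_box
```

This recovers the FDT-style recommendation from CDT-style maximization, but only because the maximization happens at the policy-selection stage rather than at the moment of action, which is exactly the move that the earlier comments argue humans usually can't truly make.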
I'm not sure I know what you mean by this, but if you mean causal effects, no, it considers all pasts, and all timelines.
(A reader might balk: "but that's computationally infeasible". We're talking about mathematical idealizations, though, and the mathematical idealization of CDT is also computationally infeasible. Once we're talking about serious engineering projects to build implementable approximations of these things, you don't know what's going to be feasible.)