I don't think that's quite right: a sufficiently smart Friendly CDT agent could self-modify into a TDT (or higher decision theory) agent without compromising Friendliness (albeit with the ugly hack of remaining CDT with respect to consequences that happened causally before the change).
That sounds intriguing as well. Again, a reference to something written by someone who understands it better would help me make sense of it.
Maybe it would be helpful to you to think of self-modifications and alternative decision theories as unrestricted precommitment. If you had the ability to irrevocably precommit to following any decision rule in the future, which rule would you choose? Surely it wouldn't be pure CDT, because you can tractably identify situations where CDT loses.
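The classic situation where CDT loses is Newcomb's problem. Here is a minimal sketch of the expected-payoff arithmetic, with illustrative numbers I've assumed (a predictor of accuracy 0.99, a $1,000,000 opaque box filled only if one-boxing was predicted, and a transparent $1,000 box):

```python
def expected_payoff(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected winnings in Newcomb's problem (assumed payoffs)."""
    big, small = 1_000_000, 1_000
    if one_box:
        # The predictor correctly foresaw one-boxing with prob `accuracy`,
        # so the opaque box contains the million that often.
        return accuracy * big
    # Two-boxing: the opaque box is full only when the predictor erred,
    # but you always collect the transparent box.
    return (1 - accuracy) * big + small

# A CDT agent two-boxes, reasoning that the prediction is causally fixed;
# an agent that precommitted to one-boxing takes only the opaque box.
print(expected_payoff(one_box=True))   # precommitted one-boxer
print(expected_payoff(one_box=False))  # CDT two-boxer
```

For any predictor accuracy above roughly 50.05%, the one-boxer's expected payoff exceeds the two-boxer's, which is the sense in which CDT tractably loses here.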
A monthly thread for posting rationality-related quotes you've seen recently (or had stored in your quotesfile for ages).