I don't see how it would be possible to feel emotions, attachments in particular, toward anything without those also translating into emotions regarding death.
Any scheme that prohibits fear in relation to death would have to do one of two things. Either it significantly disrupts the intricate web of other emotions, modifying e.g. "I originally wanted to be with my loved ones, but I no longer want to be with them in the future, since I may be dead by then" (or else you become fearful of never seeing your loved ones again because you died, which gets conspicuously close to "fear of death"), or it ignores death altogether, so as to avoid making the updates that are contingent on death. Either way, it comes close to the statement you attribute to me.
Think of it like a Bayesian belief propagation graph. If you propagate the change, the downstream changes would be huge. The only way to avoid them is to cut the node out while pretending it's still there, like bleeping out a name whenever it comes up. But that leads to a failure mode whenever a decision routes through that node, eventually accidental suicide (it's a rather important node in your day-to-day life).
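To make the failure mode concrete, here is a minimal toy sketch (my own illustration, not anything from the thread): a two-node "belief graph" where the expected value of a risky action depends on a death node. Excising the node, i.e. pretending it isn't there, is modeled by forcing P(death) to zero before deciding. All numbers and function names are made up for illustration.

```python
def expected_value(action_payoff, p_death_given_action, value_of_staying_alive):
    """Expected value of an action once the death node is propagated."""
    p_survive = 1.0 - p_death_given_action
    return p_survive * (action_payoff + value_of_staying_alive)

def decide(p_death, ignore_death_node=False):
    """Choose between a safe and a risky action."""
    if ignore_death_node:
        p_death = 0.0  # cut the death node out of the graph before deciding
    safe = expected_value(action_payoff=1.0, p_death_given_action=0.0,
                          value_of_staying_alive=100.0)
    risky = expected_value(action_payoff=5.0, p_death_given_action=p_death,
                           value_of_staying_alive=100.0)
    return "risky" if risky > safe else "safe"

print(decide(p_death=0.1))                          # -> "safe"
print(decide(p_death=0.1, ignore_death_node=True))  # -> "risky": the accidental-suicide failure mode
```

The point of the toy model is just that an agent whose decisions route through the death node but whose updates never touch it will happily take mortal risks it would otherwise refuse.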
(As an aside, I remember arguments here on LW that AIXI would accidentally suicide, though I don't remember the details, unfortunately.)
Just wondering: suppose someone (say, at a meetup) offered you $100 to come up with a counterargument you would find convincing; would you be able to?