Superintelligence can solve all problems, even if it won't necessarily do so. But not quite all. In the modern world, you exist within physics, and everything you do obeys its laws. Yet it's still you who decides what you do. If there is a simulation of your behavior, that doesn't change where the reason for that behavior is attributed. When the fabric of reality is woven from superintelligent will rather than from physical law alone, the reason for your own decisions is still you, and it's still not possible for something that is not you to make decisions that are yours.
Potential for manipulation or physical destruction doesn't distinguish the role of superintelligence from that of the physical world. To make decisions in the world, you first need to exist, in an environment where you are able to function. Being overwritten or changed into something else, whether by brute force, subtle manipulation, or social influence, is a way of preventing that premise from obtaining.
A solved world is post-instrumental: anything done for a purpose could be done by AI to reach that purpose more effectively. It could even be more effective than you at figuring out what your decisions are going to be! This is similar to what an optimizing compiler does: the behavior of the machine code is still determined by the meaning of the source code, even if that source code is erased and only exists conceptually. With humans, the physical implementation is similarly not straightforward; all the proteins and synaptic vesicles are more akin to machine code than to a conceptually reasonable rendering of a person. So it's fair to say that we are already not physically present in the world; the things that are physically present are better described as kludgy and imperfect simulators.
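To make the compiler analogy concrete, here's a minimal sketch using CPython's bytecode compiler as a stand-in for an optimizing compiler (the exact opcodes it emits vary by version; the function name is illustrative). The computation the source describes never runs, yet the emitted code's behavior is fixed entirely by the source's meaning:

```python
import dis

def payout():
    # Source meaning: "raise 2 to the 10th power".
    return 2 ** 10

# CPython constant-folds the expression at compile time: the
# bytecode just loads 1024, and the exponentiation described by
# the source never happens at runtime.
dis.dis(payout)

# The erased computation still determines the behavior:
assert payout() == 1024
```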
In this framing, superintelligence is capable of being a better simulator of you, but it makes no headway toward deciding your behavior. A puzzle associated with determinism points out that a superintelligence could show you a verifiable formal proof of what you're going to do. But that doesn't really work: deciding to ignore the proof and do whatever you like makes the situation you observed go away as counterfactual, while in actuality you are still determining your own actions. Only if you have the property of deferring to external proofs of what you'll do does it become the case that an external proof claiming something about your behavior is the reason for that behavior. But that's no way to live. Instead, it's you who controls the course of proofs about your behavior, not those who write the proofs.
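Here is a minimal sketch of that move, with illustrative names not from the original: an agent whose policy is to falsify any proof presented about its own behavior. A sound prover can never actually show such a proof to this agent, because the act of showing it would make its conclusion false, which is exactly the sense in which the observed situation "goes away as counterfactual":

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proof:
    """A purported formal proof that the agent will take
    `predicted_action` (a stand-in for a real proof object)."""
    predicted_action: int  # 0 or 1

def contrarian_agent(proof: Optional[Proof]) -> int:
    """Policy: falsify any proof shown about one's own behavior."""
    if proof is not None:
        # Do the opposite of whatever the proof claims.
        return 1 - proof.predicted_action
    # Absent a proof, decide for one's own reasons.
    return 0

# No claim a prover could present is ever borne out, so no sound
# prover can exhibit such a proof to this agent in the first place:
for claimed in (0, 1):
    assert contrarian_agent(Proof(claimed)) != claimed
```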
In case this helps you write that post, here are some things I am still confused about.
What, specifically, is the "you" that is no longer present in the scenario of strong manipulation?
In the "mindless parroting" scenario, what happens if the ASI magically disappeared? Does/can the "you" reappear? Under what circumstances?
Why is this not a fully general argument against other humans helping you make decisions? For example, if someone decides to mindlessly parrot everything a cult leader tells them, I agree there's a sense in which they are no longer present. But the choice to obey is still theirs, and can be changed, and they can reappear if they change that one choice.
OTOH, if this is a fully general argument about anyone helping anyone else make decisions, that seems like a major (and underspecified) redefinition of both "help" and "decision." Then it seems premature to jump to focusing on ASI as a special case, and I'm also not sure why I'm supposed to care about these definitional changes?
"So it's fair to say that we are already not physically present in the world, the things that are physically present are better described as kludgy and imperfect simulators" - I mean, yes, I see your point, but also, I am present implicitly in the structure of the physically-existing things. There is some set of arrangements-of-matter that I'd consider me, and others I would not. I don't know if the set's boundaries are quantitative or binary or what, but each member either encodes me, or not.
I think, under more conventional definitions of "help" and "decision," that telling me what to do, or showing me what I'm going to do, is kinda beside the point. A superintelligence that wanted to help me choose the best spouse might very well do something completely different, like hack someone's Waymo to bump my car right when we're both looking for someone new and in the right headspace to have a meet-cute. I think that's mostly like a fancier version of a friend trying to set people up by just sitting them next to each other at a dinner party, which I would definitely classify as helping (if done skillfully). Real superintelligences help with butterflies.