Superintelligence can solve all problems, even if it won't necessarily do so. But not quite. In the modern world, you exist within physics, and everything you do obeys its laws. Yet it's still you who decides what you do. If there is a simulation of your behavior, that doesn't change to whom the reason for that behavior is attributed. When the fabric of reality is woven from superintelligent will rather than only physical law, the reason for your own decisions is still you, and it's still not possible for something that is not you to make decisions that are yours.
The potential for manipulation or physical destruction doesn't distinguish the role of superintelligence from that of the physical world. To make decisions in the world, you first need to exist, in an environment where you are able to function. Being overwritten or changed into something else, whether by brute force, subtle manipulation, or social influence, is a way of preventing this premise from obtaining.
A solved world is post-instrumental: anything done for a purpose could be done by AI to reach that purpose more effectively. It could even be more effective than you at figuring out what your decisions are going to be! This is similar to what an optimizing compiler does: the behavior of the machine code is still determined by the meaning of the source code, even if that source code is erased and exists only conceptually. With humans, the physical implementation is similarly not straightforward; all the proteins and synaptic vesicles are more akin to machine code than to a conceptually reasonable rendering of a person. So it's fair to say that we are already not physically present in the world; the things that are physically present are better described as kludgy and imperfect simulators.
In this framing, superintelligence is capable of being a better simulator of you, but it gains no headway toward deciding your behavior. A puzzle associated with determinism points out that a superintelligence could show you a verifiable formal proof of what you're going to do. But that doesn't really work: deciding to ignore the proof and do something else makes the situation you observed counterfactual, while in actuality you are still determining your own actions. Only if you do have the property of deferring to external proofs of what you'll do does it become the case that an external proof claiming something about your behavior is the reason for that behavior. But that's no way to live. Instead, it's you who controls the course of proofs about your behavior, not those who are writing the proofs.
Ability to predict how the outcome depends on the inputs + ability to compute the inverse of the prediction formula + ability to select certain inputs => ability to determine the output (within the limits of what influencing the inputs can accomplish). The rest is just an ontological difference over what language to use to describe this mechanism. I know that if I place a kettle on a gas stove and turn on the flame, I will get boiling water, and we colloquially describe this as me boiling the water. I do not know all the intricacies of the processes inside the water, and I am not directly controlling the individual heat exchange subprocesses inside the kettle, but it would be silly to argue that I am not controlling the outcome of the water getting boiled.
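The predict-invert-select mechanism above can be sketched as a toy program. Everything here is an illustrative assumption, not anything from the text: the "world" is a made-up linear formula, and the function names (`predict`, `invert`, `control`) are hypothetical. The point is only that knowing the forward model and its inverse, plus the freedom to pick the input, amounts to determining the output, without modeling any internal subprocess.

```python
# Toy sketch: prediction + inversion + input selection => control of the output.
# The linear world model here is an arbitrary assumption for illustration.

def predict(x: float) -> float:
    """Forward model: how the outcome depends on the input (assumed known)."""
    return 3.0 * x + 2.0

def invert(target: float) -> float:
    """Inverse of the prediction formula: which input yields a given outcome."""
    return (target - 2.0) / 3.0

def control(target: float) -> float:
    """Select the input the inverse recommends, then let the world run."""
    chosen_input = invert(target)
    return predict(chosen_input)  # the world, not us, computes the outcome

print(control(20.0))  # the selected input drives the outcome to the target
```

Like the kettle, the controller never touches the internals of `predict`; it only chooses the input, yet it is fair to say it determines the outcome.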