I disagree with "not at all", to the extent that the Matrix probably has much less computing power than the universe it runs on. Plus, it could have exploitable bugs.
This is not a question worth us mere mortals asking, but a wannabe superintelligence should probably think about it for at least a nanosecond.
Hell, it's definitely worth us thinking about it for at least half a second. Probably a lot more than that. It could have huge implications if we discovered evidence of any kind of powerful agent affecting the world, Matrix-esque or not. Maybe we could get into heaven by praying to it, or maybe it would reward us based on the number of paperclips we created per day. Maybe it wouldn't care about us, maybe it would actively want to cause us pain. Maybe we could use it, maybe it poses an existential risk. All sorts of possible scenarios there, and the only way to tell what actions are appropriate is to examine... the... evidence... oh right. There is none, because in reality we don't live in the Matrix and there isn't any superintelligence out there in our universe.

So we file away the thought, with a note that if we ever do run into evidence of such a thing (improbable events with no apparent likely cause) we should pull it back out and check. But that's not the same as thinking about it. In reality, we don't live in that world, and to the extent that is true, the answer to "what do we do about it" is "exactly what we've always done."
Matrixy scenarios have been discussed here and elsewhere (including xkcd and smbc) quite a bit, actually. Given that a superintelligence would be incomprehensible to mere mortals, it's kind of pointless to discuss this eventuality seriously. Plus you risk joining the dark world of anthropics. However, we can understand a similar situation from the other end: the life of animals in the shadow of humans. Some of these humans are paperclip maximizers (displacing animals to make room for crops), and others have an animal-CEV value system (various animal rights groups). This has also been discussed a lot, of course. Unfortunately, the only benefit of the latter discussion that I can see is as a low-inferential-distance analogy in the fight to raise awareness of the potential dangers to humans of "living in the shadow of superintelligence."
I observe that even animal rights groups are somewhat unlikely to assert the right of lions to eat humans, which an anthropomorphized lion would likely consider a fundamental value. So there's a limit to our Friendliness. :)
They're also somewhat unlikely to assert the right of the gazelles to not get eaten, which the gazelles would definitely consider a fundamental value. The occasional one does, but it's generally pretty far from their CEV.
the right of lions to eat humans
I'm pretty sure that lions prefer gazelles to humans, and generally don't care too much as long as there is enough food. Maybe a decent zoo is what the lion CEV amounts to :)
What are the SI's goals? We'd expect to see some conditions that are (almost) always achieved -- reliably achieving its goals is what makes it an SI. Maybe: the laws of physics.
But that's not a human-relevant goal except in the broadest sense that we must make our way through a physics-constrained world. It would be far more interesting if there were an SI whose goals closely related to humans' goals. Is there?
I like Kurzweil's comment on the simulation hypothesis, which was something along the lines of (not an exact quote): "Well, if we are living in a simulation, then probably our best bet at not getting turned off is just to not be boring."
We definitely do live in a world where there are representatives all along the intelligence spectrum. We also live in a world where the difference between the mental life of an infant and that of an adult is so great that we adults can't remember or convey what it was like to be infants. Then there are humans and the other primates. Plenty of intelligence-disparity relationships are already in play. If there is a level up from us, it will have the same wonderful and terrible relation to us that we already have with each other.
I find it highly unlikely that a superintelligence would care to create a medieval simulation with tons of suffering.
Although Less Wrong regularly discusses the possibility of superintelligences with the power to transform the universe in the service of some value system - whether that value system is paperclip maximization or some elusive extrapolation of human values - it seems it has never systematically discussed the possibility that we are already within the domain of some superintelligence, and what that would imply. So how about it? What are the possibilities, what are the probabilities, and how should they affect our choices?