Probably not, regardless of how our relationship with physics broadens and deepens, because of thermodynamics, which applies multiversally, at the metaphysical level.
We would have to build a perfect frictionless reversible computer at absolute zero, where we could live forever in an eternal beneficent cycle (I'm not a physicist, but as far as I'm aware, such a device isn't conceivable under our current laws of physics), while somehow permanently sealing away the entropy that came into existence before us, the entropy we've left in our wake, and the entropy we generated in the course of building the computer. I'm fairly sure there can be no certain way to do that. It's conceivable to me that for many possible laws of physics, once we have precise enough instruments, there might be some sealing method that works for most initial configurations. But probably not.
But there could be other possible solutions, like creating a wormhole to another universe and thus escaping the heat death of this one. Surely there could be many such ideas, and an AI could spend billions of years testing them.
The space of possible futures is a lot bigger than you think (and bigger than you CAN think). Here are a few possibilities (not representative of any probability distribution, because the space is bigger than I can think too). In my limited thinking, I tend to favor a mix of the first and last ones:
The implications any of these have for goals (expansion, survival for additional time periods, creation of aligned agents that are better or more far-reaching than you, improvement of the local state) are no different from the question of what your personal goals are as a human. Are you seeking immortality, seeking to help your community, seeking to create a better human replacement, seeking to create a better AI replacement, etc.? Both you and the theoretical AI can assign probability*effect weights to all the options and choose accordingly.
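The probability*effect weighting described above can be sketched as a simple expected-value comparison. All option names and numbers below are made up for illustration; nothing here comes from the discussion itself:

```python
# Illustrative sketch: choose among goals by probability * effect
# (expected value). The options and numbers are invented for the example.
options = {
    "personal immortality":     {"p_success": 1e-6, "value": 1e12},
    "help community":           {"p_success": 0.5,  "value": 1e3},
    "create aligned successor": {"p_success": 0.01, "value": 1e9},
}

def expected_value(option):
    """Weight each option by its chance of success times its effect."""
    return option["p_success"] * option["value"]

best = max(options, key=lambda name: expected_value(options[name]))
for name, opt in options.items():
    print(f"{name}: EV = {expected_value(opt):.3g}")
print("choose:", best)
```

Note how a tiny success probability can still dominate if the payoff is large enough, which is exactly the tension the surrounding comments are arguing about.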
I agree with your claims about AI, but not with your claims about what I think or CAN think :) This was probably rhetorical on your side, but it may look offensive from the other side.
The question presupposes that by continuing to live you fulfill your values better. It might be that after a couple of millennia, additional millennia don't really add that much benefit.
I am presuming that if immortality is possible, then its value is transfinite, and thus any finite chance of it (infinitesimals might still lose) overrides all other considerations.
In a way, a translation to a more human-scale problem is: "Are there acts you should take even if taking them would cost your life, regardless of how well you think you could use your future life?" The analogy is imperfect in that human lives are assumed to be finite (note that if you genuinely think there is a chance a particular human could be immortal, it is just the original question again). This can lead to a stance where you estimate what a human life in good conditions could achieve, without regard to your particular condition, and if your particular conditions allow an even better option, you take it. In the Middle Ages, where death was very relevantly looming anyway, this could lead to things like risking your life for relatively minor advantages: the relevant question then was "what can I achieve before I cause my own death?" Since then, trying to die of old age (i.e., not actively causing your own death) has become a relevant option that breaks the old framing of the question. But if you take seriously the imperative of shooting for old age, it means that a street where you estimate a 1% risk of ending up in a mugging, with a 1% chance of the mugging ending with you getting shot, is ruled out as a way to move around.
By analogy, as long as there is heat there will be computational uncertainty, which means there will always be ambient risk of things going wrong. That is, you might have high certainty of functioning in some way indefinitely, but functioning in a sane way is far less certain. And all options for action and thinking involve energy use, and thus increase the risk of insanity.
It is easy to think of that as a "utility function", but that doesn't mean utility functions are always zero. So we could have utility functions that make people behave like perfect utility maximizers.
The question around scope insensitivity might play out (to us) as something like an agent…
...The "end of the universe" can happen in some ways. One of them is the "big freeze" - the galaxies may go far from each other, the starts may die, and so on. In that way, there is no reason why the AI can't "live forever" - it might be a big computer float in the space, far away from anything, and it will be close system so the energy won't run away.
To make useful computations, the AI still needs a temperature difference, and it will lose energy through cooling. However, some think that in a very cold universe computations will be much more efficient (up to 10^30 times): https://arxiv.org/abs/1705.03394
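The efficiency gain from computing in a colder universe can be illustrated with Landauer's bound, E = kT ln 2 per erased bit. The 10^30 figure above is from the linked paper; the temperatures below are illustrative assumptions chosen to reproduce a gain of that order, not values taken from the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_energy_per_bit(temperature_kelvin):
    """Minimum energy to erase one bit at temperature T (Landauer's bound)."""
    return k_B * temperature_kelvin * math.log(2)

# Today's cosmic microwave background vs. a hypothetical far-future
# background temperature (illustrative value, not from the paper).
T_now = 2.7        # K
T_future = 2.7e-30 # K

gain = landauer_energy_per_bit(T_now) / landauer_energy_per_bit(T_future)
print(f"energy per bit now:     {landauer_energy_per_bit(T_now):.3g} J")
print(f"efficiency gain factor: {gain:.3g}")  # ~1e30 with these numbers
```

The gain is just the ratio of the temperatures, which is why waiting for a colder universe buys so many more bit operations per joule.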
However, even that is not an "immortal AI".
Useful work consumes negentropy. A closed system can only do so much useful work. (However, reversible computations may not require work.)
There are two points of view. Either such an AI will be immortal: it will find ways to overcome the possible end of the universe and will perform an infinite amount of computation. For example, Tipler's Omega Point is immortal.
Or the superintelligent AI will die in the end, a few billion years from now, and thus will perform only a finite amount of computation (this idea is behind Bostrom's "astronomical waste").
The difference has important consequences for the final goals of the AI and for our utilitarian calculations. In the first case (the possibility of the AI's immortality), the main instrumental goal of the AI is to find ways to survive the end of the universe.
In the second case, the goal of AI is to create as much utility as possible before it dies.