Let's assume computationalism and the feasibility of brain scanning and mind upload. And let's suppose one is a person with a large compute budget.
In this post I'll attempt to answer these questions: How should one spend one's compute budget? How many uploads of oneself should one create? Should one terminate one's biological self? What will one's uploaded existence be like?
First, let's establish the correct frame in which to explore the questions relating to the act of uploading. One is considering creating copies of oneself. So, what happens, subjectively, when one spins up a new copy of oneself? The copy is computationally identical to the original, and consciousness is computation, so each is equally oneself. But one is not both. This means that when one creates a copy, one can treat it as a gamble: there's a 50% chance of finding oneself in each of the two continuations.
What matters to one, then, is the average quality of one's continuations. One gains nothing from creating many great continuations rather than a single great one, because one can never experience more than one of them.
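A minimal way to formalise this averaging intuition (a sketch, assuming one assigns equal subjective probability to waking up as each of N continuations with qualities u_1, ..., u_N):

```latex
\[
  \mathbb{E}[u] \;=\; \frac{1}{N}\sum_{i=1}^{N} u_i \;\le\; \max_i u_i
\]
% Equality holds only when every continuation is as good as the best one,
% so the subjective average is maximised by devoting all resources to a
% single best continuation rather than to many merely-good ones.
```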
Therefore, one should spend all of one's resources on creating the single best possible continuation. And one should terminate one's biological self: the real world is strictly worse than the ASI-curated personal utopia one's upload will experience, so terminating the biological self raises the average quality of one's continuations. To make this point clear: if one did not terminate the biological self, one would subjectively have a 50% chance of ending up in the less favourable biological continuation.
But we can figure out more about this best continuation. One wants to make sure one is spending one's compute selfishly (to whatever extent one is altruistic, one should allocate a proportionate amount of one's compute budget to ASI). But value drift is a danger here: one's life expectancy is going to be absurdly high, maybe undecillions of years of personal utopia, so by default one's beliefs will evolve, one's pre-ASI memories will be forgotten... every aspect of oneself will change.
Before even a small fraction of one's life has played out, one's copy will bear no relation to oneself. To spend one's compute on this person, effectively a stranger, is just altruism. One would be better off donating the compute to ASI.
So, one should have the ASI give one a more capacious memory and a more rigid personality, and place one in a world more deeply rooted in one's pre-ASI history than would strictly maximise value. This way, one's identity can be retained.
What other questions about one's post-ASI life can we answer? What will the utopia one's upload inhabits be like? Well, let's again establish a frame from which to explore the question. The world will be made by an advanced ASI to be the best possible world for one.
The ASI curator of one's world won't want one's experience to be held back by moral consideration for the other inhabitants of one's world. If it were obligated to create a perfect world in which everyone was a standard human moral patient, it would run into the usual paradoxes of utopia. I am sure that ASI could resolve these paradoxes, but the result would not be the best possible world for one. So, the other inhabitants of one's world will be non-sentient.
However, awareness of this fact might cheapen one's experience, so the ASI curator would ensure that one remains unaware of the non-sentience of the other inhabitants of one's world.
To get some indication of what this world will be like, one can imagine a superlative version of one's pre-ASI life. The stakes of the world will be higher, the emotional connections one makes will be stronger, and one will accomplish greater things. One will experience love, passion, and the rest much more fully. So I imagine one's world as a place of vast galactic empires, inexhaustible lore, and immense beauty, and one's life as full of grand struggle and triumph.
Suit yourself, but I happen to want to create many great continuations. I enjoy hearing about other people's happiness. I enjoy it more the better I understand them. I understand myself pretty well.
But I don't want to be greedy. I'm not sure many forks of each person are better than making more new people.
Let me also mention that it's probably possible to merge forks. Simply averaging the weight changes in your simulated cortex and hippocampus should approximately work to share memories across two forks. How far the forks can diverge before you start to get significant losses is an empirical matter. Clever modifications to the merge algorithm, and additions to my virtual brain, should let us extend that substantially; sharing memories across people is possible in broad form with really good translation software, so I expect we'll do that, too.
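To make the shape of that naive merge concrete, here's a toy sketch under strong assumptions: each fork's brain region is represented as a flat weight vector diverging from a common snapshot, and the function and representation are hypothetical illustrations, not any real upload API.

```python
import numpy as np

def merge_forks(base_weights: np.ndarray,
                fork_a: np.ndarray,
                fork_b: np.ndarray) -> np.ndarray:
    """Naively merge two forks of a simulated brain region by averaging
    the weight changes each fork accumulated since the common snapshot."""
    delta_a = fork_a - base_weights  # what fork A learned since the split
    delta_b = fork_b - base_weights  # what fork B learned since the split
    # Re-apply the averaged deltas to the shared baseline. This only
    # approximates memory sharing: interference between the two deltas
    # grows as the forks diverge, which is the empirical question above.
    return base_weights + 0.5 * (delta_a + delta_b)
```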
So in sum, life with aligned ASI would be incredibly awesome. It's really hard to imagine or predict exactly how it will unfold, because we'll have better ideas as we go.
WRT "cheapening" the experience, remember that we'll be able to twist the knobs in our brain for boredom and excitement if we want. I imagine some would want to do that more than others. Grand triumph and struggle will be available for simulated competitive/cooperative challenges; sometimes we'll know we're in a simulation and sometimes we'll block those memories to make it temporarily seem more real and important.
BUT this is all planning the victory party before fighting the war. Let's figure out how we can maximize the odds of getting aligned ASI by working out the complex challenges of getting there on both technical and societal levels.