One thing that makes this more complicated is that you seem to be talking about omnipresent simulated clones. But in such a scenario, a large fraction of my utility would concern the clones. So any task that requires too much boring manual detail work is likely to just not get done. Or are they hypothetical clones in some way? Is this about what the clones could do, not about what they would do?
(I was musing about what it means for an incoherent lump of meat to "have preferences," and thought it might be illuminating to consider what I'd do if I were, approximately, God. It, uh, hasn't been illuminating yet, but it's been entertaining and still seems at least potentially fruitful.)
Problem statement
You suddenly become omnipotent! Except you can only do things that you understand in sufficient detail that you could accomplish them by micromanaging all the atoms involved. And, what the heck, assume you have effortless access to infinite computational power.
What do you do?
For concreteness, here are some interventions you might try:
This being LessWrong, you'll probably quickly hit on some way to use ten billion sped-up simulated geniuses to speedrun AI alignment, build a friendly superintelligence, and delegate your Godlike power to it. But the purpose of this thought experiment is to elucidate your preferences, and that strategy -- though very reasonable! -- dodges the question.
What I'd do
Object level
Just, like, the obvious. Slay Famine, Pestilence, and War. Stop accidents from happening. Scrap the solar system for parts and give everybody ultra-customizable space habitats connected by teleportation booths. (All this can be micromanaged by zillions of zillions of simulated clones of me.)
Let people opt out, obviously, in whole or in part.
There are still, to be clear, important wishes I can't grant, such as "make me smarter" or "make my memory not degrade as I age" or "help me and my partner solve this relationship snarl."
Meta level