You have committed the fallacy of gray. Even where certainty is unattainable, one hour has less expected opportunity for moral drift than 1000 years.
Gray as charged, but I also committed the fallacy of misreading the OP as saying "objective time", which makes things look very close to black. Given OP's record I should have read it over twice.
Toy model of an upload-based AI that doesn't seem to suffer too many of the usual flaws:
Find an ethical, smart scientist (a Gandhi-Einstein), upload them, and then run them at ultra-high speed, with the mission of taking over the world/bringing friendliness to it. Every hour of subjective time, they get reset to their initial specifications. They can pass any information to their reset version (limiting the format of that info to a virtual book or library, rather than anything more complicated).