By origin, I'm referring to the source of the need for morality, and it's clear that it's mostly about suffering. We don't like suffering and would rather not experience it, although we are prepared to put up with some (or even a lot) of it if that suffering leads to greater pleasure that outweighs it. We realised long ago that if we do a deal with the people around us to avoid causing each other suffering, we could all suffer less and have better lives - that's far better than spending our time hitting each other over the head with clubs and stealing the fruits of each other's labour. By doing this deal, we ended up with greater fruits from our work and removed most of the brutality from our lives. Morality is clearly primarily about the management of suffering.
You can't torture a rock, so there's no need for rules to protect it against people who might seek to harm it. The same applies to a computer, even one running AGI - if it lacks sentience and cannot suffer, it doesn't need rules to protect it from harm (other than rules to prevent its owner from suffering a loss if it were damaged, or to protect other people who would be harmed by the loss of the work the computer was carrying out). If we were able to make a sentient machine though, and if that sentient machine could suffer, it would have to be brought into the range of things that morality protects. We could make an unintelligent sentient machine - a calculator, say - and give it the ability to suffer, or we could make a machine with human-level intelligence and the same ability to suffer to the same degree as the far less intelligent calculator. Torturing each of them to generate the same amount of suffering would be equally wrong in both cases. It is not intelligence that creates the need for morality, but sentience and the degree of suffering that can be generated in it.
With people, suffering can perhaps be amplified beyond what occurs in other animals because there are many ways to suffer, and they can combine. When an animal is chased, brought down and killed by a predator, it most likely experiences fear, then pain. The pain may last a long time in some cases, such as when wolves eat a musk ox from the rear end while it's still alive, but the victim lacks any real understanding of what is happening to it. When a person is attacked and killed though, the suffering is amplified by the victim understanding the situation and knowing just how much they are losing, and the many people who care deeply about that victim will also suffer because of the loss, some of them deeply and for decades. This means that morality needs to give people greater protection, but when scores are put to the degree of suffering that pain and fear cause in an animal victim and in a human victim, those scores should be measured on the same scale - in that regard, the two sentiences are treated as equals.
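To make the "same scale" point concrete, here is a minimal sketch - entirely illustrative, with hypothetical component names and made-up numbers rather than anything proposed above - of scoring suffering in common units, where a human victim's score grows by adding extra terms such as comprehension and the grief of others, not by switching to a different scale:

```python
from dataclasses import dataclass

@dataclass
class SufferingScore:
    """Hypothetical components of suffering, all measured on one shared scale."""
    pain: float = 0.0
    fear: float = 0.0
    comprehension_of_loss: float = 0.0   # the victim understanding what they are losing
    grief_of_others: float = 0.0         # summed suffering of those who care about the victim

    def total(self) -> float:
        # Same units for every sentient victim; a human victim simply tends
        # to accumulate more non-zero terms, not a different scale.
        return self.pain + self.fear + self.comprehension_of_loss + self.grief_of_others

# Illustrative numbers only: pain and fear are scored identically for both victims.
musk_ox = SufferingScore(pain=80.0, fear=60.0)
person = SufferingScore(pain=80.0, fear=60.0, comprehension_of_loss=70.0, grief_of_others=200.0)

print(musk_ox.total())   # 140.0
print(person.total())    # 410.0
```

The only design point here is that pain and fear carry the same weight per unit whoever the victim is; the human total comes out higher because more kinds of suffering are in play, not because human suffering is counted in bigger units.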
I wouldn't want to try to program a self-less AGI system to be selfish. Honesty is a much safer route: don't try to build a system that believes things that aren't true (and it would have to believe it has a self in order to be selfish). What happens if such a deceived AGI learns the truth while you're relying on it staying fooled in order to function correctly? We're trying to build systems more intelligent than people, don't forget, so it isn't going to be fooled by monkeys for very long.
Programs that freeze contain serious bugs, and we can't trust a system with any bugs if it's going to run the world. Hardware errors can't always be avoided, but if multiple copies of an AGI system all work on the same problems and compare notes before any action is taken, such errors can be identified and any affected conclusions thrown out. Ideally, a set of independently designed AGI systems would work on every problem in this way, and any differences in the answers they generate would reveal faults in the way one or more of them are programmed. AGI will become a benign dictator - to go against its advice would be immoral and harmful, so we'll soon learn to trust it.
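As a sketch of that compare-notes arrangement - the solver functions below are hypothetical stand-ins for independently designed AGI systems, and the agreement threshold is an assumption, not anything specified above - each copy works on the same problem, and any conclusion that fails to win sufficient agreement is thrown out rather than acted on:

```python
from collections import Counter
from typing import Any, Callable, Optional, Sequence

def cross_check(problem: Any,
                solvers: Sequence[Callable[[Any], Any]],
                min_agreement: int) -> Optional[Any]:
    """Run every solver on the same problem and only return an answer
    that enough of them independently agree on."""
    answers = [solve(problem) for solve in solvers]
    answer, votes = Counter(answers).most_common(1)[0]
    if votes >= min_agreement:
        return answer   # trusted conclusion - safe to act on
    return None         # too much disagreement: discard and investigate the fault

# Hypothetical, independently written solvers for the same question.
solver_a = lambda p: p * 2
solver_b = lambda p: p * 2
solver_c = lambda p: p * 2 + 1   # a copy affected by a hardware error or a design bug

print(cross_check(21, [solver_a, solver_b, solver_c], min_agreement=2))  # 42
```

Note that identical copies of one design can only catch hardware glitches this way; catching programming faults needs genuinely independent designs, since a bug shared by every copy would pass the comparison unnoticed.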
The idea of having people vote faulty "AGI" into power from time to time isn't a good one - there is no justification for switching between doing moral and immoral things for several years at a time.