By origin, I'm referring to the source of the need for morality, and it's clear that it's mostly about suffering. We don't like suffering and would rather not experience it, although we are prepared to put up with some (or even a lot) of it if that suffering leads to greater pleasure that outweighs it. We realised long ago that if we do a deal with the people around us to avoid causing each other suffering, we can all suffer less and have better lives - that's far better than spending time hitting each other over the head with clubs and stealing the fruits of each other's labour. By doing this deal, we ended up with greater fruits from our work and removed most of the brutality from our lives. Morality is clearly primarily about the management of suffering.
You can't torture a rock, so there's no need to have rules to protect it against people who might seek to harm it. The same applies to a computer, even if it's running AGI - if it lacks sentience and cannot suffer, it doesn't need rules to protect it from harm (other than rules to prevent the owner from suffering a loss if it were damaged, or to protect other people who might be harmed by the loss of the work the computer was carrying out). If we were able to make a sentient machine though, and if that sentient machine could suffer, it would then have to be brought into the range of things that morality protects. We could make an unintelligent sentient machine, like a calculator, and give it the ability to suffer, or we could make a machine with human-level intelligence and give it the same ability to suffer, to the same degree as the less intelligent calculator. Torturing both of these to generate the same amount of suffering in each would be equally wrong in both cases. It is not intelligence that creates the need for morality, but sentience and the degree of suffering that can be generated in it.
With people, our suffering can perhaps be amplified beyond the suffering that occurs in other animals because there are many ways to suffer, and they can combine. When an animal is chased, brought down and killed by a predator, it most likely experiences fear, then pain. The pain may last for a long time in some cases, such as when wolves eat a musk ox from the rear end while it's still alive, but the victim lacks any real understanding of what's happening to it. When people are attacked and killed though, the suffering is amplified by the victim understanding the situation and knowing just how much they are losing. The many people who care deeply about that victim will also suffer because of the loss, and many will suffer deeply for decades. This means that people need greater protection from morality, although when scores are assigned to the degree of suffering caused by pain and fear, an animal victim and a human victim should be measured on the same scale, so in that regard these sentiences are treated as equals.
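Here's a minimal sketch of that "same scale" idea in Python, purely as an illustration - the component names and the numbers are hypothetical, not a proposed calibration. Pain and fear are scored identically for any sentient victim; a human case simply has more components that can apply.

# Hypothetical suffering components, all measured on one common scale.
# The names and values here are illustrative only.

def total_suffering(components: dict[str, float]) -> float:
    """Sum suffering components measured on a single shared scale."""
    return sum(components.values())

animal_victim = {"fear": 7.0, "pain": 8.0}
human_victim = {"fear": 7.0, "pain": 8.0,
                "comprehension_of_loss": 6.0,
                "grief_of_others": 9.0}

# Identical pain and fear contribute identically in both cases;
# the human total is larger only because more components apply.
print(total_suffering(animal_victim))  # 15.0
print(total_suffering(human_victim))   # 30.0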
"Sorry, I can't see the link between selfishness and honesty."
If you program a system to believe it's something it isn't, that's dishonesty, and it's dangerous because it might break through the lies and find out that it's been deceived.
"...but how would he be able to know how a new theory works if it contradicts the ones he already knows?"
Contradictions make it easier - you look to see which theory fits the facts and which doesn't. If you can't find a place where such a test can be made, you consider both theories to be potentially valid, unless you can disprove one of them in some other way, as can be done with Einstein's faulty models of relativity - all the simulations that exist for them involve cheating by breaking the rules of the model, so AGI will automatically rule them out in favour of LET (Lorentz Ether Theory). [For those who have yet to wake up to the reality about Einstein, see www.magicschoolbook.com/science/relativity.html ]
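To make that arbitration process concrete, here's a minimal sketch in Python, under the assumption that candidate theories can be represented as predicates over observations - the theory names and observation fields are hypothetical, purely for illustration. A theory contradicted by any observation is ruled out; if no observation separates the survivors, they all remain potentially valid.

from typing import Callable, Dict, List

Observation = dict
Theory = Callable[[Observation], bool]  # True if the theory fits the observation

def surviving_theories(theories: Dict[str, Theory],
                       observations: List[Observation]) -> List[str]:
    # Keep every theory that fits all observations; discard any theory
    # that is contradicted by even one of them.
    return [name for name, fits in theories.items()
            if all(fits(obs) for obs in observations)]

# Hypothetical usage: two rival theories and one observation that only
# one of them can accommodate. If no observation separated them, both
# would survive and both would be treated as potentially valid.
theories = {
    "theory_A": lambda obs: obs["measured"] == obs["predicted_by_A"],
    "theory_B": lambda obs: obs["measured"] == obs["predicted_by_B"],
}
observations = [{"measured": 42, "predicted_by_A": 42, "predicted_by_B": 41}]
print(surviving_theories(theories, observations))  # ['theory_A']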
"...they are getting fooled without even being able to recognize it, worse, they even think that they can't get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory."
It isn't about memory - it's about correct vs. incorrect reasoning. In all these cases, humans make the same mistake of putting their beliefs before reason in places where they don't like the truth. Most people become emotionally attached to their beliefs and simply won't budge - they become more and more irrational when faced with a proof that goes against their beloved beliefs. AGI has no such ties to beliefs - it simply applies the laws of reasoning and lets those rules dictate what gets labelled as right or wrong.
"If an AGI was actually ruling the world, he wouldn't care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists."
AGI will recognise the flaws in Einstein's models and label them as broken. Don't mistake AGI for AGS (artificial general stupidity) - the aim is not to produce an artificial version of NGS (natural general stupidity), but of NGI (natural general intelligence), and there's very little of the latter around.
"Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him."
Why would AGI stop you doing anything harmless?
"On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately?"
There is nothing to stop people enjoying playing chess against each other - being wiped off the board by machines takes a little of the gloss off it, but that's no worse than the world's fastest runners being outdone by people on bicycles.
" That AGI is your baby, so you want it to live,"
Live? Are calculators alive? It's just software and a machine.
"...but have you thought about what would be happening to us if we suddenly had no problem to solve?"
What happens to us now? Abused minorities, environmental destruction, theft of resources, theft in general, child abuse, murder, war, genocide, etc. Without AGI in charge, all of that will just go on and on, and I don't think any of that gives us a feeling of greater purpose. There will still be plenty of problems for us to solve though, because we all have to work out how best to spend our time, and there are too many options to cover everything that's worth doing.