The sane answer is that it solves a cooperation problem.
Reciprocal altruism sometimes sends a relatively weak signal - it says that you will cooperate only so long as the "shadow of the future" looms large enough.
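One standard way to make that precise: in the repeated prisoner's dilemma with the usual payoffs T > R > P > S and a discount factor \delta weighting future rounds (this is the textbook grim-trigger condition, with grim trigger as the assumed strategy), cooperation holds only when cooperating forever beats defecting once and being punished ever after:

\[
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}.
\]

Below that threshold, the reciprocal altruist is expected to defect - which is exactly the weakness of the signal.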
Invoking "good" and "evil" signals more that you believe in moral absolutes: the forces of good and evil.
On the one hand, that is a stronger signalling technique - it attempts to signal that you won't defect, no matter what!
On the other hand, it makes you look a bit crazy - as though you don't understand rationality or game theory - and this can make your behaviour harder to model.
As with most signalling, the signal needs to be costly to be credible. Alas, practically anyone can rattle on about good and evil, so I am not convinced it is very effective overall.
Taken from some old comments of mine that never did get a satisfactory answer.
1) One of the justifications for CEV was that extrapolating from an American in the 21st century and from Archimedes of Syracuse should give similar results. This seems to assume that change in human values over time is mostly "progress" rather than drift. Do we have any evidence for that, except saying that our modern values are "good" according to themselves, so whatever historical process led to them must have been "progress"?
2) How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition? If Eliezer wants the AI to look at humanity and infer its best wishes for the future, why can't he task it with looking at himself and inferring his own best idea of how to fulfill humanity's wishes? Why must this particular thing be spelled out in a document like CEV and not left to the mysterious magic of "intelligence", and what other such things are there?