If Archimedes (or a random person) could live for a thousand years (together with the rest of humanity), could think a billion times faster, could learn all that the FAI knows, etc., etc., then they'd very likely arrive at the same answer as a modern person.
First, this seems implausible (people really do have different desires and personalities), and second, if plausible, the starting location doesn't seem to matter. If you take Archimedes and a brain-damaged murderer and a chimpanzee and end up with similar outputs, what did you need the inputs for? They don't appear to have made much difference. If the answer is that you'll modify the chimp and the murderer to become like Archimedes, then why don't you just copy Archimedes directly instead of doing it by scrubbing out those individuals and then planting Archimedes inside?
Until CEV has a plan for dealing with envy, it strikes me as underplanned. Real humans interfere with each other; that's part of what makes life human.
The CEV of chimpanzees would not be the same as the CEV of humans.
Taken from some old comments of mine that never did get a satisfactory answer.
1) One of the justifications for CEV was that extrapolating from an American in the 21st century and from Archimedes of Syracuse should give similar results. This seems to assume that change in human values over time is mostly "progress" rather than drift. Do we have any evidence for that, except saying that our modern values are "good" according to themselves, so whatever historical process led to them must have been "progress"?
2) How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition? If Eliezer wants the AI to look at humanity and infer its best wishes for the future, why can't he task it with looking at himself and inferring his best idea of how to fulfill humanity's wishes? Why must this particular thing be spelled out in a document like CEV rather than left to the mysterious magic of "intelligence", and what other such things are there?