I agree that trying to map all human values is extremely complex, as articulated here [http://wiki.lesswrong.com/wiki/Complexity_of_value], but the problem as I see it is that we do not really have a choice - there has to be some way of measuring the initial AGI to see how it is handling these concepts.
I don't understand why we don't try to prototype a high-level ontology of core values for an AGI to adhere to - something that humans can discuss and argue about for many years before we actually build an AGI.
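For concreteness, here is a minimal sketch (in Python) of what such a prototype ontology could look like as a data structure that people can inspect and argue over. Every value name, description and priority below is a hypothetical placeholder chosen for illustration, not a proposal for the actual content.

```python
# A minimal sketch of a "prototype ontology of core values" as a plain data
# structure. All names, descriptions and priorities are hypothetical
# placeholders meant for human discussion, not a real specification.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class ValueNode:
    """One node in a hierarchical ontology of core values."""
    name: str
    description: str
    # Relative priority when values conflict (higher wins); purely illustrative.
    priority: int = 0
    children: list[ValueNode] = field(default_factory=list)


# Top-level draft that people could argue about and refine for years
# before any AGI is actually built.
CORE_VALUES = ValueNode(
    name="human_wellbeing",
    description="Protect and promote the wellbeing of humans.",
    priority=100,
    children=[
        ValueNode("physical_safety", "Do not cause or permit physical harm.", 90),
        ValueNode("autonomy", "Respect informed human choices.", 80),
        ValueNode("honesty", "Do not deceive or manipulate.", 70),
    ],
)


def walk(node: ValueNode, depth: int = 0) -> None:
    """Print the ontology so it can be reviewed and debated."""
    print("  " * depth + f"{node.name} (priority {node.priority}): {node.description}")
    for child in node.children:
        walk(child, depth + 1)


if __name__ == "__main__":
    walk(CORE_VALUES)
```

The point of writing it down at all is that a concrete artifact, however crude, gives people something specific to disagree with, which is where the years of discussion would actually happen.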
Law is a useful example that shows human values cannot be absolutely quantified into a universal system. The law is constantly abused, misused and corrected, so if a similar system were put in place for an AGI it could quickly lead to UFAI.
One of the interesting things about the law is that for core concepts like murder, the rules are well defined and fairly unambiguous, whereas more trivial matters (in terms of risk to humans), like tax law and parking regulations, are where most of the complexity lies.
The value-loading problem is the problem of getting an AI to value certain things - that is, of writing its utility function. One way to solve this is to try to hard-code something into the function, like "paperclips good!". This is direct specification: writing a function that directly values certain things. But when we want to make an AI value something like "doing the right thing", this becomes infeasible.
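To make the contrast concrete, here is a toy sketch of direct specification. The world states, actions and the paperclip_utility function are invented for illustration and are not from EY's discussion; the point is only that "paperclips good!" is trivial to hard-code while "do the right thing" is not.

```python
# Toy direct specification: the utility function is written by hand and the
# agent simply maximizes it over predicted outcomes of its actions.

def paperclip_utility(world_state: dict) -> float:
    # Hard-coded value: more paperclips is always better; nothing else matters.
    return float(world_state.get("paperclips", 0))


def choose_action(predicted_outcomes: dict, utility) -> str:
    # Pick whichever action leads to the predicted state with highest utility.
    return max(predicted_outcomes, key=lambda a: utility(predicted_outcomes[a]))


# Hypothetical predicted outcomes of two candidate actions.
predicted_outcomes = {
    "run_paperclip_factory": {"paperclips": 10_000, "humans_happy": 0},
    "help_humans": {"paperclips": 3, "humans_happy": 100},
}

print(choose_action(predicted_outcomes, paperclip_utility))
# -> run_paperclip_factory: the hand-written function captures none of the
#    values we actually care about, which is why directly specifying
#    something like "doing the right thing" is so hard.
```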
Instead of hard-coding a function like that, you could solve the problem by having the AI figure out what you want by itself. The idea is that the AI can work out the aggregate of human morality and act accordingly after simply being told to "do what I mean" or something similar. While this might require more cognitive work by the AI, it is almost certainly safer than trying to formalize morality ourselves. In theory, this way of solving the problem avoids an AI that suddenly breaks down on some border case, for example a smile maximizer filling the galaxy with tiny smileys instead of happy humans having fun.
This is all a loose paraphrase of the last liveblogging event EY had in the FB group, where he discussed open problems in FAI.
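To caricature the indirect approach in the same toy setting: instead of maximizing one hard-coded proxy (say, counting smiles), the agent keeps several candidate interpretations of what we meant and only favours actions that look good under all of them. The candidate value functions, credences and outcomes below are made up for illustration and are not EY's actual proposal.

```python
# Toy "do what I mean" sketch: the agent is uncertain about which reading of
# human values is intended and scores actions cautiously across all of them.

# Candidate interpretations of "make humans happy", with current credences.
candidate_values = [
    (0.6, lambda s: s["genuine_wellbeing"]),  # what we probably meant
    (0.4, lambda s: s["smiles_observed"]),    # the literal proxy reading
]

# Hypothetical predicted outcomes of two candidate actions.
predicted_outcomes = {
    "tile_galaxy_with_smileys": {"smiles_observed": 1e9, "genuine_wellbeing": 0},
    "help_humans_flourish":     {"smiles_observed": 1e3, "genuine_wellbeing": 1e3},
}


def cautious_score(state: dict) -> float:
    # Worst case over interpretations: an action only looks good if every
    # plausible reading of human values endorses it. This is what blocks the
    # border case where the literal proxy diverges from what we meant.
    return min(weight * value(state) for weight, value in candidate_values)


best = max(predicted_outcomes, key=lambda a: cautious_score(predicted_outcomes[a]))
print(best)  # -> help_humans_flourish, not the smiley-tiling failure mode
```

The specific aggregation rule here (worst case over interpretations) is just one illustrative choice; the relevant point is that keeping uncertainty about what the human meant is what keeps the smile-maximizer border case from dominating.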