If the UGC is true, then some classes of constraint satisfaction problems are hard to approximate in the worst case. There's:
no reason to believe that "satisfying human values" in general falls into one of these classes;
no reason to expect that our specific instance of human values would be worst-case (this in particular is unlikely, since nobody is choosing human values adversarially with this problem in mind);
no particularly compelling support for the UGC being true.
In addition, given that you haven't made any concrete appeal to complexity theory, it seems unfair to rely on the mere mention of mathematics to lend support to your argument.
If the UGC is true, then some classes of constraint satisfaction problems are hard to approximate in the worst case.
What do you mean by classes? I'm pretty sure the UGC applies to all sorts of constraint satisfaction problems, not just certain kinds of them.
no reason to expect that our specific instance of human values would be worst-case (this in particular is unlikely, since nobody is choosing human values adversarially with this problem in mind);
I agree. But what are the odds that it's the best case, either? I'm not saying that we're doomed and have no chance of evaluating things. I'm saying that this is a potential difficulty I hope someone looks into further.
I am not a computer scientist and do not know much about complexity theory. However, it's a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson's blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can.
One phrase in the article stuck out to me: "there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints". I think this sentence is concerning for those interested in the possibility of creating FAI.
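For concreteness, here is a minimal sketch of the kind of constraint network the article describes: a toy "unique game," where each constraint links two variables by a permutation of the k colors, and we measure what fraction of constraints a given coloring satisfies. The names and construction here are my own, purely for illustration.

```python
import random

def make_unique_game(n_vars, n_constraints, k, rng):
    """Build a random toy 'unique game': each constraint (u, v, perm)
    is satisfied iff coloring[v] == perm[coloring[u]], where perm is a
    bijection on the k colors. (All names here are hypothetical.)"""
    constraints = []
    for _ in range(n_constraints):
        u, v = rng.sample(range(n_vars), 2)
        perm = list(range(k))
        rng.shuffle(perm)  # a random bijection on the colors
        constraints.append((u, v, perm))
    return constraints

def satisfied_fraction(constraints, coloring):
    """Fraction of constraints a given coloring satisfies."""
    ok = sum(1 for u, v, perm in constraints if coloring[v] == perm[coloring[u]])
    return ok / len(constraints)

rng = random.Random(0)
k = 3
game = make_unique_game(n_vars=50, n_constraints=200, k=k, rng=rng)
coloring = [rng.randrange(k) for _ in range(50)]
frac = satisfied_fraction(game, coloring)
print(frac)  # a uniformly random coloring satisfies roughly 1/k in expectation
```

Checking whether a coloring satisfies a given fraction is easy; the conjectured hardness is in telling almost-fully-satisfiable networks apart from almost-unsatisfiable ones.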
It is impossible to satisfy human values perfectly: matter and energy are limited, and so, therefore, will be the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a problem structurally similar to the UGC's coloring problem: a vast network of constraints, only some fraction of which can be satisfied at once. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it's safe to say we're dealing with a particularly nasty variation of the problem, worse than the clean instances computer scientists studying it have dealt with.
Not all specific instances of complex optimization problems are subject to the UGC and thus NP-hard, of course, so this does not in itself mean that building an FAI is impossible. Also, even if maximizing human values is NP-hard (or maximizing the probability of maximizing human values, or the probability of that, and so on), we can still assess a machine's code and actions heuristically. However, even the best heuristics are limited, as the UGC itself demonstrates. At bottom, all heuristics must rely on inflexible assumptions of some sort.
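To make the heuristics point concrete, here is a self-contained toy sketch (my own construction, not an algorithm from the literature) of a greedy local-search heuristic on a small unique-games-style instance. Each recoloring step is locally optimal, so the satisfied fraction never decreases, but the search can stall at a local optimum with no guarantee of approaching the true maximum.

```python
import random

def make_unique_game(n_vars, n_constraints, k, rng):
    # Toy instance: constraint (u, v, perm) is satisfied iff
    # coloring[v] == perm[coloring[u]], with perm a bijection on k colors.
    constraints = []
    for _ in range(n_constraints):
        u, v = rng.sample(range(n_vars), 2)
        perm = list(range(k))
        rng.shuffle(perm)
        constraints.append((u, v, perm))
    return constraints

def satisfied_fraction(constraints, coloring):
    return sum(coloring[v] == perm[coloring[u]]
               for u, v, perm in constraints) / len(constraints)

def local_search(constraints, coloring, k, passes=5):
    """Greedy heuristic: repeatedly recolor each variable with whichever
    color satisfies the most of its incident constraints. Monotone but
    can get stuck at a local optimum."""
    coloring = list(coloring)
    incident = {}
    for c in constraints:
        u, v, _ = c
        incident.setdefault(u, []).append(c)
        incident.setdefault(v, []).append(c)
    for _ in range(passes):
        for x, cons in incident.items():
            def score(col):
                s = 0
                for u, v, perm in cons:
                    cu = col if u == x else coloring[u]
                    cv = col if v == x else coloring[v]
                    s += (cv == perm[cu])
                return s
            coloring[x] = max(range(k), key=score)
    return coloring

rng = random.Random(1)
k = 3
game = make_unique_game(n_vars=40, n_constraints=150, k=k, rng=rng)
start = [rng.randrange(k) for _ in range(40)]
improved = local_search(game, start, k)
print(satisfied_fraction(game, start), satisfied_fraction(game, improved))
```

The improvement is real but bounded: the heuristic's "inflexible assumption" is that single-variable moves suffice, and instances exist where no sequence of such moves escapes a poor local optimum.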