Eliezer says that when he uses words like "moral", "right", and "should", he's referring to properties of a specific computation. That computation is essentially an idealized version of himself (e.g., with additional resources and safeguards).
When we use words like "right", we don't refer to "right" itself; we deploy particular heuristics that allow us to make right decisions. These heuristics are indeed properties of us, and improved heuristics are properties of idealized versions of us. But "right" itself is not a property of humans, nor the kind of thing we can build or imagine. For some reason it is a different kind of referent from our usual concepts: we don't refer to it directly, we only access it through particular heuristics.
Consider an analogy with mathematical truth. When we use "truth", do we refer to the particular heuristics that allow us to determine which mathematical statements are true, or do we refer to something that exists largely independently of human beings, even though we're not sure exactly what it is? The latter makes more sense to me.
I do not mean to imply that "right" must likewise refer to something that exists independently of human beings. But the analogy shows that "we can't seem to figure out what 'right' refers to, yet these heuristics clearly have something to do with it" is not enough to conclude that the heuristics are what we mean by "right".
I think I've found a better argument that Eliezer's meta-ethics is wrong. The advantage of this argument is that it doesn't depend on the specifics of Eliezer's notions of extrapolation or coherence.
Recall that, on Eliezer's view, words like "moral", "right", and "should" refer to properties of a specific computation: essentially an idealized version of himself, with additional resources and safeguards. We can ask: does Idealized Eliezer (IE) himself make use of words like "moral", "right", and "should"? If so, what does IE mean by them? Does he mean the same things as Base Eliezer (BE)? None of the possible answers is satisfactory, which implies that Eliezer is probably wrong about what he means by those words.
1. IE does not make use of those words. But this is intuitively implausible.
2. IE makes use of those words and means the same things as BE. But this introduces a vicious circle: if IE tries to determine whether "Eliezer should save person X" is true, he will find that it is true iff he judges it true, which leads to Löb-style problems of self-reference.
3. IE's meanings for those words differ from BE's. But knowing that, BE ought to conclude that his meta-ethics is wrong: morality doesn't mean what he thinks it means.
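The vicious circle in option 2 can be made concrete with a toy computation. This is only an illustrative sketch, not anything from Eliezer's writing: the function name `idealized_judgment` and the depth cutoff are my own hypothetical devices. The point is that if IE's meaning of "should" is defined as whatever IE's idealized judgment outputs, then evaluating any moral claim immediately calls that same evaluation again, and nothing ever grounds out.

```python
def idealized_judgment(claim, depth=0, max_depth=5):
    """Toy model of option 2: the truth of a moral claim is defined as
    whatever the idealized computation judges it to be. But that judgment
    is this very computation, so the definition refers back to itself."""
    if depth > max_depth:
        # Cut the regress off explicitly rather than blowing the stack:
        # without external grounding, evaluation never terminates.
        raise RecursionError(
            "'should' is defined in terms of this very evaluation")
    # "X is right" is true iff the idealized computation says so...
    # ...which requires running the idealized computation again.
    return idealized_judgment(claim, depth + 1, max_depth)

try:
    idealized_judgment("Eliezer should save person X")
except RecursionError as e:
    print("circularity detected:", e)
```

The `max_depth` guard stands in for the observation that no finite amount of reflection settles the question; a real formal treatment would run into the Löbian obstacles mentioned above rather than a simple stack overflow.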