Well... according to the SEP, metaethics encompasses the attempt to understand the presuppositions and commitments of moral practice.
If I'm trying to engineer a system that behaves morally (which is what FAI design is, right?), it makes some sense that I'd want to understand that stuff, just as, if I were engineering a system that excavates tunnels, I'd want to understand the presuppositions and commitments of tunnel excavation.
That said, from what I've seen, it's not clear to me that the work actually done in this area (e.g., in the Metaethics Sequence) serves any purpose other than a rhetorical one.
I think that framing the issue of AI safety in terms of "morality" or "friendliness" is a form of misleading anthropomorphization. Morality and friendliness are specific traits of human psychology that won't necessarily generalize well to artificial agents (even attempts to generalize them to non-human animals are often far-fetched).
I think AI safety would probably be best dealt with within the framework of safety engineering.
There seems to be a widespread impression that the Metaethics Sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. And frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the Metaethics Sequence, it seemed to make perfect sense to me. I can think of a couple of things that may have made me different from the average OB/LW reader in this regard: