I would love to see Luke (the other Luke, but maybe you, too) and hopefully others (like Yvain) explicate their views on meta-ethics, given how Eliezer's sequence is at best unclear (though quite illuminating). This seems essential because a clear meta-ethics appears necessary to achieve MIRI's stated purpose: averting AGI x-risk by developing FAI.
There seems to be a widespread impression that the metaethics sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. Frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the metaethics sequence, it seemed to make perfect sense to me. I can think of a couple of things that may have made me different from the average OB/LW reader in this regard: