One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism. Utilitarianism says to maximize aggregate utility, with "aggregate" defined in some suitable way. EY's metaethics says to maximize your own utility (with the caveat that you have only partial knowledge of your own utility function), and that all humans have sufficiently similar utility functions.
Utilitarianism isn't a metaethic in the first place; it's a family of ethical systems. Metaethical systems and ethical systems aren't comparable objects. "Maximize your utility function" says nothing, for the reasons given by benelliott, and isn't a metaethical claim (nor a correct summary of EY's metaethic); metaethics deals with questions like:
What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?
EY's metaethic approaches those questions as an unpacking of "should" and other moral symbols. While it does give examples of some of the major object-level values we'd expect to find in ethical systems, it doesn't generate a brand of utilitarianism or a specific utility function.
(And "utility" in the sense of what an agent with a von Neumann–Morgenstern (VNM) utility function maximizes in expectation, and "utility" in the sense of what a utilitarian tries to maximize in aggregate over some set of beings, aren't comparable objects either; they should be kept cognitively separate.)
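A toy sketch (my own illustration, not from the thread) of why the two senses of "utility" come apart: a VNM utility function is only defined up to positive affine transformation, so rescaling an agent's utilities never changes which lottery that agent prefers, yet it can flip which option maximizes the utilitarian *sum* across agents. All names and numbers below are made up for the example.

```python
def expected_utility(utility, lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Two hypothetical agents with opposed preferences over two outcomes.
alice = {"A": 1.0, "B": 0.0}
bob = {"A": 0.0, "B": 1.0}

lottery1 = {"A": 0.9, "B": 0.1}
lottery2 = {"A": 0.1, "B": 0.9}

# A positive affine rescaling of Alice's utilities (u' = 100*u + 7).
# VNM-wise this is the *same* preference ordering: her choices don't change.
alice_rescaled = {o: 100 * u + 7 for o, u in alice.items()}
assert expected_utility(alice, lottery1) > expected_utility(alice, lottery2)
assert (expected_utility(alice_rescaled, lottery1)
        > expected_utility(alice_rescaled, lottery2))

def total(lottery, utility_functions):
    """Utilitarian-style aggregate: sum of expected utilities across agents."""
    return sum(expected_utility(u, lottery) for u in utility_functions)

# With the original scales, the two lotteries tie in aggregate...
assert abs(total(lottery1, [alice, bob]) - total(lottery2, [alice, bob])) < 1e-9
# ...but after rescaling Alice (which changed none of her preferences),
# lottery1 dominates the aggregate. The "sum of VNM utilities" is not
# well-defined without extra interpersonal-comparison assumptions.
assert total(lottery1, [alice_rescaled, bob]) > total(lottery2, [alice_rescaled, bob])
```

The point of the sketch: nothing about an individual agent's behavior pins down the scale needed to add utilities across agents, which is exactly why VNM utility and utilitarian utility shouldn't be conflated.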
Utilitarianism isn't a metaethic in the first place; it's a family of ethical systems.
Good point. Here's the intuition behind my comment. Classical utilitarianism starts with "maximize aggregate utility" and jumps off from there (Mill calls it obvious, then gives a proof that he admits is flawed). This opens utilitarians up to a slew of standard criticisms (e.g. utility monsters). I'm not very well versed in more modern versions of utilitarianism, but the impression I get is that they do something similar. But, as you point out, all the utilitarian...
There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or have thought about this matter much before (i.e., it's not intended to solve every single problem or be up to the usual standards of discussion on LW).
I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.