You've all heard discussions of collective ethics vs. individualistic ethics. These discussions always assume that the organism in question remains constant. Your task is to choose the proper weight to give collective versus individual goals.
But the designer of transhumans has a different starting point. They have to decide how much random variation the population will have, and how much individuals will resemble those that they interact with.
Organisms with less genetic diversity place more emphasis on collective ethics. The amount of selflessness a person exhibits towards another person can be estimated from their genetic similarity. To a first approximation, if person A shares half of their genes with the people in group B, person A will regard saving their own life, versus saving two people from group B, as an even tradeoff. This generalizes across organisms: whenever you find insects like ants or bees, which are extremely altruistic, you will find that they share most of their genes with the group they are behaving altruistically towards. Bacterial colonies and other clonal colonies can be expected to be even more altruistic (although they don't have as wide a behavioral repertoire with which to demonstrate their virtue). Google kin selection.
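To make the arithmetic concrete: kin selection is usually summarized by Hamilton's rule, which says an altruistic act is favored when r * B > C, with r the genetic relatedness, B the benefit to the recipients, and C the cost to the actor. Here is a minimal sketch of that rule; the scenarios and numbers are only illustrative.

```python
# Minimal sketch of Hamilton's rule: an altruistic act is favored when
# r * B > C, where r = relatedness, B = benefit to recipients, and
# C = cost to the actor. The scenarios below are illustrative, not data.

def altruism_favored(relatedness, benefit, cost):
    """True if Hamilton's rule says the sacrifice pays off in gene terms."""
    return relatedness * benefit > cost

# Person A gives up their life (cost = 1) to save two people who each
# share half of A's genes (r = 0.5, total benefit = 2 lives):
print(altruism_favored(0.5, benefit=2, cost=1))   # False: exactly break-even
# Haplodiploid full sisters (worker ants, bees) are related by about 0.75,
# so the same sacrifice is clearly favored:
print(altruism_favored(0.75, benefit=2, cost=1))  # True
```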
Ants, honeybees, and slime molds, which share more of their genes with their nestmates than humans do with their families, achieve levels of cooperation that humans would consider horrific if it were required of them. Consider these aphids that explode themselves to provide glue to fill in holes in their community's protective gall.
The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy. The designer of posthumans (for instance, an AI designing its subroutines for a task), OTOH, actually has a decision to make -- where should that balance be set? How much variation should there be in the population (whether of genes, memes, or whatever is most important WRT cooperation)?
A strictly goal-oriented AI would supervise its components and resources so as to optimize the trade-off between "exploration" and "exploitation". (Exploration means trying new approaches; exploitation means re-using approaches that have worked well in the past.) This means it would set the level of random variation in the population at whatever value maximizes the expected speed of optimization.
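As a rough sketch (a toy only; the loop and the names like `evolve` and `mutation_rate` are just illustrative, not anything specified above), here is how the level of random variation becomes a single knob in a simple evolutionary search. Turn the mutation rate up and you get more exploration and a more diverse population; turn it down and every offspring closely resembles the current best solution.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, mutation_rate=0.05,
           generations=200, seed=0):
    """Toy evolutionary loop in which mutation_rate is the 'variation' knob.

    Higher mutation_rate means more exploration (offspring vary more from
    the current best); lower means more exploitation (offspring closely
    resemble it).
    """
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Each offspring is a copy of the current best with random bit flips.
        offspring = [
            [bit ^ (rng.random() < mutation_rate) for bit in best]
            for _ in range(pop_size)
        ]
        candidate = max(offspring, key=fitness)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

# Stand-in goal: maximize the number of 1s in the genome (the ONEMAX toy problem).
ones = sum  # fitness of a genome = how many of its bits are 1
for rate in (0.005, 0.05, 0.3):
    print(f"mutation_rate={rate}: best fitness = {ones(evolve(ones, mutation_rate=rate))}")
```

A purely goal-driven designer would pick the mutation rate by running exactly this kind of comparison and keeping whichever value converges fastest.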
But choosing the level of variation in a population has dramatic ethical consequences. Creating a more homogeneous population will increase altruism, at the expense of decreasing individualism. Choosing the amount of variation in the population strictly by maximizing the speed of optimization would mean rolling the dice as to how much altruism vs. individualism your society will have.
So you have a goal to achieve, and a parameter setting that will best achieve it; you also have a fuzzy ethical issue that has something to say about how to set that same parameter. Anyone who is not a moral realist must say, damn the torpedoes: set the parameter so as to optimize goal-solving. In other words, simply define the correct moral weight to place on collective versus individual goals as whatever results when you set your population's genetic/memetic diversity so as to optimize its exploration/exploitation balance for its goals.
Are you comfortable with that?
As near as I can tell from your link, this Naturalistic Fallacy means disagreeing with G. E. Moore's position that "good" cannot be defined in natural terms. It seems to be a powerful debating trick to convince people that disagreeing with you is a fallacy.
Further, Phil's statement does not even define "good"; it describes how people define "good". It is not a fallacy to describe a behavior that commits a fallacy.
I wonder, would you have realized these issues yourself, if you had tried to explain how the fallacy applies to the statement? Or would it have helped me to realize that you meant something else, and there is indeed a problem here?
Apologies, I should have made it clearer that I was referring to the naturalistic fallacy in its casual sense, which denies the validity of drawing moral conclusions directly from natural facts. (I assumed that this usage was common enough that I didn't need to spell it out; that assumption was clearly false.)
Pace your second paragraph, it seemed to me that Phil was trying to do exactly this, and others seem to have read the post that way too. But I admit that my phrasing was vague partly because I was (and still am) having trouble figuring out exactly what Phil is trying to say.