You've all heard discussions of collective ethics vs. individual ethics. These discussions always assume that the organism in question remains constant: your task is to choose the proper weight to give collective versus individual goals.
But the designer of transhumans has a different starting point. They have to decide how much random variation the population will have, and how much individuals will resemble those they interact with.
Organisms with less genetic diversity place more emphasis on collective ethics. The amount of selflessness a person exhibits towards another person can be estimated from their genetic similarity. To a first approximation, if person A shares half of their genes (by recent common descent) with people in group B, person A will regard saving their own life, versus saving two people from group B, as an even tradeoff. This generalizes across organisms: whenever you find insects such as ants or bees, which are extremely altruistic, you will find that they share most of their genes with the group they are behaving altruistically towards. Bacterial colonies and other clonal colonies can be expected to be even more altruistic (although they don't have as wide a behavioral repertoire with which to demonstrate their virtue). Google kin selection.
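The arithmetic behind this is Hamilton's rule: altruism is selected for when r·B > C, where r is relatedness, B is the benefit to the recipient, and C is the cost to the actor. Here's a minimal sketch in Python using the relatedness and payoff values from the example above (the numbers are illustrative, not measured):

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: an altruistic act is selected for
    when r * B > C (relatedness times benefit to the
    recipient exceeds cost to the actor)."""
    return relatedness * benefit > cost

# Person A, with relatedness 0.5 to group B, weighs their own
# life (cost = 1) against two lives saved (benefit = 2):
print(altruism_favored(relatedness=0.5, benefit=2, cost=1))    # False -- exactly even, a tossup
# A clonal colony (relatedness ~1.0) sacrifices for far smaller gains:
print(altruism_favored(relatedness=1.0, benefit=1.1, cost=1))  # True
```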
Ants, honeybees, and slime molds, which share more of their genes with their nestmates than humans do with their family, achieve levels of cooperation that humans would consider horrific if required of them. Consider these aphids that explode themselves to provide glue to fill in holes in their community's protective gall.
The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy. The designer of posthumans (for instance, an AI designing its subroutines for a task), OTOH, actually has a decision to make -- where should that balance be set? How much variation should there be in the population (whether of genes, memes, or whatever is most important WRT cooperation)?
A strictly goal-oriented AI would manage its components and resources so as to optimize the trade-off between "exploration" and "exploitation". (Exploration means trying new approaches; exploitation means re-using approaches that have worked well in the past.) This means it would set the level of random variation in the population at whatever value maximizes the expected speed of optimization.
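Here's a toy sketch of what that parameter-setting looks like, assuming a hypothetical bitstring-optimization task (the OneMax problem, the optimize helper, and the candidate rates are all invented for illustration): the designer simply sweeps the mutation rate, which stands in for the level of random variation, and keeps whichever value makes the fastest progress within a fixed budget.

```python
import random

def optimize(mutation_rate, generations=200, pop_size=20, genome_len=32, seed=0):
    """Toy evolutionary hill-climb on a OneMax bitstring problem.
    mutation_rate is the per-bit flip probability -- the amount of
    random variation injected into the population each generation."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Exploration: generate mutated offspring of the incumbent.
        offspring = [
            [1 - bit if rng.random() < mutation_rate else bit for bit in best]
            for _ in range(pop_size)
        ]
        # Exploitation: keep the fittest individual seen so far.
        best = max(offspring + [best], key=sum)
    return sum(best)  # fitness reached within the fixed budget

# A purely goal-driven designer sweeps the parameter and keeps the winner,
# with no regard for what that setting does to the society's ethics:
for rate in (0.001, 0.01, 0.05, 0.2):
    print(rate, optimize(rate))
```

Nothing in that loop represents the ethical consequences of the chosen rate, which is exactly the problem.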
But choosing the level of variation in a population has dramatic ethical consequences. Creating a more homogeneous population will increase altruism, at the expense of decreasing individualism. Choosing the amount of variation in the population strictly by maximizing the speed of optimization would mean rolling the dice as to how much altruism vs. individualism your society will have.
So you have a goal to achieve, and a parameter setting that will optimize achieving it; you also have a fuzzy ethical issue that has something to say about how to set that same parameter. Anyone who is not a moral realist must say, damn the torpedoes: set the parameter so as to optimize goal-solving. In other words, simply define the correct moral weight to place on collective versus individual goals as that which results when you set your population's genetic/memetic diversity to optimize the population's exploration/exploitation balance for its goals.
Are you comfortable with that?
This does help clarify things. Unfortunately, Conchis is right: you're committing the naturalistic fallacy.
I think we can safely put the naturalistic fallacy in the "out-of-date philosophical claptrap" dustbin.