As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally unaddressed. I understand that genetic fitness is relevant to morality because people must survive, but this doesn't seem to imply that morality just is fitness. I would love to see a post that argues morality is inherently and solely about fitness.
This post flies from one topic to another very quickly, and I can't understand all the connections between topics. Why is the human designer of transhuman...
Insects and molds are more like cells than organisms. My T-cells routinely sacrifice themselves for my benefit, and there's nothing horrific about that.
It sounds an awful lot like you're looking to evolution for moral guidance. That's never a good idea.
Another way of looking at the issue:
Imagine an intelligent queen, with a range of different types of sterile workers. Since the queen is smart, she isn't limited to using the same genome in each type of worker - she can just put in the genes that are useful. Does this diversity reduce the level of cooperation between the workers? Not really - the genes of each worker have but one way to immortality: help the queen to reproduce.
In other words, the hypothesis that the level of cooperation depends on the proportion of shared genes is only a convenient rule of thumb, and should not be taken as a golden rule.
I'm still not getting the point even after the rewrite. Aren't collective goals best served by pure collectivists with no regard for their self-interest? (Note that real humans in a communist system are not pure collectivists.) A collectivist can always just copy the individualist strategy when that is expected to best serve the collective goal.
It also looks to me like the post isn't careful enough in distinguishing fitness-maximizing vs. adaptation-executing.
So evolutionarily, the less genetic diversity within a species, the more its members' behavior is oriented toward their group. I'll take it here that that's the case (I don't know enough about the subject matter to judge confidently). But I don't think this embodies an ethics. It's just the way evolution builds things, and just because evolution builds things one way doesn't mean that way is "ethical".
...Is it satisfactory to simply define the correct moral weight to place on collective versus individual goals, as that which results when you set y
Re: The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy. The transhuman, OTOH, actually has a decision to make -- where should that balance be set?
It seems like a rather vague question, since it doesn't specify a scale of measurement or how far into the future we are looking. Look far enough forwards, and there may be only one organism - in which case the issue doesn't arise.
You've all heard discussions of collective ethics vs. individualistic ethics. These discussions always assume that the organism in question remains constant. Your task is to choose the proper weight to give collective versus individual goals.
But the designer of transhumans has a different starting point. They have to decide how much random variation the population will have, and how much individuals will resemble those that they interact with.
Organisms with less genetic diversity place more emphasis on collective ethics. The amount of selflessness a person exhibits towards another person can be estimated according to their genetic similarity. To a first approximation, if person A shares half of their genes with people in group B, person A will regard saving their own life, versus saving two people from group B, as an even tradeoff. In fact, this generalizes across all organisms, and whenever you find insects like ants or bees, who are extremely altruistic, you will find that they share most of their genes with the group they are behaving altruistically towards. Bacterial colonies and other clonal colonies can be expected to be even more altruistic (although they don't have as wide a behavioral repertoire with which to demonstrate their virtue). Google kin selection.
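The tradeoff described here is usually formalized as Hamilton's rule: altruism is favored when rB > C, where r is genetic relatedness, B the reproductive benefit to the recipient, and C the cost to the altruist. A minimal sketch, with all the numbers purely illustrative:

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: kin selection favors an altruistic act when r*B > C."""
    return relatedness * benefit > cost

# The post's example: person A shares half their genes (r = 0.5) with group B.
# Giving up one life (cost = 1) to save two lives (benefit = 2) sits exactly
# at the break-even point, 0.5 * 2 == 1 -- an even tradeoff, not strictly favored.
print(altruism_favored(0.5, 2, 1))    # even tradeoff -> False
print(altruism_favored(0.75, 2, 1))   # higher relatedness (e.g. honeybee sisters) -> True
print(altruism_favored(1.0, 2, 1))    # clonal colony -> True
```

At r = 1, as in a clonal colony, any act with B > C is favored, which matches the post's prediction that clonal colonies should be the most altruistic of all.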
Ants, honeybees, and slime molds, which share more of their genes with their nestmates than humans do with their family, achieve levels of cooperation that humans would consider horrific if it were required of them. Consider these aphids that explode themselves to provide glue to fill in holes in their community's protective gall.
The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy. The designer of posthumans (for instance, an AI designing its subroutines for a task), OTOH, actually has a decision to make -- where should that balance be set? How much variation should there be in the population (whether of genes, memes, or whatever is most important WRT cooperation)?
A strictly goal-oriented AI would supervise its components and resources so as to optimize the trade-off between "exploration" and "exploitation". (Exploration means trying new approaches; exploitation means re-using approaches that have worked well in the past.) This means that it would set the level of random variation in the population according to certain equations that maximize the expected speed of optimization.
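The post doesn't say which equations the AI would use; one standard, illustrative policy for this tradeoff is epsilon-greedy selection, where a single parameter sets how often the agent explores rather than exploits. A minimal sketch (the function name and values are my own, for illustration):

```python
import random

def epsilon_greedy(estimates, epsilon, rng=random):
    """With probability epsilon, explore a random option;
    otherwise exploit the option with the highest estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))                           # explore
    return max(range(len(estimates)), key=estimates.__getitem__)       # exploit

values = [0.2, 0.9, 0.5]   # estimated payoffs of three approaches
print(epsilon_greedy(values, 0.0))   # pure exploitation: always picks index 1
```

In the post's terms, epsilon plays the role of the population's level of random variation: the agent would tune it to maximize expected optimization speed, with no ethical considerations entering the calculation.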
But choosing the level of variation in a population has dramatic ethical consequences. Creating a more homogeneous population will increase altruism at the expense of individualism. Choosing the amount of variation strictly by maximizing the speed of optimization would mean rolling the dice as to how much altruism vs. individualism your society will have.
Given that you have a goal to achieve, and a parameter setting that will optimize achieving it; and given that you also have a fuzzy ethical issue that bears on how to set that same parameter; anyone who is not a moral realist must say, damn the torpedoes: set the parameter so as to optimize goal-solving. In other words, simply define the correct moral weight to place on collective versus individual goals as whatever results when you set your population's genetic/memetic diversity so as to optimize its exploration/exploitation balance for its goals.
Are you comfortable with that?