The only real way one can escape the dictates of genes, memes, and other selected-upon replicators is to change their selective environment so that one's current values (or somewhat improved-upon values) are favoured. It is a difficult problem we need to engineer ourselves out of, one that we barely consider at all.
For some time now I've considered evolution in the long term to be as threatening to human values and well-being as a uFAI, and the failure to take its effects into account to be the great consistent failing of even the most successful human societies.
The only real way one can escape the dictates of genes, memes, and other selected-upon replicators is to change their selective environment so that one's current values (or somewhat improved-upon values) are favoured. It is a difficult problem we need to engineer ourselves out of, one that we barely consider at all.
It doesn't sound like much of an "escape". You are apparently proposing engineering the environment so that the heritable information you favour comes to dominate. That information would then simply be a new set of genes or memes.
The jacket text for Keith Stanovich's The Robot's Rebellion sums up the book well:
The book is an excellent introduction to the first stage of Yudkowskian philosophy: we are robots in a mechanistic universe, running on a Swiss Army knife of cognitive modules. But at least we have finally noticed we're robots, and we can use the skills of rationality to step off our habit treadmills and pursue our values instead. These values are complex and often arbitrary, but we can use our reflective capacities to extrapolate them based on "higher-order" desires, a desire for preference consistency, and other considerations. All of this is argued for at length in Stanovich's book. The only thing missing is a discussion of what to do about it all when AI arrives.