The next biggest source of human values is self-organisation. The brain is probably the most important self-organising system involved: most of its desires arise by virtue of its being a large reinforcement learning system.
That's the sort of thing I had in mind. Because our conceptual framework is learned from experience, what we learn to seek is not necessarily what our genes "want". Of course, if you place a human being in "the ancestral environment", you will get learned values that serve the "aim of the genes" reasonably well - but not perfectly. In the modern environment, less so. The brain sometimes wants its own reward signals per se, and more often wants certain distal events that were reinforced over the course of learning.
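To make that mechanism concrete, here is a minimal sketch of the mismatch - my own toy illustration, not something from the discussion above. A bandit-style value learner fits its action-value estimates to an "ancestral" reward function and then carries them, unchanged, into a "modern" one; the action names (seek_sugar, seek_fiber), the reward numbers, and the train helper are all hypothetical choices made for the sketch.

```python
# Toy illustration: learned values track a proxy reward in the training
# ("ancestral") environment, then misfire when the environment changes.
# All names and numbers here are hypothetical.

import random

ACTIONS = ["seek_sugar", "seek_fiber"]

def ancestral_reward(action):
    # In the training environment, the sugar-seeking proxy correlates well
    # with the "aim of the genes" (calories are scarce).
    return 1.0 if action == "seek_sugar" else 0.3

def modern_reward(action):
    # In the changed environment, the same proxy no longer serves that aim.
    return -0.5 if action == "seek_sugar" else 0.5

def train(reward_fn, episodes=2000, lr=0.1, epsilon=0.1):
    """Simple bandit-style value learning: running estimates of action value."""
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)   # occasional exploration
        else:
            action = max(q, key=q.get)        # otherwise act greedily
        q[action] += lr * (reward_fn(action) - q[action])
    return q

random.seed(0)
q = train(ancestral_reward)
preferred = max(q, key=q.get)
print("learned values:", q)                            # seek_sugar valued highest
print("preferred action:", preferred)                  # still seek_sugar
print("what it now earns:", modern_reward(preferred))  # negative in the modern environment
```

With its learned values frozen at the moment of the switch, the agent still prefers the proxy it acquired in training - which is the sense in which learned values can serve the "aim of the genes" in the environment they were learned in, and stop doing so outside it.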
Having thus discovered certain activities to be meaningful and rewarding, people go on to tell each other about them. This strongly shapes the meme environment.
How noble or ignoble this is may be in the eye of the beholder. It doesn't look so ignoble to me.
The idea of values coming from genes does not say anything about whether those desires are adaptive in the modern environment. Humans desire fat and sugar. Those desires are built in - coded in the genes.
The jacket text for Keith Stanovich's The Robot's Rebellion sums up the book well:
The book is an excellent introduction to the first stage of Yudkowskian philosophy: We are robots in a mechanistic universe, running on a Swiss Army knife of cognitive modules. But at least we finally noticed we're robots, and we can use the skills of rationality to hop off our habit treadmills and pursue our values instead. These values are complex and often arbitrary, but we can use our reflective capacities to extrapolate our values based on "higher-order" desires, a desire for preference consistency, and other considerations. All this is argued for at length in Stanovich's book. The only thing missing is a discussion of what to do about all this when AI arrives.