GabrielDuquette comments on Value evolution - Less Wrong Discussion
But creatures that can communicate and cooperate are capable of converging over time on greater subjective well-being for most. CEV is a fine moon to miss if it means we'll still end up amongst the stars.
EDIT: Your posts are thought-provoking as all-get-out. Here's my provoked thought: what if the AI implements something like a moral Total Perspective Vortex, which effects a kind of empathic selection pressure? What does a future society composed of Amanda Knoxes look like?
I'm not convinced of that. The act of trying to converge requires learning and exploring ideas. This creates new concepts and new understandings, situates morality in a higher-dimensional space with more known consequences to consider, and enables more complex social structures. All these things (judging from history) make morality more complex faster than they iron out the inconsistencies.
(I don't understand the last sentence - is Amanda Knox supposed to be particularly virtuous?)
Even within a highly complex space of moral possibilities, there will be large moral clusters of near-equilibrium as well as delusional-but-momentarily-stable ones. I got ahead of myself by mentioning Amanda Knox. I think what I really wanted to ask was this: if a particular gene/environment interaction produces near-equilibrium, what happens if everybody else gets vaporized/eaten by zombies/etc.? Does the de facto Utopia diversify itself back into ambivalence over time? Can this be prevented/managed?