Wei_Dai comments on Where do selfish values come from? - Less Wrong

27 points · Post author: Wei_Dai · 18 November 2011 11:52PM




Comment author: Wei_Dai · 19 November 2011 07:20:24PM · 12 points

Have you considered evolution?

It sounds like I might have skipped a few inferential steps in this post and/or chosen a bad title. Yes, I'm assuming that if we are selfish, then evolution made us that way. The post starts at the follow-up question "if we are selfish, how might that selfishness be implemented as a decision procedure?" (i.e., how would you program selfishness into an AI?) and then considers "what implications does that have as to what our values actually are or should be?"
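To make the "selfishness as a decision procedure" question concrete, here is a minimal sketch (my illustration, not anything from the post): the distinguishing feature of a selfish utility function is an index pointing back at the agent itself, which a non-indexical utility function lacks. All names here are hypothetical.

```python
# Hypothetical sketch: selfishness implemented as an indexical utility
# function. A "world" is just a list of per-agent welfare levels.

def selfish_utility(world, self_index):
    """Indexical: utility depends only on the agent at self_index."""
    return world[self_index]  # ignores everyone else's welfare

def impartial_utility(world, self_index):
    """Non-indexical: total welfare, with no special index."""
    return sum(world)

world = [3, 10, 1]

# The selfish agent at index 2 evaluates this world at 1, while the
# impartial evaluation is 14; the two procedures rank actions differently.
assert selfish_utility(world, self_index=2) == 1
assert impartial_utility(world, self_index=2) == 14
```

The point of the sketch is only that the `self_index` parameter is doing all the work: programming selfishness into an AI means somehow binding that index to the AI itself.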

Comment author: Stuart_Armstrong · 21 November 2011 10:27:06AM · 4 points

What I meant in my post is that if you start with random preferences, the ones we designate as selfish are the ones that survive. So what we intuitively think of as selfishness (me-first: a utility function with an index pointing to myself) arises naturally from non-indexical starting points (evolving agents with random preferences).

If it arose this way, then it is less mysterious what it is, and we could start looking at evolutionarily stable decision theories or the like. You don't even need evolution itself; it's enough to ask "which preferences would be advantageous should the AI be subject to evolutionary pressure?"
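The selection story above can be sketched in a toy simulation (my illustration, not Armstrong's model; all parameters are made up): agents start with random weights on their own payoff versus others', self-favoring weights earn higher fitness, and the population converges on what we would label selfish preferences.

```python
import random

# Toy model: each agent's preference is a weight w in [0, 1] on its own
# payoff. Weights start random (the non-indexical starting point).
random.seed(0)
population = [random.random() for _ in range(200)]

for generation in range(30):
    # An agent maximizing w*own + (1-w)*other keeps a contested resource
    # iff w > 0.5, so self-favoring weights earn higher fitness here.
    fitness = [1.0 if w > 0.5 else 0.1 for w in population]
    # Fitness-proportional reproduction with a small mutation, clipped to [0, 1].
    population = [
        min(1.0, max(0.0,
            random.choices(population, weights=fitness)[0] + random.gauss(0, 0.02)))
        for _ in range(len(population))
    ]

# Under this selection pressure, self-regarding preferences dominate.
selfish_fraction = sum(w > 0.5 for w in population) / len(population)
assert selfish_fraction > 0.9
```

Nothing in the setup mentions a self; the "me-first" index emerges because fitness is, by construction, evaluated per-agent, which is the sense in which selfishness falls out of evolutionary pressure rather than being assumed.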