Clarity comments on Superintelligence 21: Value learning - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't like Dewey's portfolio approach to utility functions. It goes something like this:
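Roughly: the agent keeps a pool of candidate utility functions, each with a probability, and scores actions by expected utility over the whole pool. Here is a minimal sketch of that idea as I read it, not Dewey's exact formalism; all names and the toy pool are mine:

```python
# Minimal sketch of a "portfolio" of utility functions (illustrative only):
# the agent holds candidate utility functions, each with a probability,
# and rates an action by its probability-weighted utility over the pool.
from typing import Callable, List, Tuple

UtilityFn = Callable[[str], float]
Pool = List[Tuple[float, UtilityFn]]  # (probability, utility function) pairs

def expected_utility(action: str, pool: Pool) -> float:
    """Probability-weighted utility of one action across the pool."""
    return sum(p * u(action) for p, u in pool)

def choose_action(actions: List[str], pool: Pool) -> str:
    """Pick whichever action maximizes expected utility over the pool."""
    return max(actions, key=lambda a: expected_utility(a, pool))

# Toy pool: two hypotheses about what matters, weighted 0.6 / 0.4.
pool: Pool = [
    (0.6, lambda a: 1.0 if a == "help" else 0.0),
    (0.4, lambda a: 1.0 if a == "hoard" else 0.0),
]
print(choose_action(["help", "hoard"], pool))  # -> "help"
```

New evidence just reweights the pool, so behavior can swing sharply once one candidate's weight edges past another's, which is the problem described next.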
That's roughly how I think about my own goals, and it's definitely not very good for sustaining long-term positive relationships with people. My outwardly professed goals can change in what looks like a change of temperament. When the probability that I can secure one goal slightly surpasses that of another, equally valuable goal, the limits of my attention kick in and I overinvest in the new goal relative to the effort its expected utility actually warrants. So a more intelligent system should either be able to profess its hierarchy of preferences (values) with greater sophistication than I can, or have a split attention, unlike me.
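To make the overinvestment concrete, here is a toy contrast (the goal names and numbers are made up for illustration): an attention-limited agent that puts all effort on whichever goal currently leads on expected value, versus a split-attention agent that spreads effort in proportion to expected value.

```python
# Two equally valuable goals; a tiny edge in success probability makes
# the attention-limited agent dump ALL effort on the leader, while the
# split-attention agent allocates effort in proportion to expected value.
goals = {"goal_a": 0.51, "goal_b": 0.49}  # success probabilities
VALUE = 100.0  # both goals are worth the same if achieved

def limited_attention(goals):
    """Overinvest: all effort goes to the single leading goal."""
    leader = max(goals, key=goals.get)
    return {g: (1.0 if g == leader else 0.0) for g in goals}

def split_attention(goals):
    """Allocate effort in proportion to each goal's expected value."""
    total = sum(p * VALUE for p in goals.values())
    return {g: p * VALUE / total for g, p in goals.items()}

print(limited_attention(goals))  # {'goal_a': 1.0, 'goal_b': 0.0}
print(split_attention(goals))    # {'goal_a': 0.51, 'goal_b': 0.49}
```

A 0.02 shift in probability flips the first agent's entire effort allocation, while the second agent's allocation moves by the same 0.02, which is what I mean by professing the preference hierarchy with more sophistication.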