Manfred comments on Value Stability and Aggregation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Rather than "counterintuitive," I'd prefer "inhuman" or "unfriendly." If the creators had linear utility functions over the same stuff, HapMax would fit in just fine. If humans have a near-linear utility function over something, then an AI with a linear utility function there will cause no catastrophes. I can't think of any problems unique to linear weighting: the problem is really when the weighting isn't like ours.