steven0461 comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

23 Post author: multifoliaterose 14 June 2011 03:19AM

Comment author: steven0461 14 June 2011 10:43:46PM 4 points

As I see it, humans have revealed behavioral tendencies and reflected preferences. I share your reservations about "revealed preferences": if they differ from both of those, they'd have to mean something in between. Maybe revealed preferences would be what's left after reflection that fixes means-ends mistakes but no other reflection, if that makes sense. But when is that concept useful? If you're going to reflect on means-ends, why not reflect all the way?

Also note that the preferences someone reveals through programming them into a transhuman AI may be vastly different from the preferences someone reveals through other sorts of behavior. My impression is that many people who talk about "revealed preferences" probably wouldn't count the former as authentic revealed preferences, so they're privileging behavior that isn't too verbally mediated, or something. I wonder if this attributing revealed preference to a person rather than a person-situation pair should set off fundamental attribution error alarms.

If we have nothing to go by except behavior, it seems underdetermined whether we should say it's preferences or beliefs (aliefs) or akrasia that's being revealed, given that these factors determine behavior jointly and that we're defining them by their effects. With reflected preferences it seems like you can at least ask the person which of these factors they identify as having caused their behavior.

Comment author: Will_Newsome 14 June 2011 11:26:55PM -1 points

I wonder if this attributing revealed preference to a person rather than a person-situation pair should set off fundamental attribution error alarms.

Good plausible hypothesis to cache for future priming, but I'm not sure I fully understand it:

preferences someone reveals through programming them into a transhuman AI

More specifically, what process are you envisioning here (or think others might be envisioning)?