gwern comments on Extreme risks: when not to use expected utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Oh, sure. (Eliezer has a post on specific human inconsistencies from the Overcoming Bias days.) But this is a theoretical result: it says we can go from specific choices ('revealed preferences') to a utility function, i.e. a set of cardinal preferences, that will satisfy those choices, provided the choices are sufficiently consistent. Which is exactly what billswift asked for.
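The core move, from choices to a numeric utility function, can be sketched in miniature. This is only a hypothetical illustration of the ordinal case (consistent pairwise choices yield a ranking); the full cardinal result requires preferences over lotteries, which this toy example does not model. All names here are illustrative.

```python
# Sketch: given revealed pairwise preferences, check consistency
# (no preference cycles) and assign numeric utilities so that
# preferred options always score higher.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def utility_from_preferences(prefs):
    """prefs: iterable of (a, b) pairs meaning 'a was chosen over b'.
    Returns a dict mapping each option to a number such that the
    preferred option in every pair scores higher.  Raises CycleError
    if the preferences are cyclic, i.e. not rationalizable."""
    graph = {}
    for a, b in prefs:
        # 'a depends on b' so that b appears earlier (lower utility)
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    # Topological order lists least-preferred options first
    order = TopologicalSorter(graph).static_order()
    return {option: rank for rank, option in enumerate(order)}

prefs = [("apple", "banana"), ("banana", "cherry")]
u = utility_from_preferences(prefs)
assert u["apple"] > u["banana"] > u["cherry"]
```

If the revealed choices contain a cycle (apple over banana, banana over cherry, cherry over apple), `TopologicalSorter` raises `CycleError`: no utility function can rationalize them, which is the sense in which the choices must be "somewhat rational" for the construction to go through.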
(And I'd note the issue here is not what humans actually do when assessing small probabilities, but what they should do. If we scrap expected utility, it's not clear what the right alternative is; that question is what my other comment is about.)