derekz comments on Expected futility for humans - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (35)
No, of course it's not for "running your life"; that would be the approach of constructing a complete model (the right stance for FAI, the wrong one for human rationality). It's for mending errors in the mind that runs your life.
The special place of expected utility maximization comes from the conjecture that any coherence constraint on thought can be restated in terms of expected utility maximization. My example can obviously be translated as well: assign a utility to each outcome given the possible states of the binary variables X and Y, and a probability to X. This form won't be the most convenient (the original one may be better), but it's still equivalent; the structure of what's required of a coherent opinion is no stronger.
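To make the translation concrete, here is a minimal sketch of expected utility maximization over one binary unknown X. The actions, utility table, and probability below are hypothetical placeholders, not the example from the original discussion; the point is only the form: EU(a) = P(X)·U(a, X) + (1−P(X))·U(a, ¬X), then pick the action with the highest EU.

```python
# Hypothetical illustration of expected utility maximization
# with a single binary unknown X. Numbers are made up.

def expected_utility(action, p_x, utility):
    """EU(a) = P(X) * U(a, X=True) + (1 - P(X)) * U(a, X=False)."""
    return p_x * utility[(action, True)] + (1 - p_x) * utility[(action, False)]

# Hypothetical utility table: U(action, state of X)
utility = {
    ("act", True): 10, ("act", False): -5,
    ("wait", True): 0, ("wait", False): 0,
}

p_x = 0.4  # assumed probability that X holds

# The coherent choice is whichever action maximizes expected utility.
best = max(["act", "wait"], key=lambda a: expected_utility(a, p_x, utility))
print(best)  # prints: act  (EU = 0.4*10 + 0.6*(-5) = 1.0 > 0)
```

Any rule about which opinions fit together can, on the conjecture above, be recast as constraints on such a utility table and probability assignment, even when (as here) the tabular form is clumsier than the original statement.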
As I said, it's just a special case, and utility maximization isn't the best form for thinking about it (as you noted, simple logic suffices here). The conjecture is that everything in decision-making is a special case of utility maximization.