Manfred comments on Expected utility and utility after time - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The typical way to handle this (for example, the AIXI model does this) is to just add up (integrate) the utility at each point in time and maximize the expected total. This sort of agent would quickly and without worry choose option 1.
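The "add up utility at each point in time and maximize the expected total" rule can be sketched in a few lines. This is just an illustration, not AIXI itself: the option names and their per-step payoffs below are made up for the example.

```python
# Sketch of a total-utility maximizer: each option is a function from a
# discrete time step to instantaneous utility, and the agent picks the
# option with the largest sum over a fixed horizon.

def total_utility(utility_at, horizon):
    """Sum instantaneous utility over discrete time steps 0..horizon-1."""
    return sum(utility_at(t) for t in range(horizon))

def choose(options, horizon=100):
    """Return the option name whose summed utility is largest."""
    return max(options, key=lambda name: total_utility(options[name], horizon))

# Hypothetical payoffs: option 1 pays a steady 1 util per step,
# option 2 pays 5 utils once at t = 0 and nothing afterwards.
options = {
    "option 1": lambda t: 1.0,
    "option 2": lambda t: 5.0 if t == 0 else 0.0,
}

print(choose(options))  # option 1: 100 total utils beats 5
```

With these toy payoffs the agent "quickly and without worry" takes the steady stream, since only the integrated total matters.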
Obviously this is not the way humans actually work. That is not necessarily bad, but we should at least work in *some* consistent way, or else we risk not working at all.