nyan_sandwich comments on Some potential dangers of rationality training - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You neglected the part where the AI may stand to learn something from the task, and that learning may have a large expected value relative to the direct payoff of the tasks themselves.
Yeah, but that comes under expected utility.
What else are you optimising besides utility? Calculating with the money tells you the expected money value of the tasks, but unless your utility function is U=$$$, you need to take other things into account.
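To make the point concrete, here is a minimal sketch with made-up numbers (the task names, payoffs, and learning values are all hypothetical): ranking tasks by expected money and ranking them by expected utility can disagree once the value of learning is counted.

```python
# Hypothetical tasks: (expected money payoff, expected learning value),
# both expressed in the same utility units for simplicity.
tasks = {
    "routine_job": (100.0, 0.0),
    "novel_job": (60.0, 50.0),
}

def expected_money(task):
    money, _learning = tasks[task]
    return money

def expected_utility(task):
    # Here utility is simply money plus learning value; any utility
    # function that weights learning at all makes the same point.
    money, learning = tasks[task]
    return money + learning

best_by_money = max(tasks, key=expected_money)
best_by_utility = max(tasks, key=expected_utility)
print(best_by_money)    # routine_job
print(best_by_utility)  # novel_job
```

An agent with U=$$$ picks the routine job; an agent whose utility also counts what it learns picks the novel one, even though the novel job pays less.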