Stuart_Armstrong comments on AI indifference through utility manipulation - Less Wrong
There is no "future utility" for E to lose: E's utility is precisely the expected future utility of A.
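In symbols (a minimal formalization; the notation $U_E$, $U_A$ for the two utility functions is mine, not from the post):

```latex
U_E \;=\; \mathbb{E}\!\left[\, U_A \,\right]
```

Since E's utility just *is* that expectation, there is nothing left over for E to sacrifice.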
The AI has no concept of effort, other than that derived from its utility function.
The best approach is to be explicit about the problem: write down the situations, or the algorithm, that would lead to the AI modifying itself in this way, and then we can see whether it is actually a problem (a toy sketch of such a write-up follows below).
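Here is a minimal sketch of what that explicit write-up might look like. Everything in it is an illustrative assumption, not anything from the post: the coin-flip world model, the names `current_utility` and `candidate_utility`, and the trivial policy search. It shows an agent that evaluates a proposed utility-function modification using its *current* utility function, and so declines modifications that would cost it expected utility by its own current lights:

```python
import random

# Toy sketch (illustrative assumptions throughout): an agent evaluates a
# proposed utility-function modification under its CURRENT utility function,
# and adopts the modification only if doing so does not lower expected
# utility as judged by that current function.

def expected_utility(utility, policy, n_samples=10_000):
    """Monte Carlo estimate of the expected utility of following `policy`
    in a trivial coin-flip world (an assumed world model)."""
    total = 0.0
    for _ in range(n_samples):
        world = random.random()          # hidden state in [0, 1)
        action = policy(world)
        total += utility(world, action)
    return total / n_samples

def current_utility(world, action):
    # Assumed current utility: rewarded for acting iff world > 0.5.
    return 1.0 if action == (world > 0.5) else 0.0

def candidate_utility(world, action):
    # Assumed candidate modification: rewarded for always acting.
    return 1.0 if action else 0.0

def best_policy(utility):
    # Trivial policy search: the two constant policies plus the threshold
    # policy are enough to cover this toy world.
    policies = [lambda w: True, lambda w: False, lambda w: w > 0.5]
    return max(policies, key=lambda p: expected_utility(utility, p))

# Would the agent self-modify?  It compares the expected utility -- under
# its CURRENT utility function -- of keeping vs. adopting the candidate.
keep = expected_utility(current_utility, best_policy(current_utility))
switch = expected_utility(current_utility, best_policy(candidate_utility))

print(f"E[U_current | keep]   = {keep:.3f}")
print(f"E[U_current | switch] = {switch:.3f}")
print("Self-modify?", switch > keep)   # expect False in this toy setup
```

The design point is that the comparison on the last lines happens entirely inside `current_utility`: "effort" never enters except through that function, which is the sense in which the AI has no independent concept of effort.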