Stuart_Armstrong comments on AI indifference through utility manipulation - Less Wrong
Good comment.
Knowing what U is and figuring out whether U will result in outcomes that you like are completely different things! We have little grasp of the space of possible outcomes; we don't even know what we want, and we can't imagine some of the things that we don't want.
Yes, we do need to have some idea of what U is - or at least something (a simple AI subroutine applying the filter, or an AI designing its next self-improvement) has to have some idea. But it doesn't need to understand U beyond what is needed to apply F. And since F is considerably simpler than U is likely to be...
It seems plausible that F could be implemented by a simple subroutine even across self-improvement.
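As a rough illustration of that point (a minimal sketch only - the names `make_indifferent`, `triggered`, and `counterfactual` are hypothetical, and this is not necessarily the exact construction from the post), note that a filter F can treat U as an opaque black box and never inspect its internals:

```python
from typing import Callable, Dict

World = Dict[str, float]  # hypothetical toy world-state representation

def make_indifferent(
    U: Callable[[World], float],
    triggered: Callable[[World], bool],
    counterfactual: Callable[[World], World],
) -> Callable[[World], float]:
    """Filter F: wrap an opaque utility U so the agent is indifferent
    to whether a trigger event fires. F only needs to *call* U, not
    understand it, so F stays simple however complicated U is."""
    def U_filtered(w: World) -> float:
        if triggered(w):
            # Score a triggered world as if the event had not fired,
            # so the agent gains nothing by causing or preventing it.
            return U(counterfactual(w))
        return U(w)
    return U_filtered

# Toy example: U rewards paperclips and heavily penalizes a shutdown
# button being pressed, so an unfiltered agent would resist the press.
U = lambda w: w["paperclips"] - 100.0 * w.get("button", 0)
triggered = lambda w: w.get("button", 0) == 1
counterfactual = lambda w: {**w, "button": 0}

U_safe = make_indifferent(U, triggered, counterfactual)
print(U_safe({"paperclips": 3.0, "button": 1}))  # 3.0 - under F, the press costs nothing
```

The point the sketch is meant to make: `make_indifferent` is a few lines and would survive being re-applied verbatim by a successor, while U itself can be arbitrarily complex and remains a black box to it.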