djm comments on AI: requirements for pernicious policies - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't know whether the AI should be responsible for testing its own policies, especially in the initial stages. We should have a range of human-applied tests that the formative AI runs on each iteration, so that we can see how it is progressing.
"testing" means establishing that they are high ranked in the objective function. An algorithm has to be able to do that, somehow, or there's no point in having an objective function.