djm comments on AI: requirements for pernicious policies - Less Wrong

7 Post author: Stuart_Armstrong 17 July 2015 02:18PM



Comment author: djm 17 July 2015 04:28:57PM 0 points

> whether the AI can test these policies. Even if the AI can find pernicious policies that rank high on its objective function, it will never implement them unless it can ascertain this fact

I'm not sure the AI should be responsible for testing its own policies, especially in the initial stages. We should have a range of human-devised tests that the formative AI runs on each iteration, so that we can see how it is progressing.

Comment author: Stuart_Armstrong 20 July 2015 09:37:22AM 0 points

"testing" means establishing that they are high ranked in the objective function. An algorithm has to be able to do that, somehow, or there's no point in having an objective function.