Unknowns comments on Why I will Win my Bet with Eliezer Yudkowsky - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I do not agree that you have to completely understand what it is doing. As long as it uses a fixed objective function that evaluates positions and outputs moves, and that function is derived solely from the game of chess and not from any premises about the world, it cannot do anything dangerous, even if you have no idea of the particulars of that function.
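A minimal sketch of the kind of system I mean, assuming a toy representation of positions and moves (the piece values, the list-based position encoding, and the capture-only "moves" are all illustrative inventions, not anything from the discussion):

```python
# Fixed objective function defined purely in terms of chess material.
# It makes no reference to anything outside the game.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(position):
    """Material balance from White's point of view.
    Uppercase letters are White pieces, lowercase are Black."""
    score = 0
    for piece in position:
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

def choose_move(position, legal_moves, apply_move):
    """Output the legal move whose resulting position scores highest
    under the fixed evaluation; nothing else influences the choice."""
    return max(legal_moves, key=lambda m: evaluate(apply_move(position, m)))

# Toy usage: a position is a list of piece letters, and a "move" here
# is simply the capture of one Black piece.
position = ["K", "Q", "k", "r", "p"]

def apply_move(pos, captured):
    new = list(pos)
    new.remove(captured)
    return new

best = choose_move(position, legal_moves=["r", "p"], apply_move=apply_move)
print(best)  # capturing the rook scores higher than capturing the pawn
```

The point the sketch makes concrete: the agent's entire "motivation" is the function `evaluate`, whose inputs and outputs are game positions and scores. Even a far more complicated evaluation function, one nobody fully understands, would still be confined in the same way so long as it is defined over positions rather than over facts about the world.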
Also, I am not proposing that the function of an AI has to be this simple. This is a simplification to make the point easier to understand. The real point is that an AI does not have to have a goal in the sense of something like "acquiring gold", that it should not have such a goal, and that we are capable of programming an AI in such a way as to ensure that it does not.