ChristianKl comments on What should a friendly AI do, in this situation? - Less Wrong
The fact that it could theoretically game you is exactly why it's important to give it a precommitment not to game you, and not even to think about gaming you.
How do you give a superintelligent AI a precommitment?
How do you build a superintelligent AI in the first place? I think there are plenty of ways to give the programmers direct access to the AI's internal deliberations, and to treat anything that looks like the AI even thinking about manipulating the programmers as a threat.
I'm not sure how you could even specify 'don't game me'. That's much more complicated than 'don't manipulate me', which is itself pretty difficult to specify.
This clearly isn't going anywhere, and if there's an inferential gap I can't see what it is. So unless there's some premise of yours you want to explain, or you think there's something I should explain, I'm done with this debate.