ChristianKl comments on What should a friendly AI do, in this situation? - Less Wrong

8 Post author: Douglas_Reay 08 August 2014 10:19AM




Comment author: ChristianKl 11 August 2014 09:20:03AM 0 points

The fact that the AI could theoretically game you is exactly why it's important to give it a precommitment not to game you, and not even to think about gaming you.

Comment author: RichardKennaway 11 August 2014 10:22:35AM 0 points

How do you give a superintelligent AI a precommitment?

Comment author: ChristianKl 11 August 2014 10:51:50AM 0 points

How do you build a superintelligent AI in the first place? I think there are plenty of ways to give the programmers direct access to the AI's internal deliberations, and to treat anything that looks like the AI even thinking about manipulating the programmers as a threat.

Comment author: VAuroch 11 August 2014 08:41:44PM -1 points

I'm not sure how you could even specify 'don't game me'. That's much more complicated than 'don't manipulate me', which is itself pretty difficult to specify.

This clearly isn't going anywhere, and if there's an inferential gap I can't see what it is. So unless there's some premise of yours you want to explain, or you think there's something I should explain, I'm done with this debate.