djm comments on What should a friendly AI do, in this situation? - Less Wrong

Post author: Douglas_Reay 08 August 2014 10:19AM




Comment author: djm 09 August 2014 03:37:35AM 0 points

Good question. You might think it would produce a better overall outcome to show the manipulative one, shocking the programmers into breaking the law in order to (possibly) halt the other AI, but if it does this it is no longer an FAI.

Training an FAI should be kept free from any real-world 'disaster scenario' that it may think it needs more power to solve, because the risk that it itself becomes a UFAI is amplified for many reasons (false information, for one).