Nisan comments on A taxonomy of Oracle AIs - Less Wrong

Post author: lukeprog 08 March 2012 11:14PM




Comment author: Nisan 16 March 2012 04:00:35AM 0 points

So, the reason we wouldn't fall for that one is that the therapy wouldn't pass the safety tests required by first-world governments. We have safety tests for all sorts of new technologies, with the stringency of the tests depending on the kind of technology — some testing for children's toys, more testing for drugs, and hopefully even more testing for permanent cognitive enhancement. It seems like these tests should protect us from a Question-Answerer as much as they protect us from human mistakes.

An actual unfriendly AI seems scarier because it could deliberately try to pass our safety tests while still pursuing its terminal goals. But a Question-Answerer designing something that passes all the tests and nevertheless causes disaster seems about as likely as a well-intentioned but not completely competent human doing the same.

I guess I should have asked for a disaster scenario involving a Question-Answerer that is more plausible than the same scenario with the AI replaced by a human.