ancientcampus comments on An example of deadly non-general AI

Post author: Stuart_Armstrong | 21 August 2014 02:15PM


Comment author: ancientcampus | 23 August 2014 06:11:52PM | 1 point

I think this sums it up well. To my understanding, it would only require someone "looking over its shoulder": asking it, for each drug, what its specific objective was and what effects it expects the drug to have. I doubt a "limited intelligence" would be able to lie. That is, unless it somehow mutated or accidentally became a more general AI, but then we've jumped the rails into a different problem.
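For concreteness, here's a toy sketch of the kind of "over the shoulder" check I mean (my own illustration, not anything from the post; the drug names, objective strings, and the `oversight_gate` helper are all made up). The narrow optimizer has to report, for each candidate drug, the objective it actually optimized and the effects it predicts, and a human reviewer approves or rejects before anything ships:

```python
from dataclasses import dataclass

@dataclass
class CandidateDrug:
    name: str
    objective: str           # what the optimizer was actually maximizing
    predicted_effects: list  # effects the model expects in patients

def oversight_gate(candidates, approve):
    """Ask the 'over the shoulder' questions for each drug; ship only what a human approves."""
    shipped = []
    for drug in candidates:
        print(f"Candidate: {drug.name}")
        print(f"  Optimized objective: {drug.objective}")
        print(f"  Predicted effects:   {drug.predicted_effects}")
        if approve(drug):    # human reviewer sees the report and decides
            shipped.append(drug)
    return shipped

# A narrow optimizer that maximized "pathogen kill rate" with no safety term
# can't hide that: it reports its objective verbatim, and the reviewer rejects it.
candidates = [
    CandidateDrug("compound-17", "maximize pathogen kill rate",
                  ["kills pathogen", "suppresses immune response"]),
    CandidateDrug("compound-42", "maximize kill rate subject to no immune suppression",
                  ["kills pathogen"]),
]
safe = oversight_gate(
    candidates,
    approve=lambda d: "suppresses immune response" not in d.predicted_effects,
)
print([d.name for d in safe])  # -> ['compound-42']
```

Of course, this only works as long as the reported objective is an honest readout of what the system actually optimized, which is exactly the "can it lie?" question above.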

It's possible that I'm paying too much attention to your example and not enough to your general point. I guess the moral of the story is "limited AI can still be dangerous if you don't take proper precautions", or "incautiously coded objectives can be just as dangerous in a limited AI as in a general one". Which I agree with, and think is a good point.