wnoise comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM



Comment author: JoshuaZ 23 April 2010 02:43:05PM 2 points

A lot of problems with this have already been listed in this thread. I'm going to add just two more: consider an otherwise pretty friendly AI that is curious about the universe and wants to understand the laws of physics. No matter how much the AI learns, it will conclude that both it and humans misunderstand the basic laws of physics. It will likely spend tremendous resources trying to work out just what is wrong with its understanding, and given the prior of 1, it will never resolve the issue.

Now consider the same scenario, but with a second, otherwise potentially friendly AI that finds out how the first AI has been programmed. If this second AI is at all close to what humans are like (again, it is a mildly friendly AI), it will become paranoid about the possibility that there's some similar programming issue in itself. It might also take this as strong evidence that humans are jerks. The second AI isn't going to remain friendly for very long.

Comment author: Jack 23 April 2010 04:18:40PM 1 point

"It might also take this as strong evidence that humans are jerks."

If an AI doesn't come to this conclusion within thirty minutes of internet access, it has a serious design flaw, no? :-)