evand comments on Brief Question about FAI approaches - Less Wrong Discussion

3 Post author: Dolores1984 19 September 2012 06:05AM

Comment author: evand | 19 September 2012 03:21:32PM | 2 points

(if you can't model arbitrary ordered information systems, you haven't got an AI)

If I replace "AI" with "general-purpose optimizer of sufficient power to be dangerous", are you still sure this statement is true?

Comment author: Dolores1984 | 19 September 2012 05:29:24PM | 0 points

Pretty sure. Not completely, but it does seem pretty fundamental. You cannot hard-code the workings of the universe into an AI, which means it has to be able to look at the symbols going in and the symbols coming out and ask, "Okay, what sort of underlying system would produce this behavior?" The same goes for humans: if it can't model humans effectively, we can probably kill it.
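To make the "look at what goes in and what comes out, then infer the underlying system" step concrete, here is a minimal, purely illustrative Python sketch (not from the thread; the candidate model family and the observations are invented): it enumerates a toy hypothesis space and keeps whichever candidate systems reproduce the observed input/output behavior.

    # Toy illustration of inferring an underlying system from input/output behavior.
    # The hypothesis space (small-integer quadratics) and the data are assumptions
    # made up for this sketch, not anything from the original discussion.
    from itertools import product

    # Hypothetical observations of an unknown black-box system: (input, output) pairs.
    observations = [(0, 1), (1, 2), (2, 5), (3, 10)]

    def candidate_models():
        """Yield (parameters, model) pairs for y = a*x^2 + b*x + c with small integer coefficients."""
        for a, b, c in product(range(-3, 4), repeat=3):
            yield (a, b, c), (lambda x, a=a, b=b, c=c: a * x * x + b * x + c)

    def consistent(model, data):
        """True if the candidate model reproduces every observed input/output pair."""
        return all(model(x) == y for x, y in data)

    # Keep every candidate "underlying system" that explains the behavior seen so far.
    survivors = [params for params, model in candidate_models() if consistent(model, observations)]
    print(survivors)  # [(1, 0, 1)], i.e. y = x^2 + 1 fits this toy data

A real modeler of "arbitrary ordered information systems" would of course need a vastly richer hypothesis space and a way to trade off fit against complexity; the point of the sketch is only the shape of the inference, not its difficulty.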