
Stuart_Armstrong comments on AI ontology crises: an informal typology - Less Wrong Discussion

Post author: Stuart_Armstrong, 13 October 2011 10:23AM




Comment author: Stuart_Armstrong, 14 October 2011 01:04:59PM, 0 points

Evolutionary algorithms work well for incompletely defined situations; no emergence is needed to explain that behaviour.
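As a toy illustration (my own hypothetical example; the target string and the sampling scheme are invented for the sketch, not anything from the post), a genetic algorithm can make steady progress toward a specification it never sees in full, because each generation is scored only on a random sample of it. Plain selection and mutation suffice; nothing emergent is involved:

```python
import random

# Toy sketch: a genetic algorithm optimising against an incompletely
# defined objective. TARGET plays the role of the full specification,
# but fitness is only ever evaluated on a random sample of it.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # hypothetical hidden spec

def sampled_fitness(genome, n_samples=4):
    """Agreement with TARGET on a random subset of positions only."""
    idx = random.sample(range(len(TARGET)), n_samples)
    return sum(genome[i] == TARGET[i] for i in idx)

def mutate(genome, rate=0.1):
    """Flip each bit independently with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=sampled_fitness, reverse=True)[:pop_size // 5]
        pop = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    # Full-spec score is used only for reporting, never during selection.
    return max(pop, key=lambda g: sum(a == b for a, b in zip(g, TARGET)))

best = evolve()
print(best, sum(a == b for a, b in zip(best, TARGET)), "/", len(TARGET))
```

Truncation selection on noisy, partial scores is enough here; averaging several samples per evaluation would trade speed for stability.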

I think "The AI fails to understand its new environment enough to be able to manipulate it to implement its values." is unlikely (that's my first scenario) as the AI is the one discovering the new ontology (if we know it in advance, we give it to the AI).

I'm not sure how to approach #2: how could you specify a Newtonian "maximise pleasure over time" in such a way that it stays stable when the AI discovers relativity (and you have to specify this without using your own knowledge of relativity, of course)?
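To make the difficulty concrete, here is one way to formalise it (the notation and the candidate reinterpretations below are my assumptions, not anything from the post). The Newtonian specification integrates a pleasure rate over absolute time, and once relativity arrives, "time" splits into inequivalent candidates:

```latex
% Newtonian specification: pleasure rate p integrated over absolute time t
U_{\text{Newton}} = \int p(t)\,dt

% After the ontology shift there is no absolute t;
% candidate successor utilities disagree:
U_{\gamma} = \int_{\gamma} p(\tau)\,d\tau  % proper time along a worldline \gamma
U_{f} = \int p(t_f)\,dt_f                  % coordinate time of an inertial frame f
```

Different worldlines and frames generally give different values, so the original specification underdetermines its relativistic successor; a stable specification would somehow have to rule out that underdetermination in advance.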