Stuart_Armstrong comments on AI ontology crises: an informal typology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Evolutionary algorithms work well for incompletely defined situations; no emergence is needed to explain that behaviour.
I think "The AI fails to understand its new environment well enough to be able to manipulate it to implement its values" (that's my first scenario) is unlikely, since the AI is the one discovering the new ontology (if we knew it in advance, we would give it to the AI).
Not sure how to approach #2: how could you specify a Newtonian "maximise pleasure over time" in such a way that it stays stable when the AI discovers relativity (and you have to specify this without using your own knowledge of relativity, of course)?