Gram_Stone comments on [Stub] Ontological crisis = out of environment behaviour? - Less Wrong Discussion

8 Post author: Stuart_Armstrong 13 January 2016 03:10PM

Comment author: Gram_Stone 13 January 2016 10:25:04PM 3 points

I wonder whether it would be possible to permanently anchor an agent to its original ontology: to specify that the ontology with which it was initialized is the perspective it must use when evaluating its utility function. The agent is permitted to build whatever models it needs, but it is only allowed to assign value using its primitive concepts.

That actually seems like what humans do. Human confusions about moral philosophy even seem quite similar to an ontological crisis in an AI.