gwern comments on Ontological Crises in Artificial Agents' Value Systems by Peter de Blanc - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (2)
I've (tried to) read it several times. While I agree with the basic idea of finding isomorphisms by looking at bisimulations or bijections, and minimizing differences sounds like a good idea inasmuch as it follows Occam's razor, a lot of it seems unmotivated and unexplained.
Like the use of the Kullback-Leibler divergence. Why that, specifically - is it just that obvious and desirable? It would seem to have some not especially useful properties, like not being symmetrical (so would an AI using it exhibit non-monotonic behavior when changing ontologies?), which don't seem to be discussed.
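To illustrate the asymmetry worry concretely (this is just a toy numerical sketch, not anything from de Blanc's paper - the distributions P and Q are arbitrary examples):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions given as lists of probabilities over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]  # uniform over two outcomes
q = [0.9, 0.1]  # skewed over the same two outcomes

d_pq = kl_divergence(p, q)  # ~0.511 nats
d_qp = kl_divergence(q, p)  # ~0.368 nats

# The two directions disagree, so "distance minimized" depends on
# which ontology is treated as the reference distribution.
print(d_pq, d_qp)
```

So an agent that minimizes D(old || new) and one that minimizes D(new || old) can prefer different translations of the same utility function, which is the kind of direction-dependence the asymmetry question is getting at.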