gwern comments on Ontological Crises in Artificial Agents' Value Systems by Peter de Blanc - Less Wrong

Post author: jimrandomh 21 May 2011 01:05AM


Comment author: gwern 03 July 2011 10:00:23PM

I've (tried to) read it several times. While I agree with the basic idea of finding isomorphisms by looking at bisimulations or bijections, and minimizing differences sounds like a good idea inasmuch as it follows Occam's razor, a lot of it seems unmotivated and unexplained.

Take the use of the Kullback-Leibler divergence. Why that, specifically? Is it just that obvious and desirable? It would seem to have some not especially useful properties, like not being symmetric (so would an AI using it exhibit non-monotonic behavior when changing ontologies?), which don't seem to be discussed.
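To make the asymmetry concrete, here's a minimal sketch (my own illustration, not from the paper) computing the KL divergence between two discrete distributions in both directions; the distributions `p` and `q` are arbitrary examples:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as probability lists.
    Terms where p_i == 0 contribute nothing, by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]

forward = kl_divergence(p, q)  # D_KL(P || Q) ~ 0.368
reverse = kl_divergence(q, p)  # D_KL(Q || P) ~ 0.511
print(forward, reverse)
```

Since the forward and reverse values differ, an agent penalizing ontology changes by KL divergence would score the move from one ontology to another differently than the move back, which is what makes me wonder about non-monotonic behavior.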