I agree this is an important problem, but the choice of ontology, and how to upgrade from one ontology to another, seem to involve so many hard philosophical problems that the paper's approach of looking for specific algorithms for upgrading ontologies seems doomed, or at least highly premature. I wonder whether the author takes that approach seriously, or is using it only for pedagogical purposes.
I've (tried to) read it several times. While I agree with the basic idea of finding isomorphisms by looking at bisimulations or bijections, and minimizing differences sounds like a good idea inasmuch as it follows Occam's razor, a lot of it seems unmotivated and unexplained.
Take the use of the Kullback-Leibler divergence. Why that, specifically - is it just that obvious and desirable? It seems to have some not especially useful properties, like not being symmetric (so would an AI using it exhibit non-monotonic behavior when changing ontologies?), which don't seem to be discussed.
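To make the asymmetry concrete, here's a minimal Python sketch; the two distributions are made up for illustration and aren't taken from the paper. It just evaluates D_KL(P||Q) and D_KL(Q||P) for the same pair and shows they differ.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(P || Q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two toy distributions over the same two outcomes (made up for illustration).
p = [0.5, 0.5]
q = [0.9, 0.1]

print(kl_divergence(p, q))  # ~0.511 nats
print(kl_divergence(q, p))  # ~0.368 nats -- not equal, so KL is not symmetric
```

So "how far P is from Q" and "how far Q is from P" come out differently, which is the property I'd want the paper to address if KL is what drives the choice between candidate ontology mappings.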
I saw this go by on arXiv, and thought it deserved a discussion here.
I'll post my analysis and opinion of this paper in a comment after I've taken some time to digest it.