While I was writing up this post, the Embedded Agents post went up. This work seems conceptually relevant to three of the four areas of interest identified there, with Embedded World-Models being the odd one out, because the authors explicitly skip the question of internal models.
Looking at this paper again in that light, I am immediately curious whether we can apply the framework iteratively to sub-systems of the system of interest. The answer seems almost certainly to be yes.
I also take it more or less for granted that these same ideas can be used to define semantic information relative to some arbitrary goal, or set of goals. Casting the framework in information-theoretic terms seems very helpful for this purpose: there should be some correspondence between the viability function and the partial achievement of goals (a rough sketch of one possible correspondence follows below).
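For concreteness: as I read the paper, the viability function is the negative Shannon entropy of the distribution over the system's states at some future time $\tau$. The goal-relative variant below is my own speculation, not something in the paper; one hypothetical form replaces entropy with the probability mass assigned to a goal set $G$ of states:

$$V(p_{X_\tau}) = -H(p_{X_\tau}) \qquad \text{(viability, as I read the paper)}$$

$$V_G(p_{X_\tau}) = \Pr(X_\tau \in G) = \sum_{x \in G} p_{X_\tau}(x) \qquad \text{(hypothetical goal-relative variant)}$$

Partial achievement of goals would then fall out naturally by replacing the indicator on $G$ with a graded membership or utility function over states.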
Leaning on the information-theoretic interpretation again, I am not sure that discontinuous persistence of the system (or goal) would even require different treatment: since viability is evaluated at a future time, nothing obviously demands continuity in between. That would make things like the hibernation of a tardigrade, hiring a contractor at a future date, and an AI reloading itself from backup all approachable.
But the devil is in the details, so I will table these speculations until after seeing whether the rest of the paper passes the sniff test.
This is quite interesting. There is a lot of effort going into figuring out how we get meaning out of information. I think of it mostly in terms of how consciousness comes to relate to information in useful ways, though that framing makes sense to me mostly because I am working in an unusual paradigm (transcendental phenomenological idealism). Still, I see many of the same efforts to deal with meaning popping up in formal theories of consciousness, even when meaning isn't the exact thing those theories are driving at; I see the two as so closely tied that it is hard to address one without touching on the other.
This is a recent paper by Artemy Kolchinsky and David H. Wolpert of the Santa Fe Institute, published in The Royal Society Interface on 19 October. They propose a formal theory of semantic information, which is to say a formal account of how to describe meaning. I am going over it in the style proposed here and shown here, approximately.
I will go through the sections in-line at first and circle back if appropriate. Mostly this is because, when I pasted in the table of contents, it conveniently kept the links to the individual sections of the paper, which is an awesome feature.
Note: I have left the links below for completeness, and to make it easy to interrogate the funding/associations of the authors. The appendices have some examples they develop.
End: I am putting this up before delving into the body sections in any detail, not least for reasons of length and readability. If there is interest, I can summarize those sections in the comments.