I'll also give you two examples of using ontologies — as in "collections of things and relationships between things" — for real-world tasks that are much dumber than AI.
I'll give you an example of an ontology in a different field (linguistics) and maybe it will help.
This is WordNet, an ontology of the English language. If you type "book" and keep clicking "S:" and then "direct hypernym", you will learn that book's place in the hierarchy is as follows:
... > object > whole/unit > artifact > creation > product > work > publication > book
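If it helps to make the "giant tree of words" concrete in code, here is a minimal sketch that walks the same hypernym chain with NLTK's WordNet interface (assuming nltk is installed and the wordnet corpus has been downloaded; the exact output depends on which sense of "book" is listed first):

```python
# Minimal sketch: climb the "direct hypernym" links for "book" with NLTK.
# Assumes `pip install nltk` and nltk.download("wordnet") have been run.
from nltk.corpus import wordnet as wn

synset = wn.synsets("book")[0]   # first listed sense of "book" (the publication)
chain = [synset]
while synset.hypernyms():        # follow one hypernym link at a time, up to the root
    synset = synset.hypernyms()[0]
    chain.append(synset)

print(" > ".join(s.name().split(".")[0] for s in reversed(chain)))
# roughly: entity > physical_entity > object > whole > artifact > ... > publication > book
```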
So if I had to understand one of the LessWrong (-adjacent?) posts mentioning an "ontology", I would forget about philosophy and just think of a giant tree of words. Because I like concrete examples.
Now let's go and look at one of those posts.
https://arbital.com/p/ontology_identification/#h-5c-2.1 , "Ontology identification problem":
Consider chimpanzees. One way of viewing questions like "Is a chimpanzee truly a person?" - meaning, not, "How do we arbitrarily define the syllables per-son?" but "Should we care a lot about chimpanzees?" - is that they're about how to apply the 'person' category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.
My "tree of words" understanding: we classify things into "human minds" or "not human minds", but now that we know more about possible minds, we don't want to use this classification anymore. Boom, we have more concepts now and the borders don't even match. We have a different ontology.
From the same post:
In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds.
My understanding: You learned more about carbon and now you have new concepts in your ontology: carbon-12 and carbon-14. You want to know if a "diamond" should be "any carbon" or should be refined to "only carbon-12".
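To keep leaning on the "tree of words" picture, here's a toy sketch of that refinement (my own illustration, not anything from the post) with the ontology as a parent map:

```python
# Toy illustration (mine, not from the post): the diamond maximizer's ontology
# as a parent map, before and after it learns nuclear physics.
before = {
    "diamond": "carbon",
    "carbon": "matter",
}

after = {
    "carbon-12": "carbon",
    "carbon-14": "carbon",
    "carbon": "matter",
    "diamond": "carbon",   # ...or should "diamond" now point at "carbon-12" only?
}
```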
Let's take a few more posts:
https://www.lesswrong.com/posts/LeXhzj7msWLfgDefo/science-informed-normativity
The standard answer is that we say “you lose” - we explain how we’ll be able to exploit them (e.g. via dutch books). Even when abstract “irrationality” is not compelling, “losing” often is. Again, that’s particularly true under ontology improvement. Suppose an agent says “well, I just won’t take bets from Dutch bookies”. But then, once they’ve improved their ontology enough to see that all decisions under uncertainty are a type of bet, they can’t do that - or at least they need to be much more unreasonable to do so.
My understanding: You thought only [particular things] were bets so you said "I won't take bets". I convinced you that all decisions are bets. This is a change in ontology. Maybe you want to reevaluate your statement about bets now.
https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
Ontology identification is the problem of mapping between an AI’s model of the world and a human’s model, in order to translate human goals (defined in terms of the human’s model) into usable goals (defined in terms of the AI’s model).
My understanding: the AI and the human have different sets of categories, and the AI can't tell what you want it to do if your categories don't match its own. Like, maybe you have "creative work" in your ontology, as a subcategory of "creations by human-like minds". You tell the AI that you want to maximize the number of creative works and it starts planting trees, because in its ontology trees count just fine. "A tree is not a creative work" is not an objective fact about a tree; it's a property of your ontology; sorry. (Trees are pretty cool.)
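As a toy version of that mismatch (again my own illustration, nothing from the doc), the same label can sit in different places in the two trees:

```python
# Toy illustration (mine, not from the doc): "creative work" lives under
# different parents in the two ontologies, so the goal translates badly.
human_ontology = {
    "creative work": "creation by a human-like mind",
    "creation by a human-like mind": "thing",
    "tree": "thing",
}

ai_ontology = {
    "creative work": "intricate structure",   # no "human-like mind" category here
    "tree": "intricate structure",
    "intricate structure": "thing",
}

# Under human_ontology, a tree doesn't qualify as a creative work;
# under ai_ontology, it plausibly does. Planting trees follows.
```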
I like the idea of tabooing "frame". Thanks for that.
First of all, in my life I mostly encounter:
I don't have experience with other bits of your list, so I won't comment on them.
Secondly, so far I haven't found "noticing frames" to be particularly useful for myself. I wrote Smuggled frames because at the time I wanted to be writing takes / insight porn / I don't know what to call it. I don't claim here that frames are important or unimportant in general; I just haven't noticed much of an impact on myself or others.
For what it's worth, I can come up with scenarios where noticing frames might be useful for somebody — but I don't want to be coming up with imaginary scenarios.
What I have found useful, on the other hand, is "knowing which frames I like".
I'll give two examples:
It's like... you have the topic, you have the frame, and you have the style. Topic: art. Frame: incentives. Style: history+examples.
It turns out that I don't actually care about the topic (I think?) but do care about the frame and the style. I'm happy with this approach.
Also, to answer your question about "probability" in a sister chain: yes, "probability" can be in someone's ontology. Things don't have to "exist" to be in an ontology.
Here's another real-world example:
The person's ontology is "right" and your ontology is wrong. On the other hand, your ontology is useful for you when playing the game, and their ontology wouldn't be. You don't even need to have different knowledge about the game; you both know the game is deterministic, and still it changes nothing.
Actually, let's do a 2x2 matrix for all combinations of, let's say, "probability" and "luck" in one's personal ontology:
//
Now imagine somebody who replies to this comment saying "you could rephrase this in terms of beliefs". This would be an example of a person essentially saying "hey, you should've used [my preferred ontology] instead of yours", where the preferred ontology is one that uses the concept of "belief" instead of "ontology". Which is fine!