Also, to answer your question about "probability" in a sibling thread: yes, "probability" can be in someone's ontology. Things don't have to "exist" to be in an ontology.

Here's another real-world example:

  • You are playing a game. Maybe you'll get a heart, maybe you won't. The concept of probability exists for you.
  • This person — https://youtu.be/ilGri-rJ-HE?t=364 — is creating a tool-assisted speedrun for the same game. On frame 4582 they'll get a heart, on frame 4581 they won't, so they purposefully waste a frame to get a heart (for instance). "Probability" is not a thing that exists for them — for them the universe of the game is fully deterministic.

The TAS author's ontology is "right" and your ontology is "wrong". On the other hand, your ontology is useful to you when playing the game, and theirs wouldn't be. You don't even need to have different knowledge about the game: you both know the game is deterministic, and yet that changes nothing.
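To make the TAS author's view concrete: in a deterministic game, whether a heart drops is a pure function of the frame counter (and the inputs so far), so there is nothing left for "probability" to describe. A toy sketch in Python; the drop rule here is entirely made up:

```python
def heart_drops(frame: int) -> bool:
    """Toy deterministic drop rule: a heart drops iff a frame-derived
    hash lands in the 'lucky' quarter of the range. Entirely made up."""
    return (frame * 2654435761) % 256 < 64

# The casual player summarizes this as "roughly a 25% chance of a heart".
# The TAS author just asks the function:
for frame in (4580, 4581, 4582, 4583):
    print(frame, heart_drops(frame))

# If frame 4581 comes back False and frame 4582 comes back True,
# they waste one frame on purpose. No "probability" anywhere in their model.
```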

Actually, let's do a 2x2 matrix for all combinations of, let's say, "probability" and "luck" in one's personal ontology:

  • Person C: probability and luck both exist. Probability is partly influenced/swayed by luck.
  • Person D: probability exists, luck doesn't. ("You" are person D here.)
  • Person E: luck exists, probability doesn't. If you didn't get a heart, you are unlucky today for whatever reason. If you did get a heart, well, you could be even unluckier but you aren't. An incredibly lucky person could well get a hundred hearts in a row. 
  • Person F: neither probability nor luck exists, and our lives are as deterministic as the game; using the concepts of probability or luck even internally, as "fake concepts", is useless because actually everything is useless. (Some kind of fatalism.)

//

Now imagine somebody replying to this comment with "you could rephrase this in terms of beliefs". That would be an example of a person saying, essentially, "hey, you should've used [my preferred ontology] instead of yours", in this case an ontology where "belief" does the work of "ontology". Which is fine!

I'll also give you two examples of using ontologies — as in "collections of things and relationships between things" — for real-world tasks that are much dumber than AI.

  1. ABBYY attempted to create a giant ontology of all concepts, then develop parsers from natural languages into "meaning trees" and renderers from meaning trees back into natural languages. The project was called "Compreno". If it had worked, it would've given them a "perfect" translation tool between any two supported languages without having to handle each language pair separately. To my knowledge they kept trying for 20+ years, and the project has probably died: I google Compreno every few years and there's still nothing.
  2. Let's say you are Nestle and you want to sell cereal in 100 countries. You also want to be able to say "organic" on your packaging. For each country you need to determine whether your cereal counts as "organic". That in turn means knowing, for every one of your cereal's ingredients, whether it is "organic" by each country's definition (and possibly for sub-ingredients, and so on). And there are 50 other things you also have to know about your ingredients, because of food safety regulations and the like. I don't have first-hand knowledge of this, but I was once approached by a client who wanted to build tools to help Nestle-like companies solve such problems; they told me that their tool of choice at the time was custom-built ontologies in Protege, with relationships like is-a, instance-of, etc. (A toy sketch of that kind of modeling follows below.)
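I don't know what their actual Protege schema looked like, so here is only a toy sketch (in Python rather than OWL) of the kind of is-a / instance-of bookkeeping involved; all the ingredients, countries and rules are hypothetical:

```python
# is-a: each concept points to its parent concept.
is_a = {
    "cane_sugar": "sugar",
    "sugar": "sweetener",
    "sweetener": "ingredient",
    "oat_flakes": "cereal_grain",
    "cereal_grain": "ingredient",
}

# instance-of: concrete ingredient lots used in one product.
instance_of = {
    "lot_1138": "cane_sugar",
    "lot_2042": "oat_flakes",
}

# Per-country "organic" rules, keyed by the concept they talk about (made up).
organic_rules = {
    "DE": {"sweetener": False, "cereal_grain": True},
    "US": {"sugar": True, "cereal_grain": True},
}

def ancestors(concept):
    """Walk the is-a chain from a concept up to the root."""
    chain = [concept]
    while concept in is_a:
        concept = is_a[concept]
        chain.append(concept)
    return chain

def is_organic(lot, country):
    """Apply the most specific rule that covers the lot's concept;
    return None if this country's rules say nothing about it."""
    for concept in ancestors(instance_of[lot]):
        if concept in organic_rules[country]:
            return organic_rules[country][concept]
    return None

print(is_organic("lot_1138", "DE"))  # False: the "DE" rule catches "sweetener"
print(is_organic("lot_1138", "US"))  # True: the "US" rule catches "sugar" first
```

A real tool would also have to handle sub-ingredients, multiple inheritance and conflicting rules across those 50 other regulatory dimensions, which is presumably why they reached for Protege instead of a pile of dictionaries.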
Answer by ArtyomKazak

I'll give you an example of an ontology in a different field (linguistics) and maybe it will help.

This is WordNet, an ontology of the English language. If you type "book" and keep clicking "S:" and then "direct hypernym", you will learn that book's place in the hierarchy is as follows:

... > object > whole/unit > artifact > creation > product > work > publication > book

So if I had to understand one of the LessWrong(-adjacent?) posts mentioning an "ontology", I would forget about philosophy and just think of a giant tree of words. Because I like concrete examples.
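If you'd rather poke at that tree programmatically, the same data is exposed through NLTK's WordNet interface; a minimal sketch, assuming `nltk` is installed:

```python
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)   # one-time download of the WordNet corpus

book = wn.synsets("book")[0]           # first sense of "book" (the publication)
for path in book.hypernym_paths():     # each path goes from the root down to "book"
    print(" > ".join(synset.name() for synset in path))
# One of the printed chains roughly matches the one above:
# entity > ... > object > whole > artifact > creation > product > work > publication > book
```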

Now let's go and look at one of those posts.

https://arbital.com/p/ontology_identification/#h-5c-2.1 , "Ontology identification problem":

Consider chimpanzees. One way of viewing questions like "Is a chimpanzee truly a person?" - meaning, not, "How do we arbitrarily define the syllables per-son?" but "Should we care a lot about chimpanzees?" - is that they're about how to apply the 'person' category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.

My "tree of words" understanding: we classify things into "human minds" or "not human minds", but now that we know more about possible minds, we don't want to use this classification anymore. Boom, we have more concepts now and the borders don't even match. We have a different ontology.

From the same post:

In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds.

My understanding: You learned more about carbon and now you have new concepts in your ontology: carbon-12 and carbon-14. You want to know if a "diamond" should be "any carbon" or should be refined to "only carbon-12".

Let's take a few more posts:

https://www.lesswrong.com/posts/LeXhzj7msWLfgDefo/science-informed-normativity

The standard answer is that we say “you lose” - we explain how we’ll be able to exploit them (e.g. via dutch books). Even when abstract “irrationality” is not compelling, “losing” often is. Again, that’s particularly true under ontology improvement. Suppose an agent says “well, I just won’t take bets from Dutch bookies”. But then, once they’ve improved their ontology enough to see that all decisions under uncertainty are a type of bet, they can’t do that - or at least they need to be much more unreasonable to do so.

My understanding: You thought only [particular things] were bets so you said "I won't take bets". I convinced you that all decisions are bets. This is a change in ontology. Maybe you want to reevaluate your statement about bets now.

https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit

Ontology identification is the problem of mapping between an AI’s model of the world and a human’s model, in order to translate human goals (defined in terms of the human’s model) into usable goals (defined in terms of the AI’s model).

My understanding: the AI and the human have different sets of categories. The AI can't figure out what you want it to do if its categories differ from yours. Like, maybe you have "creative work" in your ontology, as a subcategory of "creations by human-like minds". You tell the AI that you want to maximize the number of creative works and it starts planting trees. "A tree is not a creative work" is not an objective fact about trees; it's a property of your ontology. Sorry. (Trees are pretty cool.)
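A deliberately dumb caricature of that translation step, with made-up categories: the goal is stated in human categories, the AI only acts on its own categories, and someone has to supply (and keep maintaining) the mapping between the two:

```python
# Hypothetical mapping from human categories to AI-model categories.
human_to_ai = {
    "creative work": ["novel", "painting", "song"],
    "person": ["adult_human", "child_human"],
}

def translate_goal(human_category):
    """Translate 'maximize <human category>' into the AI's own categories."""
    if human_category not in human_to_ai:
        # This is roughly where "it starts planting trees" happens:
        # the AI falls back on whatever *its* ontology thinks is closest.
        raise KeyError(f"no mapping for {human_category!r}")
    return human_to_ai[human_category]

print(translate_goal("creative work"))  # ['novel', 'painting', 'song']
```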

I like the idea of tabooing "frame". Thanks for that.

//

First of all, in my life I mostly encounter:

  • (1) -- "trying to push the lens of seeing everything as coordination problems or whatever" -- mostly I'm the person doing it. E.g. one of my preferred lenses right now is "societal problems are determined by the available level of technology and solved by inventing better technology". Think Scott's post https://slatestarcodex.com/2014/09/10/society-is-fixed-biology-is-mutable/.
  • (3) -- "friend who tells you about her frames and who isn't very good at listening" -- this is also me to an extent, I think.
  • (7) -- "a pervasive standard of XYZ" -- I don't know, but... I guess? Probably?

I don't have experience with other bits of your list, so I won't comment on them.

//

Secondly, so far I haven't found "noticing frames" to be particularly useful for myself. I wrote Smuggled frames because at the time I wanted to be writing takes / insight porn / whatever you want to call it. I'm not claiming here that frames are important or unimportant in general; I just haven't noticed much of an impact on myself or others.

For what it's worth, I can come up with scenarios where noticing frames might be useful for somebody — but I don't want to be coming up with imaginary scenarios.

//

What I have found useful, on the other hand, is "knowing which frames I like".

I'll give two examples:

  1. When I ask my dad a question and he goes in the "national mentality" direction (we Slavic people are XYZ, etc.), I just explicitly say "alright, but I'm not interested in this line of thinking, I'm looking for other possible explanations", and then we don't end up having an argument. This is really good for me because I feel like shit after arguments. Previously we would have ended up in a long argument, and very often we actually did.
  2. When I'm looking for books or posts to read, I ignore ones that use frames that I'm not interested in at the moment. For example, I don't want to read about contemporary art from the artists' point of view. I want to read about "contemporary art is an investment vehicle" (see my note on Art Incorporated, which you’ll need to Ctrl+F because I broke it) or maybe lucid reflection about "how it feels to be a contemporary artist", but none of the weird stuff that I can't relate to.

It's like... you have the topic, you have the frame, and you have the style. Topic: art. Frame: incentives. Style: history + examples.

It turns out that I don't actually care about the topic (I think?), but I do care about the frame and the style. I'm happy with this approach.