I'm not clear on what the term abstraction really means. It seems to mean different things in different contexts.

I'm a programmer, and when I think of abstraction, the first thing I think of is something that feels like composition. Suppose you are a parent and want to teach your child how to brush their teeth. To do that, you break the task "brush your teeth" into subtasks like 1) prepare your toothbrush, 2) actually brush your teeth, 3) rinse your mouth and 4) clean your toothbrush. In programming, you would break the brushTeeth method into helper methods like prepareToothbrush, actuallyBrushTeeth, rinseMouth and cleanToothbrush. And you would call brushTeeth an abstraction, because it is composed of subtasks. My understanding is that this doesn't just apply to programming: in the real world, if you told your child "brush your teeth", that would also be an abstraction for "prepare your toothbrush, actually brush your teeth, rinse your mouth, and then clean your toothbrush".
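To make that concrete, here is a minimal sketch in TypeScript; the method names come from the example above, and the bodies are just placeholders:

```typescript
// "brushTeeth" is an abstraction: callers only need to know about the
// top-level method, not about the subtasks it is composed of.
function prepareToothbrush(): void { /* wet the brush, apply toothpaste */ }
function actuallyBrushTeeth(): void { /* brush for two minutes */ }
function rinseMouth(): void { /* rinse and spit */ }
function cleanToothbrush(): void { /* rinse the brush and put it away */ }

function brushTeeth(): void {
  prepareToothbrush();
  actuallyBrushTeeth();
  rinseMouth();
  cleanToothbrush();
}
```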

But there's something else that I think of when I think of the term "abstraction". Suppose I have a program that has people, dogs, and cats. All three have heights, weights and names, so I create an animal class, and I say that people, dogs and cats are more specific versions of an animal. My understanding is that this too would be an abstraction, both in programming and in the real world: the idea of an animal is an abstraction over the idea of a human, dog or cat.
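Here is a rough sketch of that second situation, again in TypeScript, keeping only the shared properties mentioned above (the unit suffixes and the example dog are made up for illustration):

```typescript
// "Animal" abstracts over people, dogs, and cats: it keeps only the
// properties all three share and says nothing about the rest.
class Animal {
  constructor(
    public name: string,
    public heightCm: number,
    public weightKg: number,
  ) {}
}

class Person extends Animal {}
class Dog extends Animal {}
class Cat extends Animal {}

// Code written against Animal works for any of the more specific versions.
function describe(a: Animal): string {
  return `${a.name}: ${a.heightCm} cm, ${a.weightKg} kg`;
}

console.log(describe(new Dog("Rex", 60, 25)));
```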

But there seems to be something very different about the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but in the second we wouldn't say that "animal" is composed of humans, dogs and cats. And we would say that the idea of an animal is a more general version of the idea of a human, a dog, or a cat, but we wouldn't say that "brush your teeth" is a more general version of preparing your toothbrush or rinsing your mouth.

The first example seems to be about composition, and the second about generality, so I'm confused as to what abstraction really is.

And if we look at yet more contexts, there seems to be more inconsistency in how the term is used. In semantics and philosophy, "abstract" means something that you can't perceive with one of your five senses; the opposite of "concrete". A tennis ball is concrete because you can see it and touch it, but "tennis" is an abstract concept that you can't see or touch. But this use of the word doesn't distinguish between degrees of abstraction. According to this use of the word, something either is abstract, or it isn't. But with the first two examples, there are clearly levels of abstraction.

And the semantics-and-philosophy sense of abstraction seems to match how the term is used in art: art is abstract when it doesn't depict concrete things. E.g. a painting that tries to depict fear is abstract, because fear is an abstract concept rather than a concrete thing, whereas a portrait of Lisa Gherardini (the Mona Lisa) is not abstract art, because it depicts a concrete thing.

As for the term "abstract" in the context of academic papers, that seems like it might just be a butchering of the term. It's a summary. If you were to summarize the task of brushing your teeth, you wouldn't say "here's a container that is composed of the subtasks". And summarizing the idea of an animal would just be a different thing from saying that it is a more general version of a human, dog or cat.

In the context of math, abstract math is a field of study. I don't know much about it, but from the googling I've done, it seems to be about, well, dealing with questions that are more abstract. More abstract in the sense of generality, like how an animal is more general than a human. For example, algebra is more abstract than arithmetic. Arithmetic says that 1 + 1 = 2, 2 + 2 = 4, and 3 + 3 = 6. Algebra says that n + n = 2n. Here n is more general than 1 in the same sense that "animal" is more general than "human". Well, it seems that way at least.
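In code, that step from arithmetic to algebra looks like replacing separate concrete facts with one definition that leaves the number unspecified (a hypothetical sketch):

```typescript
// Arithmetic: fully concrete, one fact per number.
console.assert(1 + 1 === 2);
console.assert(2 + 2 === 4);
console.assert(3 + 3 === 6);

// Algebra: one more general statement, n + n = 2n, covering every n at once.
const doubled = (n: number): number => n + n;

for (const n of [1, 2, 3]) {
  console.assert(doubled(n) === 2 * n);
}
```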

Bret Victor has an essay called Up and Down the Ladder of Abstraction where he talks about abstraction as moving away from the concrete by removing concrete properties: the more concrete properties you remove, the more abstract it becomes. He uses the example of a car moving along a curved road according to a certain algorithm. The concrete version of this is a given car on a given curved road pointed in a given direction. You can make it more abstract by looking at the system not at a given point in time, but at all possible points in time, which gives you a line describing where the car would be at every point in time.

This view of abstraction seems consistent with the idea of abstraction being about making something more general. We said "let's not look at the car being at this point in particular, let's look at all possible points more generally". With the math example we said "let's not look at the number 1 in particular, let's look at any possible number more generally". With the animal example, we said "let's not look at humans in particular, let's look at organisms with any possible [whatever] more generally".

These usages of the term "abstraction" seem to be about how general a thing is, where something that is more general has fewer properties specified, and something that is more specific has more properties specified. But if that is what abstraction is about, why call it abstraction? Why not call it generalization? And if it is indeed about specifying fewer properties, what word do we use to distinguish between concrete things in the physical world that we can detect with our senses, and things that we can't detect with our senses, which we'd call abstract in the context of semantics and philosophy?

I don't mean to imply that abstraction actually is used inconsistently. Just that it seems that way to me, and I definitely notice I am confused.

johnswentworth

I've thought a lot about this exact problem, because it's central to a bunch of hard problems in biology, ML/AI, economics, and psychology/neuroscience (including embedded agents). I don't have a full answer yet, but I can at least give part of an answer.

First, the sort of abstraction used in pure math and CS is sort of an unusual corner case, because it's exact: it doesn't deal with the sort of fuzzy boundaries we see in the real world. "Tennis" is a fuzzy abstraction of many real-world activities, and there are edge cases which are sort-of-tennis-but-maybe-not. Most of the interesting problems involve non-exact abstraction, so I'll mostly talk about that, with the understanding that math/CS-style abstraction is just the case with zero fuzz.

I only know of one existing field which explicitly quantifies abstraction without needing hard edges: statistical mechanics. The heart of the field is things like "I have a huge number of tiny particles in a box, and I want to treat them as one abstract object which I'll call "gas". What properties will the gas have?" Jaynes puts the tools of statistical mechanics on foundations which can, in principle, be used for quantifying abstraction more generally. (I don't think Jaynes had all the puzzle pieces, but he had a lot more than anyone else I've read.) It's rather difficult to find good sources for learning stat mech the Jaynes way; Walter Grandy has a few great books, but they're not exactly intro-level.

Anyway, (my reading of) Jaynes' answer to the main question: abstraction is mainly about throwing away or ignoring information, in such a way that we can still make strong predictions about some aspects of the underlying concrete system. Examples:

  • We have a gas consisting of some huge number of particles. We throw away information about the particles themselves, instead keeping just a few summary statistics: average energy, number of particles, etc. We can then make highly precise predictions about things like e.g. pressure just based on the reduced information we've kept, without having to think about each individual particle. That reduced information is the "abstract layer" - the gas and its properties.
  • We have a bunch of transistors and wires on a chip. We arrange them to perform some logical operation, like maybe a NAND gate. Then, we throw away information about the underlying details, and just treat it as an abstract logical NAND gate. Using just the abstract layer, we can make predictions about what outputs will result from what inputs. Even in this case, there's fuzz - 0.01 V and 0.02 V are both treated as logical zero, and in rare cases there will be enough noise in the wires to get an incorrect output. (A rough sketch of this in code appears just after this list.)
  • I tell my friend that I'm going to play tennis. I have ignored a huge amount of information about the details of the activity - where, when, what racket, what ball, with whom, all the distributions of every microscopic particle involved - yet my friend can still make some strong predictions based on the abstract information I've provided.
  • When we abstract formulas like "1+1=2" or "2+2=4" into "n+n=2n", we're obviously throwing out information about the value of n, while still making whatever predictions we can given the information we kept. This is what generalization is all about in math and programming: throw out as much information as you can, while still maintaining the core "prediction".
  • Finally, abstract art. IMO, the quintessential abstract art pieces convey the idea of some thing without showing the thing itself - I remember one piece at SF's MOMA which looks like writing on a blackboard but contains no actual letters/numbers. In some sense, that piece is anti-abstract: it's throwing away information about our usual abstraction - the letters/numbers - but retaining the rest of the visual info of writing on a chalkboard. By doing so, it forces us to notice the abstraction process itself.
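Going back to the NAND-gate bullet above, here is a minimal sketch of that abstraction step in TypeScript; the 0.5 V threshold is invented for illustration, not something from the answer:

```typescript
// Abstraction step: collapse a continuous voltage into a logic level,
// keeping only "low" vs "high" and throwing the exact value away.
// The 0.5 V threshold is made up for illustration.
function toLogicLevel(voltage: number): 0 | 1 {
  return voltage < 0.5 ? 0 : 1;
}

// At the abstract layer, the gate is just a truth table: we can predict
// outputs from inputs without modeling transistors or wire noise.
function nand(a: 0 | 1, b: 0 | 1): 0 | 1 {
  return a === 1 && b === 1 ? 0 : 1;
}

// 0.01 V and 0.02 V are both treated as logical zero, so NAND gives 1.
console.log(nand(toLogicLevel(0.01), toLogicLevel(0.02))); // 1
```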
10 comments

Abstraction can be understood as a relationship between models. You can have a model of what it means to "brush your teeth", and you can have models of what it means to "prepare toothbrush", etc... The models of the subtasks can be composed into a model of the whole sequence, and the abstraction relationship tells you how this model is a realization of the abstract "brush your teeth" model. Similarly, we can have a model of what an "animal" is, and models for "dogs", "humans", "cats". The abstraction relationship tells you how each of these models are realizations of the "animal" model. Removing properties is an easy way to create models with this relationship, but it's not the only way. For example, you can also replace continuous time with discrete time.

You can chain this relationship to create a hierarchy, and for humans, the most concrete models we typically use are the ones that are "modeling" our raw sensory experiences. I think that adequately explains why people use it this way, but that this isn't what abstraction is.

I tackle this question in detail in my NoUML essay.

You can think of an abstraction as an indivisible atom, a dot on some bigger diagram that on its own has no inner structure. Abstractions have no internal structure, but they are quite useful in terms of their relations to other abstractions. In your examples there are 4 outgoing composition arrows going from “brushTeeth” to 4 other abstractions (“prepareToothbrush”, “actuallyBrushTeeth”, “rinseMouth” and “cleanToothbrush”), and 3 incoming generalization arrows going from the “people”, “dogs” and “cats” abstractions to “animal”.

Now imagine you have this huge diagram with all the abstractions in your programming system and their relations to each other. Imagine also that “brushTeeth” and “animal” abstractions on this diagram are both unlabeled for some reason. You would be able to infer their names easily just by looking at how these anonymous abstractions relate to the rest of the universe.

Everyone seems to have a slightly different idea about what the term abstraction means, so here is mine.

As agents, we try to understand (and predict) the world by building mental models of it. But given that the world is vast and our brains are puny, we have to make do with extremely simplified models which also apply to as many facets of the world as possible. In mathematics, something like category theory or algebraic geometry gives you a level of abstraction that encompasses a large chunk of the underlying math in very few symbols/concepts/ideas. In programming, polymorphism is one way to abstract things, but in general any idea that lets you accomplish more with less is a good candidate for an abstraction. That's how programming languages (and human languages before that), data structures and optimal algorithms got invented, and they are all abstractions. Of course, this can go meta multiple levels, with progressively more power.

Edit: we tend to project human ideas onto the world all the time, and identifying abstract ideas with reality happens almost automatically, e.g. "numbers are a real thing!", without realizing that a differently built mind might use a completely different approach to solving the same problem. Does AlphaZero use the abstractions of opening/middlegame/endgame or attack/defense when learning to play chess better than any human? I don't know, but likely not in the same way. Moreover, a slightly different neural network, or even the same one exposed to a different sequence of stimuli, might develop a different set of abstractions internally.
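Since the comment above mentions polymorphism as one way to abstract in programming, here is a small, hypothetical sketch of that "accomplish more with less" idea: one generic definition replacing many type-specific ones.

```typescript
// Parametric polymorphism: a single abstract definition works for any
// element type, so we don't need firstNumber, firstString, firstDog, ...
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}

console.log(firstOrDefault([1, 2, 3], 0));       // 1
console.log(firstOrDefault<string>([], "none")); // "none"
```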

I think there is no reason to expect a single meaning of the word. You did a good job in enumerating uses of 'abstraction' and finding its theme (removed from specific). I don't understand what confusion remains though.

I'm not sure whether there actually is a single meaning of the word. I get the impression from hearing other people talk about it that there is a single meaning, and that I'm not understanding what that single meaning is. But if it is the case that there is in fact no single meaning, I indeed wouldn't have any confusion remaining, aside from maybe not having as good an understanding of how the different meanings relate as I would like.

I get the impression from hearing other people talk about it that there is a single meaning, and that I'm not understanding what that single meaning is

People are often wrong about that.

A person who understands this effect can use it to exploit people; when they do, it is called "equivocation": using two different senses of the same word in quick enough succession that nobody notices the words aren't really pointing at the same thing, and then using the inconsistencies between the word senses to reach impossible conclusions.

I wish I could drop a load of examples but I've never been good at that. This deserves a post. This deserves a paper; there are probably whole philosophical projects that are based on the pursuit of impossible chimeras held up by prolonged, entrenched equivocation...

One essay about a concept that is either identical to equivocation or somewhere in the vicinity (I've never quite been able to figure out which, but I think it's supposed to be subtly different) is Scott's post about Motte and Bailey, which includes lots of examples.

Regarding the first two examples: "brush your teeth" and "animal" are also quite different in another respect. The first is a task, an instruction, or a command. "Animal", on the other hand, is the name of a concept, and it can be used to form the monadic predicate "is an animal", which names a property that the members of a set (the set of animals) satisfy. Maybe the difference between composition and generalization becomes clearer (or disappears) if we compare composite/generalized predicates only or composite/generalized instructions only.

An additional point regarding the aspect of touchability: grammatically, the main difference between the terms "tennis" and "tennis ball" seems to be the fact that the former is not countable, while the latter is. Because of this, you can easily make a predicate out of the latter by applying the copula "is a": "is a tennis ball". This is not possible for "tennis": "is a tennis" doesn't make sense. So you can't associate tennis with a particular set of things which you could touch. A similar point holds for "fear". This seems to hold for most "abstract" properties: they are expressed by non-countable nouns.

(An interesting exception is the concept of a number. "Number" is a countable noun, but the concept may be called abstract nonetheless. This could hint at yet a different sense of abstractness.)

Like uncountable nouns, adjectives such as "large" are generally not countable either. They also seem intuitively to be judged abstract, especially when you convert them into properties by naming them via an (uncountable) noun: "largeness" is not a thing you could touch.

However, the case is not so clear for verbs: "walks" can be converted to a noun by speaking of a "walk", which seems to be a countable property of a process. As a consequence, it seems not particularly abstract. In conclusion, abstractness in the "philosophical" sense seems to be closely related to uncountable nouns.

Just a quick, pedantic note.

But there seems to be something very different about each of the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but we wouldn't say that "animal" is composed of humans, dogs and cats in the second.

Actually, from an extensional point of view, that is exactly how you would define "animal": as the set of all things that are animals. So it is in fact composed of humans, dogs and cats, but only partly, as there are lots of other things that are animals.

This is just a pedantic point since it doesn't cut to the heart of the problem. As johnswentworth noted, man-made categories are fuzzy; they are not associated with true or false but with probabilities, so "animal" is more like a test, i.e. a function or association between some set of possible things and [0, 1]. So "animals" isn't a set; the sets would be "things that are animals with probability p" for every p in [0, 1].

Moving from animals to another example: if you are a half-bald person, you do not belong to the set of bald people with probability 0.5. Probability is an epistemic concept, but the vagueness (fuzziness) of the concept of baldness is not epistemic, but semantic. No amount of information makes you more or less bald. Therefore, for fuzzy concepts, there is no probability of membership of a set, but a degree of membership of a set. This degree is again a number between 0 and 1, but it is not a probability. There is actually a very unpopular logic which is based on this notion of fuzzy sets: fuzzy logic. Its logical connectives behave differently from their equivalents in probability theory. E.g. commonly: A and B = MIN(A, B); A or B = MAX(A, B).
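As a tiny illustration of the difference, here is a sketch of degrees of membership and those connectives; the baldness mapping is invented for the example:

```typescript
// Degree of membership in the fuzzy set "bald people": a number in [0, 1]
// that is a degree, not a probability. The mapping is made up for illustration.
function baldnessDegree(fractionOfHairLost: number): number {
  return Math.min(1, Math.max(0, fractionOfHairLost));
}

// Common fuzzy-logic connectives, as in the comment above:
// A and B = MIN(A, B); A or B = MAX(A, B).
const fuzzyAnd = (a: number, b: number): number => Math.min(a, b);
const fuzzyOr = (a: number, b: number): number => Math.max(a, b);

const halfBald = baldnessDegree(0.5); // 0.5: half-bald is bald "to degree 0.5"
console.log(fuzzyAnd(halfBald, 0.9)); // 0.5
console.log(fuzzyOr(halfBald, 0.9));  // 0.9
```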