Russian mathematician V.I. Arnold had a semi-famous rant against taking this inversion too far. Example quote:
What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? "Oh, curse this maths" - concludes the student (who, possibly, becomes the Minister for Science in the future).
We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.
This is all the definition there is. The so-called "axioms" are in fact just (obvious) properties of groups of transformations. What axiomatisators call "abstract groups" are just groups of transformations of various sets considered up to isomorphisms (which are one-to-one mappings preserving the operations). As Cayley proved, there are no "more abstract" groups in the world. So why do the algebraists keep on tormenting students with the abstract definition?
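For concreteness, here is a minimal sketch of the Cayley embedding behind that last claim (standard material, not part of the quote):

```latex
% Cayley's theorem: every "abstract" group is isomorphic to a group of
% transformations of some set -- namely, of its own underlying set.
\documentclass{article}
\usepackage{amsmath,amssymb}
\DeclareMathOperator{\Sym}{Sym}
\begin{document}
For a group $G$, define $\varphi \colon G \to \Sym(G)$ by
\[
  \varphi(g)(x) = g x .
\]
Each $\varphi(g)$ is a bijection of $G$ (its inverse is $x \mapsto g^{-1} x$),
the map satisfies $\varphi(gh) = \varphi(g) \circ \varphi(h)$, and $\varphi$ is
injective, so $G \cong \varphi(G)$: a group of transformations of the set $G$.
\end{document}
```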
The 'art' in picking the correct theorem to use as B seems related to structural realism, i.e., figuring out where we are importing structure from, and how, as we port across representations.
Was this intended to gesture at this process:
1) Mathematics (Axioms -> Theorems), 2) Reverse Mathematics (Theorems -> sets of axioms* from which they could be proved)?
or this process:
1) See what may be proved in system A. 2) Create system B out of what was proved in system A, and prove things there.
*made as small as possible
This post describes a pattern of abstraction that is common in mathematics but that I haven't seen described in explicit terms elsewhere. I would appreciate pointers to any existing discussions, as well as more examples of this phenomenon, corrections, and other insights!
Note on prerequisites for this post: in the opening example below, I assume familiarity with linear algebra and plane geometry, so the post probably won't make much sense without at least some superficial knowledge of those subjects. The second part of the post gives further examples of the phenomenon, but they are all independent: if you haven't studied the relevant subject, just skip that example and move on to the ones you do understand.
There is something peculiar about the way the following concepts in math depend on one another: angle, length, distance, and the inner product.
In the Euclidean geometry of R2 (the plane) and R3 (three-dimensional space), one typically goes through a series of steps like this:

1. Take points, lengths, and angles as primitive geometric notions, with the distance between two points given by the length of the segment joining them.
2. Define the dot product of two vectors x and y as x · y = ||x|| ||y|| cos θ, where θ is the angle between x and y.
3. Prove theorems about the dot product: the coordinate formula x · y = x1y1 + x2y2 (plus the term x3y3 in R3), symmetry, linearity in each argument, and positive-definiteness. (The coordinate formula is derived in the sketch below.)
In other words, we take angle and distance as primitive, and derive the inner product (which is the dot product in the case of Euclidean spaces).
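Here is a minimal sketch of that derivation in R2 (the R3 case is analogous), showing how the coordinate formula for the dot product falls out of the primitives via the law of cosines:

```latex
% Deriving the dot product from the primitives (distance and angle) in R^2.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\theta$ be the angle between nonzero vectors $x$ and $y$. The law of
cosines, applied to the triangle with sides $x$, $y$, and $x - y$, gives
\[
  \lVert x - y \rVert^2
    = \lVert x \rVert^2 + \lVert y \rVert^2
      - 2 \lVert x \rVert \lVert y \rVert \cos\theta .
\]
Expanding the left-hand side in coordinates,
\[
  (x_1 - y_1)^2 + (x_2 - y_2)^2
    = x_1^2 + x_2^2 + y_1^2 + y_2^2 - 2 (x_1 y_1 + x_2 y_2),
\]
and comparing the two expressions yields
\[
  x_1 y_1 + x_2 y_2 = \lVert x \rVert \lVert y \rVert \cos\theta ,
\]
so the dot product is pinned down by distance and angle alone.
\end{document}
```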
But now, consider what we do in (abstract) linear algebra:

1. Start with an abstract real vector space V, which comes with no notion of angle, length, or distance.
2. Define an inner product on V axiomatically: a map ⟨·,·⟩ that is symmetric, linear in each argument, and positive-definite. Note that these axioms are exactly the properties we proved as theorems about the dot product above.
3. Derive length, distance, and angle from the inner product (see the sketch below).
In other words, we have now taken the inner product as primitive, and derived angle, length, and distance from it.
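Here is a minimal sketch of those definitions in an abstract inner product space, including the Cauchy–Schwarz step that makes the angle well defined:

```latex
% Recovering length, distance, and angle from the inner product alone.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
In an inner product space $(V, \langle \cdot, \cdot \rangle)$, define
\[
  \lVert x \rVert = \sqrt{\langle x, x \rangle}, \qquad
  d(x, y) = \lVert x - y \rVert, \qquad
  \theta(x, y) = \arccos \frac{\langle x, y \rangle}
                              {\lVert x \rVert \, \lVert y \rVert}
  \quad (x, y \neq 0).
\]
The angle is well defined because the Cauchy--Schwarz inequality,
$\lvert \langle x, y \rangle \rvert \le \lVert x \rVert \lVert y \rVert$,
guarantees that the argument of $\arccos$ lies in $[-1, 1]$.
\end{document}
```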
Here is a shot at describing the general phenomenon:

1. In a concrete setting, a concept A is defined (here: angle and distance in Euclidean geometry), and a theorem B is proved that relates A to some derived object (here: the dot product and its properties).
2. In a more general setting, the original definition of A is unavailable, so the statement of B is adopted as a definition or axiom (here: the inner product axioms).
3. Finally, B is used to define A itself in the general setting (here: angle, length, and distance defined in terms of the inner product).
Here is a table that summarizes this process:

| | Euclidean geometry of R2 and R3 | Abstract inner product spaces |
| --- | --- | --- |
| Primitive notions | angle, distance | inner product (specified by axioms) |
| Derived notions | dot product | angle, length, distance |
| Status of the inner product properties (the theorem B) | theorems | definition/axioms |
In what sense is this pattern of generalization "allowed"? I don't have a satisfying answer here, other than to say that generalizing in this particular way turned out to be useful and interesting. It seems to me that there is a large amount of trial and error and art involved in picking the correct theorem to use as the B in the process. I will also say that explicitly verbalizing this process has made me more comfortable with inner product spaces (previously, I just had a vague feeling that "something is not right").
Here are some other examples of this sort of thing in math. In the following examples, the step of using B to define A does not take place (in this sense, the inner product case seems exceptional; I would greatly appreciate hearing about more examples like it).