Nice intro! I agree that the cross-product should be deprecated in favor of the wedge product in almost every physical application.
I like Geometric Algebra, but I find that its proponents tend to oversell it (you aren't doing that here, I just mean in general). Which is unfortunate, since it increases the entropy (on all sides) of pretty much all discussions of it. On the other hand, it does seem to add more energy toward people learning it.
Anyway, here are some of my observations on potential blind spots. I think the way it mixes types (e.g. the geometric product of two vectors is the sum of a bivector and a scalar) sometimes adds more confusion and complexity than it removes. As an example of this from Hestenes himself, on page 18 of this he describes trying to find the right way to use GA to model kinematics (i.e. translations, rotations, and their combination: screws). At first, this seems like the perfect excuse to add a vector to a bivector and get a coherent geometric meaning out of it! However, he found that it was actually better to add some null basis elements, so that translations and rotations both end up being bivectors. Another case where I think type conflation is happening is in the identification of the dual space with the primary space; these have different physical units (but to be fair, standard math is terrible about conflating these too)!
None of this is to say that there aren't a bunch of great insights from thinking about things from the GA viewpoint! In particular, I find that thinking of spinors as exponentiated bivectors is especially enlightening! Just a note of caution about some blind spots of the community that I've noticed since first being interested in it.
Thank you for your insightful comment. The concept of a screw is new to me, so I'll have a good look at the article you shared, and I will try to think carefully about how physical units relate to types, as well as what constitutes true geometric meaning.
(casual but not informal, prerequisites are trigonometry, vectors, and complex numbers, subject as digested by a non-expert in an unrelated field of science)
Some background
How I got interested in geometric algebra
One evening I stayed after class to speak with my mechanics professor. He showed me a formulation of Maxwell's equations that looked like
∇F=J"Wait, what? Aren't there supposed to be four equations?" is a question I should have asked, had I known what a Maxwell was. A year later I had forgotten the exchange, but I happened to come across a swift introduction to geometric algebra on youtube. A comically dramatic flashback ensued and after a brief email correspondence, I ended up with Alan Macdonald's fantastic Linear and Geometric Algebra.
Motivation: representing rotations.
Hermione's hand shot up wildly from the desk right in front of her professor's.
"I know this! In Linearibus Essentia Mathematica by Grant Sanderson, chapter three, minute 3, second 48, he says that the ending coordinates of the basis vectors after a linear transformation uniquely describe the entire transformation! Put the coordinates of rotated basis vectors in a matrix, define the matrix product, and you're all set!"
Professor Vector smiled kindly, "That's spot on, Miss Granger. 10 points to Gryffindor! However, it is known that algorithmagically cheaper methods exist."
--
By the end of high school, it had been essentially imprinted upon me that there were two flavours of rotation: rotation matrices and complex numbers.
Here's how to rotate the vector [1, 0] by 30° anti-clockwise using a matrix.
- Notice that the coordinates of the basis vectors (read vertically) after a rotation by θ degrees must be

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

- Replace θ by 30° and perform matrix left-multiplication on the vector

$$\begin{bmatrix} \sqrt{3}/2 & -1/2 \\ 1/2 & \sqrt{3}/2 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \sqrt{3}/2 \\ 1/2 \end{bmatrix}$$

And here's the same rotation, now using complex numbers: treat [1, 0] as the complex number 1 + 0i and multiply by cos 30° + i sin 30°.

$$(\cos 30^\circ + i\sin 30^\circ)(1 + 0i) = \frac{\sqrt{3}}{2} + \frac{1}{2}i$$
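If you want to check that the two notations really agree, here's a quick sanity check in Python (mine, not from the book), using NumPy for the matrix version and a built-in complex number for the other:

```python
import numpy as np

theta = np.radians(30)  # 30 degrees, as in the example above

# Rotation as a matrix acting on the column vector [1, 0]
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v_matrix = R @ np.array([1.0, 0.0])

# The same rotation as multiplication by a unit complex number
v_complex = (np.cos(theta) + 1j * np.sin(theta)) * (1 + 0j)

print(v_matrix)                        # [0.866..., 0.5]
print(v_complex.real, v_complex.imag)  # 0.866..., 0.5
```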
Looking back on these notations, I feel like they do not capture the essence of what it means to rotate a vector. Matrix notation is really more adapted for robots than for humans. The issue is that unless you sit down and draw the resulting positions of the basis vectors, you cannot tell what kind of transformation is being performed just by glancing at the matrix. Complex numbers are already more compact, but it's not so obvious what the imaginary unit has to do with rotation in the first place.
Is there a better way?
Geometric Algebra
The promise of geometric algebra is the encapsulation of elementary linear transformations on points, lines, planes, and more into a useful and transparent algebra.
What follows is a short introduction to the subject, following a selection of topics in Linear and Geometric Algebra by Alan Macdonald. We'll have a look at:
1. Oriented lengths, areas, and volumes in 3D
An oriented length v, also called a vector, is an arrow. Its length, or norm, is some real number |v|. If we put two oriented lengths u and v head-to-tail, then we'll get a new oriented length u + v. We can also stretch it like this: 2v, and flip it like this: −v
An oriented area B is very similar to an oriented length. It's a segment of a plane, but we decide that it has fingers and that it's pointing in one of two orientations, drawn as a swirly arrow, ↺ or ↻. We denote B's area, which is also called its norm, by |B|. Just like with oriented lengths, we can add two oriented areas A and B together: A+B, and scale them like so: 2B.
An oriented volume T is a segment of space, or blob, with an orientation ↺ or ↻ and volume |T|. We can add blobs like T1+T2 and scale them like aT. Surprisingly, oriented volumes do not have a "direction" in 3D space. Even more surprisingly, they do in 4D. Why? (Hint: does an oriented length have a direction independent from its orientation in 1D space? What about 2D space?)
Importantly, oriented areas and volumes don't have a shape, because all we know is their orientation, direction, and norm. Therefore, in a given plane, any square is equal to any circle with the same norm and orientation. This point may seem confusing at first, but we can see it as an extension of the definition "a vector is an equivalence class of ordered pairs of points under equipollence".
2. The outer product and geometric product
Take two vectors u and v. Their outer product u∧v is the oriented area generated if the two vectors formed the sides of a parallelogram in the unique plane containing u and v. Its orientation is the opposite of the orientation of v∧u.
There's a deep relationship between the cross product × and the outer product ∧: the cross product is the vector orthogonal to the oriented area generated by the outer product, with length equal to that area's norm.
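If you'd like to see that relationship numerically, here's a small check (my own; the two vectors are arbitrary) using NumPy:

```python
import numpy as np

u = np.array([1.0, 2.0, 0.5])
v = np.array([-0.3, 1.0, 2.0])

c = np.cross(u, v)

# The cross product is orthogonal to both u and v...
print(np.dot(c, u), np.dot(c, v))  # both ~0

# ...and its length is the parallelogram area |u||v|sin(theta),
# which is exactly the norm of the oriented area u ∧ v.
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sin(theta)
print(np.linalg.norm(c), area)  # equal, up to floating point error
```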
Let's turn our attention to the shiny new geometric product of two vectors u and v:
$$uv = u \cdot v + u \wedge v$$
We define it as the sum of the inner and outer products of two vectors. Ok, so as an example, let's consider vectors as members of $\mathbb{R}^2$ and see what these three products mean component-wise.
The inner product of two vectors is the familiar dot product, which yields a scalar.
$$e_1 \cdot e_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1 \cdot 0 + 0 \cdot 1 = 0$$
The outer product of two vectors will yield a bivector which is just an oriented area. The bivector below is an oriented area of norm 1 in the plane spanned by the two standard basis vectors.
$$e_1 \wedge e_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Finally, the geometric product is the sum of the above. This sum yields a multivector, which is like a Halloween candy bag of different dimensional objects.
$$e_1 e_2 = 0 + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
Let θ be the signed angle between u and v, so θ∈[−π,π], and let's use the figure above to motivate the actual definitions of the inner and outer products.
The inner product of two vectors gives a scalar that is largest when the two vectors are pointing in the same direction (θ=0), zero when they're orthogonal (θ=π/2 or −π/2) to each other, and smallest when they're pointing in opposite directions (θ=π).
$$u \cdot v = |u||v|\cos\theta$$
The outer product of two vectors returns an oriented area, so in some sense, it is proportional to the bivector e1∧e2. When the two vectors are pointing in the same or opposite directions, the area spanned is zero. When they're orthogonal to each other, the norm is maximal. The orientation is decided by how the area is swept, which is the information we can retrieve from the sign of θ. So we have
$$u \wedge v = |u||v|\sin\theta \,(e_1 \wedge e_2)$$
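To make the component-wise picture concrete, here's a minimal sketch of the geometric product in 2D (my own code, not Macdonald's), storing a multivector as its scalar, e1, e2, and e1e2 components:

```python
import math

# A 2D multivector as a tuple of components: (scalar, e1, e2, e12).
def gp(a, b):
    """Geometric product in 2D, using e1e1 = e2e2 = 1 and e1e2 = -e2e1."""
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (
        a0*b0 + a1*b1 + a2*b2 - a12*b12,   # scalar part (contains u.v)
        a0*b1 + a1*b0 - a2*b12 + a12*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b12 - a12*b1,   # e2 part
        a0*b12 + a12*b0 + a1*b2 - a2*b1,   # e12 part (contains u^v)
    )

e1 = (0, 1, 0, 0)
e2 = (0, 0, 1, 0)

print(gp(e1, e2))   # (0, 0, 0, 1): a pure unit bivector, e1e2
print(gp(e2, e1))   # (0, 0, 0, -1): swapping the factors flips the orientation
print(gp(e1, e1))   # (1, 0, 0, 0): a basis vector times itself gives the scalar 1

# For two general vectors, the scalar part is |u||v|cos(theta) and the
# bivector part is |u||v|sin(theta), just as in the formulas above.
u = (0, 2.0, 0.0, 0)                       # length 2, pointing along e1
v = (0, math.cos(1.0), math.sin(1.0), 0)   # length 1, at an angle of 1 radian
print(gp(u, v))     # (2*cos(1), 0, 0, 2*sin(1))
```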
3. Generalizing complex numbers
The algebraic properties of complex numbers (and much more) are subsumed by geometric algebra. How? Let's look at complex numbers by trying to rebuild the imaginary unit i from the tools that we've discovered so far. We'll need two properties of the geometric product for this.
First, watch what happens when you swap the terms of a geometric product of two basis vectors.
$$e_1 e_2 = 0 + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 0 \\ 1 \end{bmatrix} = -0 - \begin{bmatrix} 0 \\ 1 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 0 \end{bmatrix} = -e_2 e_1$$
Second, the geometric product of a basis vector with itself will yield 1.
$$e_1 e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \wedge \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 + 0 = 1$$
And now, interpreting exponentiation as repeated geometric products, this falls out.
$$(e_1 e_2)^2 = e_1 e_2 e_1 e_2 = -e_1 e_1 e_2 e_2 = -1 = \mathbf{i}^2$$
Behold, the unit imaginary (now in bold). From this point onwards, anything you'd like to do with complex numbers could be done with real vectors, which is excellent. What's more, the unit imaginary now represents an oriented area which, we'll see, is a direct answer to our original question of what i is doing in rotation.
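To spell out why nothing is lost (this little check is mine, not the book's): if we identify the complex number a + bi with the multivector a + b e1e2, the geometric product reproduces ordinary complex multiplication,

$$(a + b\,e_1 e_2)(c + d\,e_1 e_2) = (ac - bd) + (ad + bc)\,e_1 e_2,$$

which is exactly the familiar rule (a + bi)(c + di) = (ac − bd) + (ad + bc)i.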
So in particular, this Euler identity holds:
$$e^{\mathbf{i}\pi} = \cos\pi + \mathbf{i}\sin\pi = -1 + 0\mathbf{i} = -1$$
And it turns out that the geometric product of two vectors even has a polar form
$$uv = |u||v|\cos\theta + |u||v|\sin\theta\,(e_1 \wedge e_2) = |u||v|(\cos\theta + \mathbf{i}\sin\theta) = re^{\mathbf{i}\theta}$$
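Here's a quick numerical check of that polar form (my own sanity check, with vectors I picked arbitrarily): take u of length 2 at 20° from e1 and v of length 3 at 80°, so the angle from u to v is 60°. The scalar and bivector parts of uv computed from components should come out to |u||v|cos 60° and |u||v|sin 60°.

```python
import math

def vec(length, degrees):
    """A 2D vector of the given length, at the given angle from e1."""
    a = math.radians(degrees)
    return (length * math.cos(a), length * math.sin(a))

u = vec(2.0, 20.0)
v = vec(3.0, 80.0)

# Component formulas for the two parts of the geometric product uv:
scalar_part = u[0] * v[0] + u[1] * v[1]      # u . v
bivector_part = u[0] * v[1] - u[1] * v[0]    # coefficient of e1^e2 in u ^ v

print(scalar_part, 2 * 3 * math.cos(math.radians(60)))    # both ~3.0
print(bivector_part, 2 * 3 * math.sin(math.radians(60)))  # both ~5.196
```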
4. Doing rotations better
So you'd like to rotate a vector. The first question to ask is: around what?
Let's consider the 2D case where some vector u is lying in the plane i=e1e2 and is being rotated counter-clockwise around the origin by an angle θ to become the vector v. What is the axis of rotation?
Well, it's hard to say! We could argue that u is rotating around some third axis poking through the center of the page, but the problem is that our space is 2-dimensional. To avoid any philosophy, let's just rotate the plane instead.
First, write down the polar form of uv and then solve for v, knowing that |u|=|v|
$$uv = |u||v|e^{\mathbf{i}\theta} \iff u^2 v = u|u||v|e^{\mathbf{i}\theta} \iff |u|^2 v = |u|^2 u e^{\mathbf{i}\theta} \iff v = u e^{\mathbf{i}\theta}$$
Nice. Multiplying a vector in the plane i on the right by $e^{\mathbf{i}\theta}$ is the action of rotating by θ, turning u into v. You can check that multiplying on the left by $e^{-\mathbf{i}\theta}$ does the same thing. So what about doing the same in 3D? What about rotating in some other arbitrary plane? Isn't this what we've all been waiting for?
Absolutely! We'll have to get our hands dirty for a moment, but we'll emerge on the other side with the most compact known algebraic representation of rotations in 3D.
First, let's generalize iθ to be any bivector of area θ, and call it an angle. Think of it like an oriented slice of pizza with the pointy end stuck at the origin. Here are some concrete examples.
$$e_3 e_1\, 30^\circ, \quad e_1(e_2+e_3)\,\pi, \quad (e_1-e_2)\,\pi/4$$
In a general rotation of a vector u about the origin, the vector may not even live on the plane of rotation. Like olives on a pizza, there's some bit of u that's stuck on the cheese and some bit that has leaped away. Geometrically, u can be written as the sum of its projection and rejection onto the angle i.
$$u = u_\parallel + u_\perp$$
We're going to introduce some small changes to the previous rotation identity. First, we'll have to consider the two components of u separately and then, bear with me, split the exponential into two halves. Also, the rejection of u will be left totally unaffected because it's in line with the rotation's axis.
$$v = u_\parallel e^{\mathbf{i}\theta} + u_\perp = u_\parallel e^{\mathbf{i}\theta/2} e^{\mathbf{i}\theta/2} + u_\perp e^{-\mathbf{i}\theta/2} e^{\mathbf{i}\theta/2}$$
We would like to send one of those halves to the other side of u so that we can recombine u∥ and u⊥. Luckily, u∥ is in the same plane as i, so $u_\parallel e^{\mathbf{i}\theta/2} = e^{-\mathbf{i}\theta/2} u_\parallel$ (and u⊥, being orthogonal to that plane, commutes with i, so it slides past its exponential unchanged).
$$v = e^{-\mathbf{i}\theta/2} u_\parallel e^{\mathbf{i}\theta/2} + e^{-\mathbf{i}\theta/2} u_\perp e^{\mathbf{i}\theta/2} = e^{-\mathbf{i}\theta/2} u e^{\mathbf{i}\theta/2}$$
That's it! Rotation by half-pizza slices on each side of u. Here's how to rotate e1 by the angle e1e3 π/2, a quarter-turn in the e1e3 plane.
$$\begin{aligned}
v &= e^{-e_1 e_3 \pi/4}\, e_1\, e^{e_1 e_3 \pi/4} \\
&= \left(\cos\tfrac{\pi}{4} + \sin\tfrac{\pi}{4}\,(e_3 e_1)\right) e_1 \left(\cos\tfrac{\pi}{4} + \sin\tfrac{\pi}{4}\,(e_1 e_3)\right) \\
&= \left(e_1/\sqrt{2} + e_3/\sqrt{2}\right)\left(1/\sqrt{2} + e_1 e_3/\sqrt{2}\right) \\
&= e_1/2 + e_3/2 + e_3/2 - e_1/2 \\
&= e_3
\end{aligned}$$
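If you'd like to check that computation on a computer, here's a small self-contained sketch (again my own, not the book's) of the geometric product in 3D, with basis blades stored as bitmasks, used to verify the half-angle exponentials on each side of e1:

```python
import math

# A multivector is a dict mapping basis blades to coefficients.
# A blade is a bitmask: bit 0 = e1, bit 1 = e2, bit 2 = e3,
# so e.g. 0b101 is the bivector e1e3 and 0b111 is the trivector e1e2e3.

def sign(a, b):
    """Sign from reordering the product of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps % 2 else 1.0

def gp(A, B):
    """Geometric product of two multivectors (Euclidean metric, so ei ei = 1)."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            blade = a ^ b
            out[blade] = out.get(blade, 0.0) + sign(a, b) * ca * cb
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
e1 = {0b001: 1.0}
left  = {0b000: c, 0b101: -s}   # e^(-e1e3 pi/4) = cos(pi/4) - sin(pi/4) e1e3
right = {0b000: c, 0b101:  s}   # e^(+e1e3 pi/4) = cos(pi/4) + sin(pi/4) e1e3

# The only surviving blade is 0b100 = e3 (coefficient ~1), matching the hand computation.
print(gp(gp(left, e1), right))
```

The bitmask trick (the blade of a product is the XOR of the blades, and the sign comes from counting swaps) is a standard way to implement geometric products; I'm only claiming it for the Euclidean signature used here.

Closing thoughts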