I wonder, what do you think about the chapter about dual spaces, dual maps, annihilator, etc.? To me it seemed not too connected with everything else, and that's bad. If I remember correctly, the author uses duality just to prove a few results and then throws duality away and never uses it again. Also in real life (numerical linear algebra, machine learning, and stuff) I am not aware of any use for those concepts.
So for “general” operators, this is always true, but there do exist specific operators for which it isn’t.
I believe when mathematicians say that in general P(x) holds, they mean that for any x in the domain of interest P(x) holds. Perhaps you want to use typical instead of general here. E.g. there is a notion called the typical tensor rank of tensors of a given shape, which means a tensor rank that occurs with non-zero probability when a random tensor of that shape is sampled.
I've used duality at least a bit in my subsequent ML classes, so I was happy that the book covered it. I do think I remember it not being used super much in the rest of the book.
The one thing the duality section was connected to was the Riesz representation theorem. Riesz states that every linear functional φ on a finite-dimensional inner product space has a unique vector f such that φ(v) = ⟨v, f⟩ for all v. It gives an isomorphism from functionals to vectors for a given inner product, as each functional is just the inner product with its vector.
It's not tied to the section on duals in the text, but the section on duals lets you appreciate the result more.
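To make that concrete, here's a toy version on R^3 with the standard inner product (my own sketch, not from the book; phi is an arbitrary made-up functional): the representing vector is recovered by evaluating the functional on an orthonormal basis.

```python
# Toy Riesz representation on R^3 with the standard inner product (my own sketch;
# phi is an arbitrary made-up linear functional). The representing vector f is
# recovered by evaluating phi on an orthonormal basis, and then phi(v) = <v, f>.
import numpy as np

def phi(v):
    return 2.0 * v[0] - v[1] + 0.5 * v[2]   # some linear functional on R^3

e = np.eye(3)                                # the standard (orthonormal) basis
f = np.array([phi(e[i]) for i in range(3)])  # f = (2, -1, 0.5)

v = np.array([3.0, 1.0, -2.0])
print(phi(v), np.dot(v, f))                  # both print 4.0
```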
I wonder, what do you think about the chapter about dual spaces, dual maps, annihilator, etc.?
Nothing, because it wasn't in the material. I worked through the second edition of the book, and the parts on duality seem to be new to the third edition.
I believe when mathematicians say that in general P(x) holds, they mean that for any x in the domain of interest P(x) holds. Perhaps you want to use typical instead of general here. E.g. there is a notion called the typical tensor rank of tensors of a given shape, which means a tensor rank that occurs with non-zero probability when a random tensor of that shape is sampled.
Thanks for that, I changed it.
For the orthogonal decomposition, don't you need two scalars? E.g. x = ay + bz. For example, in R^2, let x = (2,1) and y = (1,0). Then x − y = (1,1), and there's no way to write x as y plus a vector orthogonal to y.
Ow. Yes, you do. This wasn't a typo either, I remembered the result incorrectly. Thanks for pointing it out, and props for being attentive enough to catch it.
Or to be more precise, you only need one scalar, but the scalar is for y, not z, because z isn't given. The theorem says that, given x and y, there is a scalar a and a vector z such that x = ay + z and z is orthogonal to y.
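For reference, the scalar and the orthogonal part can be written down explicitly (standard formula, my notation):

$$a = \frac{\langle x, y\rangle}{\|y\|^2}, \qquad z = x - \frac{\langle x, y\rangle}{\|y\|^2}\,y, \qquad \langle z, y\rangle = \langle x, y\rangle - \frac{\langle x, y\rangle}{\|y\|^2}\langle y, y\rangle = 0.$$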
Love this article - I just wanted to make a quick correction to your last paragraph in the section where you talk about Chapter 6. The existence of an orthonormal basis depends intimately (at least as proved in Axler's book) on the hypothesis that the inner product space is finite-dimensional. Thanks again for writing this great article!
Thanks! And yes, accurate correction. I've edited the post.
Also, welcome to LessWrong! (Unless you've had another account before.)
Your struggles with coordinate vectors sound familiar. I remember that, in the book I used, they introduced φ_B: F^n → V, φ_B(x) = x_1 b_1 + ⋯ + x_n b_n for a basis B = (b_1, …, b_n) of V, and called x a coordinate vector of v iff φ_B(x) = v (V is left general on purpose). This explicit formulation as a simple bijective function cleared up the confusion for me. You can do neat things with it like turn an abstract map into a map from one basis to another: φ_{B′}^{-1} ∘ T ∘ φ_B for linear T: V → W.
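Here's a concrete sketch of what I mean (the helper names and the particular bases are made up for illustration): φ_B sends coordinate columns to abstract vectors, and φ_{B′}^{-1} ∘ T ∘ φ_B turns an abstract map into a map between coordinate columns.

```python
# A concrete version of the base map (helpers and bases are made up for illustration).
# phi_B sends coordinate columns to vectors; composing phi_{B'}^{-1} with T and phi_B
# turns an abstract linear map into a map between coordinate columns.
import numpy as np

B  = np.array([[1.0, 1.0],
               [0.0, 1.0]])   # columns are the basis vectors (1,0) and (1,1)
Bp = np.array([[0.0, 1.0],
               [1.0, 0.0]])   # another basis: the standard vectors, swapped

def phi(basis, coords):        # phi_B: coordinates -> vector
    return basis @ coords

def phi_inv(basis, vector):    # phi_B^{-1}: vector -> coordinates
    return np.linalg.solve(basis, vector)

def T(v):                      # some linear map R^2 -> R^2
    return np.array([[2.0, 0.0],
                     [1.0, 3.0]]) @ v

coords_B = np.array([1.0, 2.0])                 # coordinates of v w.r.t. B
coords_Bp = phi_inv(Bp, T(phi(B, coords_B)))    # coordinates of T(v) w.r.t. B'
print(coords_Bp)                                # [9. 6.]
```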
But of course, from a practical standpoint, you don't think about vectors this way. As with proofs, once you have heard a plausible argument and accepted it, there is no reason to go back to it. So, I am somewhat critical of your book not introducing matrix-vector multiplication, as this is an obvious centerpiece of linear algebra (especially considering that its historic roots lie in solving Ax=b).
That looks like it also works. It's a different philosophy I think, where LADR says "vectors and matrices are fundamentally different objects and vectors aren't dependent on bases, ever" and your view says "each basis defines a bijective function that maps vectors from the no-basis world into the basis-world (or from the basis1 world into the basis2 world)", but it doesn't insist on them being fundamentally different objects. Like if V = R^n, then a vector and its coordinate vector are the same kind of object, and you just need to know which world you're in (i.e. relative to which basis, if any, you need to interpret your vector).
I don't think not having matrix-vector multiplication is an issue. The LADR model still allows you to do everything you can do in normal LA. If you want to multiply a matrix A with a vector v, you just turn v into the n-by-1 matrix M(v,B) and then multiply two matrices. So you multiply A⋅M(v,B) rather than A⋅v. It forces you to be explicit about which basis you want the vector to be relative to, which seems like a good thing to me. If B is the standard basis, then M(v,B) will have the same entries as v, it'll just be written as a column in square brackets rather than as a list in parentheses.
Just to share my two cents on the matter, the distinction between abstract vectors and maps on the one hand, and columns with numbers in them (confusingly also called vectors) and matrices on the other hand, is a central headache for Linear Algebra students across the globe (and by extension also for the lecturers). If the approach this book takes works for you then that's great to hear, but I'm wary of 'hacks' like this that only supply a partial view of the distinction. In particular, matrix-vector multiplication is something that's used almost everywhere; if you need several translation steps to make use of it, that could be a serious obstacle. Also the base map that limerott mentions is of central importance from a category-theoretic point of view and is essential in certain more advanced fields, for example in differential geometry. I'm therefore not too keen on leaving it out of a Linear Algebra introduction.
Unfortunately I don't really know what to do about this, like I said this topic has always caused major confusion and the trade-off between completeness and conciseness is extremely complicated. But do beware that, based on only my understanding of your post, you might still be missing important insights about the distinction between numerical linear algebra and abstract linear algebra.
I honestly don't think the tradeoff is real (but please tell me if you don't find my reasons compelling). If I study category theory next and it does some cool stuff with the base map, I won't reject that on the basis of it contradicting this book. Ditto if I actually use LA and want to do calculations. The philosophical understanding that matrix-vector multiplication isn't ultimately a thing can peacefully coexist with me doing matrix-vector multiplication whenever I want to. Just like the understanding that the natural number 1 is a different object from the integer number 1 peacefully coexists with me treating them as equal in any other context.
I don't agree that this view is theoretically limiting (if you were meaning to imply that), because it allows any calculation that was possible before. It's even compatible with the base map.
Ah, I see. So vectors are treated like abstract objects and representing them in a matrix-like form is an additional step. And instead of coordinate vectors, which may be confusing, you only work with matrices. I can imagine that this is a useful perspective when you work with many different bases. Thank you for sharing it.
Would you then agree to define A⋅v := A⋅M(v,E), where E is the standard basis?
I wouldn't be heartbroken if it was defined like that, but I wouldn't do it if I were writing a textbook myself. I think the LADR approach makes the most sense – vectors and matrices are fundamentally different – and if you want to bring a vector into the matrix world, then why not demand that you do it explicitly?
If you actually use LA in practice, there is nothing stopping you from writing A⋅v. You can be 'sloppy' in practice if you know what you're doing while thinking that drawing this distinction is a good idea in a theoretical textbook.
This book has previously been discussed by Nate Soares and TurnTrout. In this... review? report? detailed journal entry? ... I will focus on the things which stood out to me. (Definitely not meant to be understandable for anyone unfamiliar with the field.) The book has ten chapters; I did most of the exercises in chapters 6 to 10. I got through that half of the book in about 3 weeks, which was nice since the previous (harder and longer) textbook on topology I worked through took me about a year.
I've previously taken two introductory courses to linear algebra and came out thinking of the field as large and very unstructured, with lots of concepts and theorems floating around disconnectedly. What is a normal matrix? I might or might not have remembered the definition, but I certainly had no idea what it is good for. Three weeks of intense study with this book has improved my understanding dramatically. Now, the field seems structured and pretty intuitive (and I certainly won't forget what a normal operator is). It is truly hard to overstate how good of a job this book does teaching the subject, compared to what I've seen before. It's probably the best textbook on anything I've read so far.
Chapter 1 introduces complex numbers and vector spaces.
Chapter 2 introduces finite-dimensional vector spaces. What stands out is just how nicely behaved they are. Every finite-dimensional real or complex vector space is isomorphic to either R^n or C^n. Take any k linearly independent vectors, and they span a subspace of dimension k. If they're not linearly independent, then one vector is in the span of the previous vectors. Any list of n linearly independent vectors (where n is the dimension of the space) makes up a basis for the entire space. Basically every result one would want to hold when looking back onto the material does actually hold.
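A quick numerical illustration of the "k linearly independent vectors span a k-dimensional subspace" point (my own sketch, nothing from the book):

```python
# Three vectors in R^3, the third of which is a linear combination of the first two,
# span only a 2-dimensional subspace; the rank of the matrix of columns detects this.
import numpy as np

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
v3 = 2 * v1 + 3 * v2                     # not linearly independent of v1, v2

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))          # 2: the span has dimension 2
```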
Like most fields in math, Linear Algebra ultimately cares about studying functions, and once again it only cares about a tiny subset of all possible functions. In this case it is linear maps, which are the focus of Chapter 3. These are functions T:V→W such that T(x+y)=T(x)+T(y) and T(αx)=αT(x) for all x,y∈V and α∈F, where V and W are vector spaces over some field F (in this book, always R or C).
Brief comparison to topology: The word "continuity" is only mentioned in the final chapter of this book, but every linear map can be shown to be continuous if the topologies on both vector spaces are given by an inner product (at least in finite dimensions). Actually, every inner product induces a norm, every norm induces a metric, every metric induces a topology, but none of the reverse steps are true. Put differently, metric topologies are particularly nice topologies, metrics from norms are particularly nice metrics, norms from inner products are particularly nice norms, and inner products are the concept studied in chapters 6 and 7. So Linear Algebra only even bothers with the super nicely behaved stuff.
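A minimal sketch of that chain for R^n with the standard inner product (my own code, just spelling out the definitions):

```python
# An inner product induces a norm, which induces a metric.
import numpy as np

def inner(x, y):
    return float(np.dot(x, y))       # the standard inner product on R^n

def norm(x):
    return inner(x, x) ** 0.5        # ||x|| = sqrt(<x, x>)

def dist(x, y):
    return norm(x - y)               # d(x, y) = ||x - y||

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(norm(x), dist(x, y))           # 2.236..., 5.0
```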
I'll now detail my biggest confusion with LA before reading this book (this is something I've even brought up explicitly, but without getting a satisfying answer): if T: R^2 → R^3 is a linear map, then it is actually possible to determine its behavior entirely by fixing bases (x,y) of R^2 and (v,w,z) of R^3 and storing six numbers
$$\begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{bmatrix}$$
which tell us that T(x) = a_{1,1}v + a_{2,1}w + a_{3,1}z and T(y) = a_{1,2}v + a_{2,2}w + a_{3,2}z. (This determines the behavior of T on all vectors because of the linearity of T and the fact that (x,y) and (v,w,z) are bases.) The matrix is dependent on the two bases chosen here (given these bases, the set of all linear maps from R^2 to R^3 is in bijection with the set of 3-by-2 matrices).
Linear maps aren't relative to bases. Vectors were introduced as elements of R^n, so not relative to bases either, but then they were introduced again in "coordinate form", where we write (a,b)_B (B is our basis) to mean "a times the first basis vector plus b times the second," and suddenly a dependence on our basis has been created. My confusion was then: is the B in the index just a reminder that (a,b) is understood to be relative to that basis, or is it a function that maps the vector (a,b) to the vector ax+by? In the former case, how should one differentiate between coordinate vectors and non-coordinate vectors? (We've used the same notation.) And how do we specify the vectors in a basis? If I write (1,0) and that means "1 times the first basis vector, 0 times the second" while trying to specify my basis, then clearly that's no good. And the latter case is even worse: now matrix-with-vector multiplication is ill-defined. Consider the standard basis B = ((1,0),(0,1)) on R^2 and the basis B′ = ((0,1),(1,0)) with the two vectors swapped. Then we have
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix}_B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix}_{B'} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}_{B'} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}_B$$
But, clearly
$$\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix}_B = \begin{pmatrix} 1 \\ 0 \end{pmatrix}_B$$
So we have a flat out contradiction. And it gets worse yet if A is a matrix and one defines a linear map f:V→V by f:v↦Av, which was actually done in one of my past lectures.
So that was the problem. In contrast, here's what the book does. Vectors are elements in R^n, not relative to bases. Maps are functions R^n → R^m, likewise not relative to bases. Matrices are a bunch of numbers put into an array, and one defines the matrix of a linear map T, denoted M(T,B,B′), in the way I've described above (so relative to two bases). Most importantly, there is no matrix-vector multiplication. There is matrix-matrix multiplication, which is of course fine because both matrices are relative to bases if they come from linear maps. To "apply" a matrix to a vector, one defines, given a vector v and a basis B, the matrix of a vector M(v,B) as the unique n-by-1 matrix
$$\begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}$$
whose entries are the scalars needed to write v as a sum of basis vectors of B. Then one proves that M(T,B,B′) ⋅ M(v,B) = M(T(v),B′). Note the square brackets: the book uses ( ) brackets for lists (lists are the same as tuples, i.e. elements of R^n, and vectors are lists), and [ ] brackets for matrices only, which is a solid convention. Does this approach resolve this problem to full satisfaction and without introducing any new difficulties? I think so.
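To see the identity M(T,B,B′)⋅M(v,B) = M(T(v),B′) in action, here's a small numpy sketch (the helper functions and the particular bases are mine, for illustration only; the book of course works abstractly):

```python
import numpy as np

def M_vec(v, B):
    """The n-by-1 matrix of the vector v w.r.t. basis B (columns of B are the basis
    vectors): the column of scalars needed to write v as a sum of basis vectors."""
    return np.linalg.solve(B, v).reshape(-1, 1)

def M_map(T, B, Bp):
    """The matrix of the linear map T w.r.t. bases B (domain) and Bp (codomain):
    column j holds the Bp-coordinates of T applied to the j-th basis vector of B."""
    return np.hstack([M_vec(T(B[:, j]), Bp) for j in range(B.shape[1])])

T = lambda x: np.array([[1.0, 2.0],
                        [0.0, 1.0],
                        [3.0, 0.0]]) @ x      # a linear map R^2 -> R^3

B  = np.array([[1.0, 1.0],
               [0.0, 2.0]])                   # a basis of R^2 (columns)
Bp = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 1.0],
               [0.0, 0.0, 1.0]])              # a basis of R^3 (columns)

v = np.array([2.0, -1.0])
print(np.allclose(M_map(T, B, Bp) @ M_vec(v, B), M_vec(T(v), Bp)))   # True
```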
Chapter 4 introduces polynomials. It doesn't have much LA in it, but understanding polynomials well is important for LA, because polynomials can be applied to operators (those are linear maps of the form T: V → V, i.e. from a vector space into itself). This is so because, given T: F^n → F^n, we can define (cT)(x) = c⋅T(x) for c ∈ F, T^2(x) = T(T(x)), and T^0 = I (the identity map).
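As a concrete (made-up) example of applying a polynomial to an operator, here is p(X) = 2 − 3X + X^2 applied to a 2-by-2 matrix:

```python
# p(T) = 2I - 3T + T^2 for a small example matrix; the result is itself an operator.
import numpy as np

T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
p_of_T = 2 * np.eye(2) - 3 * T + np.linalg.matrix_power(T, 2)
print(p_of_T)        # [[0. 2.], [0. 2.]]
```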
The other special thing about the book – and the justification the author gives for calling it "Linear Algebra Done Right" – is that determinants aren't introduced until the final chapter. This is very unusual: most courses, such as the lectures I took, frequently use determinants in proofs. But all of these proofs can be done without them. And they tend to be fairly simple, too; only a couple are longer than a page.
Doing this, one finds that there are striking parallels between properties of polynomials and properties of linear vector spaces, which seems a lot better for developing a useful intuition than what one gets by doing determinant-based proofs. The central theorem about operators introduced in Chapter 5, for example, states that every operator on a finite-dimensional complex vector space has an eigenvalue (that is, a scalar a ∈ F such that there exists a nonzero vector v with Tv = av). This can be proven by essentially applying the fundamental theorem of algebra. One chooses any nonzero vector x ∈ V and constructs the list (x, Tx, T^2 x, ..., T^n x), where n = dim(V). These vectors cannot be linearly independent because they are n+1 many, so there exist scalars α_j, not all zero, such that ∑_{j=0}^n α_j T^j x = 0. Now (or rather, after doing some preliminary work) the fundamental theorem of algebra kicks in, and we can reformulate this as (T − β_1 I)⋯(T − β_t I)x = 0, where the β_j are the roots of the polynomial p(X) = ∑_{j=0}^n α_j X^j. Since this product maps the nonzero vector x to 0, it isn't injective, so one of the factors (T − β_j I) isn't injective, and therefore T has the respective β_j as an eigenvalue. This is done properly in the book in relatively little space. It also shows why the same theorem fails to hold on real vector spaces: the reformulation of the polynomial to (T − β_1 I)⋯(T − β_t I)x = 0 might no longer be possible.
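Here's a rough numerical re-enactment of that argument (entirely my own sketch – numerically one would just call an eigenvalue routine, but this mirrors the steps of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # operator on C^4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)             # any nonzero vector

# The n+1 vectors x, Tx, ..., T^n x must be linearly dependent.
K = np.column_stack([np.linalg.matrix_power(T, j) @ x for j in range(n + 1)])

# Coefficients alpha_j (not all zero) with sum_j alpha_j T^j x = 0: a null vector of K.
alpha = np.linalg.svd(K)[2][-1].conj()
assert np.allclose(K @ alpha, 0, atol=1e-8)

# The roots of p(X) = sum_j alpha_j X^j are the candidate eigenvalues; at least one
# factor (T - beta*I) must fail to be injective, i.e. be (numerically) singular.
betas = np.roots(alpha[::-1])
print(min(abs(np.linalg.det(T - b * np.eye(n))) for b in betas))      # ~0
```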
Chapter 6 introduces inner products. These are the generalized formalization underlying the concept of the angle between vectors. There is a famous inequality, called the Cauchy-Schwarz inequality, which states that |⟨x,y⟩| ≤ ||x||⋅||y||. The book gives a very nice proof for this that I find easier to remember than the proofs I've seen previously. It is based on the orthogonal decomposition of a vector, where if x and y are any nonzero vectors, then x equals ay + z for some vector z and some scalar a such that y and z are orthogonal to each other. I hadn't heard of this before and it's quite useful. If one remembers the formula for the orthogonal decomposition, plugs it into the inner product and does the obvious things, the Cauchy-Schwarz inequality pops out at the other end.
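A quick numerical sanity check of the decomposition and the inequality (my own sketch; x and y are arbitrary):

```python
# x = a*y + z with z orthogonal to y, and |<x,y>| <= ||x|| * ||y||.
import numpy as np

x = np.array([3.0, -1.0, 2.0])
y = np.array([1.0, 4.0, 0.0])

a = np.dot(x, y) / np.dot(y, y)      # the scalar in the orthogonal decomposition
z = x - a * y                        # the part of x orthogonal to y

print(np.dot(z, y))                                              # ~0
print(abs(np.dot(x, y)), np.linalg.norm(x) * np.linalg.norm(y))  # 1.0 <= 15.42...
```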
The absolute nicest things ever studied anywhere are orthonormal bases. These are lists of vectors (e_1, ..., e_n) such that a) they are all linearly independent, b) they span the entire space (follows from a) if there are dim(V) many), c) they are all orthogonal to each other and d) they all have length 1. Because this is LA, orthonormal bases always exist for finite-dimensional inner-product spaces and there is even an explicit algorithm to construct one from any existing non-orthonormal basis. Something really cool that the book does with this (which also requires a neat theorem about the minimal distance of a vector to a subspace) is to construct the polynomial of degree at most 5 that is the most similar to the sin function on the interval [−π,π] (based on some reasonable definition of similarity involving integrals). I say this is really cool because I found it to be a surprisingly strong result, and a surprising kind of result coming out of linear algebra, given that there is really nothing linear about the sin function. On the interval [−π,π], this approximation looks indistinguishable from sin to the naked eye.
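Here's a numerical sketch of that sin example (not the book's Gram-Schmidt construction; this solves the equivalent normal equations for the orthogonal projection of sin onto the polynomials of degree at most 5, with the inner product ⟨f,g⟩ = ∫_{−π}^{π} f(x)g(x) dx):

```python
import numpy as np

deg = 5
xs = np.linspace(-np.pi, np.pi, 20001)
dx = xs[1] - xs[0]

# Gram matrix G[i, j] = <x^i, x^j>, computed analytically.
G = np.array([[0.0 if (i + j) % 2 else 2 * np.pi ** (i + j + 1) / (i + j + 1)
               for j in range(deg + 1)] for i in range(deg + 1)])

# Right-hand side b[i] = <x^i, sin>, computed numerically.
b = np.array([np.sum(xs ** i * np.sin(xs)) * dx for i in range(deg + 1)])

# Solving G c = b gives the coefficients of the projection, i.e. the best approximation.
c = np.linalg.solve(G, b)
p = np.polynomial.Polynomial(c)
print(np.max(np.abs(p(xs) - np.sin(xs))))   # tiny: the two graphs look identical
```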
Back to orthonormal bases: what their existence means is that, whenever one tries to prove anything involving inner product spaces, one can simply say something like, "let {e_1, ..., e_n} be an orthonormal basis for V and let {e′_1, ..., e′_m} be an orthonormal basis for W", and then proceed to reduce whatever argument one needs to the basis elements. This feels very powerful, even if arbitrarily difficult problems exist regardless.
With Chapter 7, we're getting to the big classification theorems. LA wants to study linear maps, and in this chapter it classifies all of the linear maps such that there exists an orthonormal basis relative to which they have diagonal form. This is the spectral theorem. These are the normal operators on complex vector spaces, and the self-adjoint operators (which is a strictly stronger property) on real vector spaces. This highlights another recurring pattern: there are often two versions of each result, one super nice one about complex vector spaces, and a less nice, more cumbersome one about real vector spaces. This makes a lot of sense given that C is far more nicely behaved than R (this is kind of why we defined complex numbers in the first place). More specifically, it usually comes down to the fact that every polynomial of degree ≥1 has a root in C, but not necessarily in R, just like it did in the aforementioned result on the existence of eigenvalues. The book also draws an interesting parallel between self-adjoint operators and real numbers.
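A numerical illustration of the real (self-adjoint) case (my own sketch): a symmetric matrix has an orthonormal basis of eigenvectors, i.e. it is diagonal relative to an orthonormal basis.

```python
# A = Q diag(w) Q^T with Q orthogonal: diagonal form in an orthonormal eigenbasis.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T                                   # a self-adjoint operator on R^4

w, Q = np.linalg.eigh(A)                      # eigenvalues, orthonormal eigenvectors
print(np.allclose(Q @ np.diag(w) @ Q.T, A))   # True
print(np.allclose(Q.T @ Q, np.eye(4)))        # True: the eigenvector basis is orthonormal
```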
In Chapters 8 and 9, we return to the study of eigenvalues. Basically, the problem is as follows: given an operator T: V → V, one would like to find a decomposition of V into smaller subspaces V_1, ⋯, V_n such that a) each vector v ∈ V can be uniquely written as a sum of one vector from each subspace, i.e. v = v_1 + ⋯ + v_n with v_k ∈ V_k, and b) the operator T maps each subspace V_k into itself, i.e. T: V_k → V_k. If this is achieved, then the behavior of T is determined entirely by the behavior of the T|_{V_k} (i.e. T restricted to V_k); and this is great because the T|_{V_k} are much easier to handle than T. In fact, if the V_k are one-dimensional vector spaces (i.e. just straight lines, if interpreted geometrically), then T|_{V_k} just multiplies each vector with a scalar. In other words, T|_{V_k} v_k = T v_k = a v_k. This is of course just the equation saying that a is an eigenvalue of T with nonzero eigenvector v_k, and so that's the reason why eigenvalues are of interest. Conversely, given an eigenvalue a with eigenvector v_k, the corresponding one-dimensional subspace is simply {c v_k | c ∈ F}.
As stated before, every operator on a complex vector space has some eigenvalue. But does it have n different eigenvalues, so that V = V_1 ⊕ ⋯ ⊕ V_n? (The symbol ⊕ means "direct sum".) In "most" cases, the answer is yes – by which I mean that, if one generates an operator randomly (by sampling the coefficients of its matrix relative to the standard basis randomly out of some large finite set), then as one increases the size of that set (for example by switching from 32 bit to 64 bit to 128 bit floating point arithmetic and so on), the probability of generating an operator for which this doesn't work converges to 0. And I believe it is also possible to define a non-discrete probability space such as [0,1]^{n^2}, give it the uniform distribution, and then prove that the set of operators for which this doesn't work has probability mass 0.
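Here's a crude Monte-Carlo version of that claim (entirely my own experiment, not from the book): sample matrices with entries from a finite set and estimate how often two eigenvalues (nearly) coincide; the fraction should shrink as the set grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def repeated_fraction(s, trials=2000, n=3):
    """Fraction of random n-by-n matrices with entries from {0, ..., s-1} whose
    eigenvalues are not all distinct (up to a small numerical tolerance)."""
    bad = 0
    for _ in range(trials):
        A = rng.integers(0, s, size=(n, n)).astype(float)
        eig = np.linalg.eigvals(A)
        d = np.abs(eig[:, None] - eig[None, :])
        d[np.eye(n, dtype=bool)] = np.inf          # ignore each eigenvalue's distance to itself
        if d.min() < 1e-6 * max(1.0, np.max(np.abs(eig))):
            bad += 1
    return bad / trials

for s in (2, 4, 16, 256):
    print(s, repeated_fraction(s))
```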
So for "typical" operators, this is always true, but there do exist specific operators for which it isn't. One such example is the operator T whose matrix wrt the standard basis is [1011]. This operator has the eigenvalue 1 but the only eigenvectors are of the form (0,x). Thus there exists one one-dimensional subspace that is invariant under T, namely {(0,x)|x∈F}, but not a second one, so there isn't any decomposition of V=C2 into two invariant subspaces.
To remedy this, Chapter 8 introduces generalized eigenvectors. Observe that the property Tx = ax can also be written as (T − aI)x = 0 (where I is the identity map) or as (T|_S − aI|_S) = 0, where S = {cx | c ∈ F}, i.e. S is the one-dimensional vector space containing x. The equation above then says that, if we take T, restrict it to S and subtract a times the identity operator, we get the zero operator (so the 0 on the right denotes the null map Z: S → {0}). This reformulation leads to the non-obvious definition of a generalized eigenvector: given an eigenvalue a of T, a generalized eigenvector corresponding to a is a nonzero vector x such that (T − aI)^k x = 0 for some k ∈ N. Which is to say, subtracting a times the identity operator from T doesn't have to send x to 0 immediately, but it does have to send x to 0 if the resulting operator is applied several times.
In our example above, only vectors of the form (0,x) are eigenvectors, but all vectors are generalized eigenvectors, because T − 1·I is the operator whose matrix w.r.t. the standard basis is $\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, and this operator equals the null operator if it is applied twice, i.e. (T − 1·I)^2 = 0.
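Checking this example numerically (T is the matrix from above):

```python
# T has eigenvalue 1 with only one independent eigenvector, but (T - 1*I)^2 = 0,
# so every vector of C^2 is a generalized eigenvector for the eigenvalue 1.
import numpy as np

T = np.array([[1.0, 0.0],
              [1.0, 1.0]])
N = T - np.eye(2)

print(np.linalg.eigvals(T))    # [1. 1.]
print(N @ N)                   # the zero matrix
```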
If all of the above is done properly, then one gets something that comes very close to the perfect decomposition for any operator on a complex vector space. This is the theory behind the famous Jordan form for matrices: it is not quite a diagonal matrix (which corresponds to the perfect decomposition), but it almost is. It just has a bunch of additional 1s to deal with the generalized eigenvectors.
Chapter 9 then does a similar thing for real vector spaces. As always, it's more cumbersome and the result is weaker, but it's still quite strong.
At last, Chapter 10 introduces determinants, and it's actually kind of boring! All of the interesting LA has already been done, so this chapter feels like a mere afterthought. Again, one doesn't seem to need determinants to do basic LA.
The next big item on my list is Understanding Machine Learning, which is also from MIRI's guide. I've so far neglected trying to get familiar with actual AI research, and it's time to remedy that, or at least get a solid grasp of the basics.