Geometric Algebra was a complete mystery to me until I finally realized: it is just polynomial multiplication, but with some quantities for which the order of multiplication matters, and which have a weird multiplication table: i*i = 1, i*j = -j*i. That's it. Most intros present the geometric product of two vectors:
(x1*i + y1*j) * (x2*i + y2*j)
as some deep mysterious thing, but it's just the same FOIL polynomial multiplication you learned in freshman algebra:
(x1*i + y1*j)(x2*i + y2*j) = x1*x2*i*i + x1*y2*i*j + y1*x2*j*i + y1*y2*j*j
= (x1*x2 + y1*y2) + (x1*y2 - y1*x2)*i*j
The quantity in the first set of parentheses above is our old familiar dot product. The quantity in the second set is our old friend the cross product, but expressed in a new dimension whose basis is i*j, and which--unlike the cross product--generalizes to any number of dimensions. In GA it's called the "wedge product".
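To see that concretely, here's a minimal Python sketch (the function name and tuple representation are mine, not any standard library): it just FOILs two 2D vectors under the rules i*i = j*j = 1 and j*i = -i*j, and returns the scalar (dot) part and the coefficient of i*j (the wedge part).

    def geometric_product_2d(x1, y1, x2, y2):
        # FOIL (x1*i + y1*j)(x2*i + y2*j) with i*i = j*j = 1 and j*i = -i*j
        scalar = x1 * x2 + y1 * y2   # dot-product part
        wedge = x1 * y2 - y1 * x2    # coefficient of the bivector i*j
        return scalar, wedge

    # (1*i + 2*j)(3*i + 4*j) = 11 + (-2)*i*j
    print(geometric_product_2d(1, 2, 3, 4))  # (11, -2)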
Once you "get" that, you find that doing things like deriving rotation formulas, etc, become easy, because you can apply all the skills you developed in algebra to solving geometric problems.
Your comment is interesting to me. A few days ago someone asked why math classes don't teach the how and why, and instead tend to just present the formulas for the operations and tell people to compute. Here you are focusing on what the operations do rather than on why they work. It's an interesting contrast, and one that teachers of mathematics have to balance. The two questions,
How does it work?
Why does it work?
Can’t always both be answered well in a given course.
// focusing on what the operations do rather than why //
YMMV, of course, but in general, I always found it easier to understand the why once I understood the what and the how, rather than the other way around.
Yes. Usually, though, when someone complains about the teaching of mathematics they say we focus too much on how to do the operations and not enough on why they work the way they do. I agree it is much easier to understand why after knowing how.
Where can I use it?
Relevant:
https://www.smbc-comics.com/comic/why-i-couldn39t-be-a-math-...
This is because "how does it work" and "why does it work" are Fourier transforms of each other!
Lecturers typically know the how and why quite well, but teaching these points takes a lot of time (you also have to explain a lot about the practical application before you can explain why this mathematical structure is helpful for the problem).
Since lecturers are typically very short of time in lectures, they teach the mathematics in a concise way and trust the students to be adults, capable of going to the library and reading textbooks about the how and why by themselves if they are interested. At least in Germany, there is the clear mentality that if you are not capable of doing this, a university is the wrong place for you; you would be better off getting vocational training instead.
I think there's a big difference between "presenting a formula" and "presenting rules for doing calculations". People are very good at extrapolating complicated results from a few simple rules: that's why taking derivatives and doing (elementary) integrals is fairly easy and also fairly easy to remember a long time after you've taken a calculus course. On the other hand, a bunch of literal miscellaneous formulas is very hard to hold on to --- for instance that's how introductory physics is taught, a bunch of disjoint relationships that you have to make sense of in your mind to make any use of.
In fact all anyone really wants for vector calculus is a bunch of "tools" they can use that will generally give the right answer if applied mindlessly. I think that's why GA is relatively popular, because it says how to do basic geometric operations (rotations, reflections, etc) without any thought.
One of the things that it takes the longest to learn in mathematics is that most things are defined in the dumbest way possible.
In particular, if (over a vector space V) you want to define a bilinear product m:V x V -> V, this is exactly the same thing as deciding on m just over pairs of basis vectors. If I were to call that "the universal property of the tensor product", you'd probably say "uh huh".
It's annoyingly one of those things that once you understand, you can't see how you didn't understand it before. e.g. the tensor algebra (aka free algebra) over a vector space is "just" the dumbest possible multiplication: if i and j are basis vectors, then i*j = i⊗j. No further simplification possible. j*i*i*j = j⊗i⊗i⊗j, etc. with associativity and distributivity and linearity: (5i+3j)*k = 5i⊗k + 3j⊗k, etc.
Then, if you have a dot product, a Clifford algebra is "just" the tensor algebra with a single simple extra reduction rule: For a vector v, v*v = v⋅v. So now e.g. `kjiijl = kj(i⋅i)jl = kjjl = k(j⋅j)l = kl = k⊗l`, which can't be reduced further.
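If it helps, here is a small Python sketch of that reduction (my own toy code, assuming an orthonormal Euclidean basis, so v⋅v = 1 for every basis letter): blades are strings of basis letters, adjacent equal letters contract away, and swapping two distinct letters flips the sign (that's the ij = -ji fact derived in the next paragraph).

    def clifford_mul(a, b):
        # Multiply two blades given as strings of orthonormal basis letters,
        # e.g. clifford_mul('kji', 'ijl'). Returns (sign, canonical_word).
        # Rules: e*e = e.e = 1 (orthonormal), and swapping two distinct
        # letters flips the sign (ij = -ji, see the footnote below).
        word, sign = list(a + b), 1
        changed = True
        while changed:
            changed = False
            for n in range(len(word) - 1):
                if word[n] == word[n + 1]:       # v*v = v.v = 1: contract
                    del word[n:n + 2]
                    changed = True
                    break
                if word[n] > word[n + 1]:        # put letters in canonical order
                    word[n], word[n + 1] = word[n + 1], word[n]
                    sign = -sign                 # each swap flips the sign
                    changed = True
                    break
        return sign, ''.join(word)

    print(clifford_mul('kji', 'ijl'))  # (1, 'kl') -- the kjiijl example above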
The real magic is that it turns out you can prove[0] that ij=-ji for two basis vectors i,j, so e.g. (ij)^2 = ijij = -ijji = -ii = -1. So `ij` is a square root of -1, as is `ik` and `jk` (but they're different square roots of -1), and you get things like complex numbers or quaternions for free, and suddenly a ton of geometry appears (with `ij` corresponding to a chunk of plane in the same way a basis vector `i` is a chunk of line/arrow. `ijk` becomes a chunk of volume. etc.).
But it's all "just" the dumbest possible multiplication plus v*v = v⋅v.
[0] Proof: (i+j)^2 = i^2+ij+ji+j^2 = 1+ij+ji+1 = 2+ij+ji. But also (i+j)^2 = (i+j)⋅(i+j) = i⋅(i+j) + j⋅(i+j) = 2. So 2 = 2+ij+ji, so ij=-ji.
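Incidentally, the toy clifford_mul sketch above (my own code, not a library) agrees with this:

    print(clifford_mul('i', 'j'))    # (1, 'ij')
    print(clifford_mul('j', 'i'))    # (-1, 'ij')  i.e. ji = -ij
    print(clifford_mul('ij', 'ij'))  # (-1, '')    i.e. (ij)^2 = -1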
In your proof, doesn't i⋅(i+j) + j⋅(i+j) = (1+ij) + (ji+1) = 2 + ij + ji, not 2?
What am I missing? I think ij = -ji is an independent axiom.
It's dot products there, which are also distributive. So i⋅i + i⋅j + j⋅i + j⋅j = 1 + 0 + 0 + 1 = 2.
We got dot products from the fact that v^2 = v⋅v for any vector v (so in particular, i+j). Then dot products are linear so you can expand that out. So basically the proof compares using FOIL on geometric products to FOIL on dot products.
Note that `ij` is not a vector; it's a tensor (or "multivector" and in particular a "blade" in the GA lingo). The dot product reduction only applies to vectors from the original space. But i, j, and i+j are vectors.
For simplicity and practically this is all over real vector spaces. You can make similar definitions with other fields, but you have to be careful when working mod 2.
For simplicity I'm also using an orthonormal basis. You also need to be a bit more careful if working with e.g. special relativity where your timelike basis vector t has t⋅t=-1 (though you can see things still work).
Pedantic nitpick: it's not the dumbest possible multiplication; it's the dumbest possible bilinear multiplication. There are "freer" free products on vectors than the tensor product; the freest possible one is their Cartesian product as sets, which just makes a tuple out of them: ij = (i, j). If you regarded the resulting set as a vector space, the product would not be bilinear: for instance i*0 + 0*j = (i, 0) + (0, j), which is neither zero (as bilinearity would require) nor (i, j). The tensor product is the freest product that is well-behaved as a vector space, in the sense that it respects the underlying scalar operations of its arguments.
In another comment, they pointed out https://youtu.be/htYh-Tq7ZBI?si=lOmsCL2DoqUCQgh1&t=1540 in which Freya did an excellent job reducing the number of axioms to 1: If you define multiplying a vector by itself to be equal to the length of the vector, squared, then everything else falls out of simple polynomial multiplication. It's quite lovely.
"contraction axiom" for those looking to google
Or "Clifford algebra" for the generic mathematical structure.
The second term is not a cross product. It's the exterior product (a bivector). The cross product only works in 3D; the exterior product works in any number of dimensions.
The cross product of two 3D vectors is another vector, perpendicular to the plane containing the two vectors. The exterior product is a 2-vector (hence "bivector") that sweeps out the parallelogram between the two vectors; it lies in the plane containing them. In 3D, the cross product vector is perpendicular to the bivector's plane.
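To make the 3D correspondence concrete, here's a tiny Python sketch (helper names are mine): the three components of the bivector a ∧ b, taken on the basis (e_yz, e_zx, e_xy), are numerically the same as the components of the cross product a × b; they just name a plane instead of a normal vector.

    def cross(a, b):
        ax, ay, az = a; bx, by, bz = b
        return (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx)

    def wedge3(a, b):
        # Components of the bivector a ^ b on the basis (e_yz, e_zx, e_xy)
        ax, ay, az = a; bx, by, bz = b
        return (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx)

    a, b = (1, 2, 3), (4, 5, 6)
    print(cross(a, b))   # (-3, 6, -3)
    print(wedge3(a, b))  # same numbers, but naming planes, not a normal vector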
The exterior product is the "correct" version of the cross product. I'm OK with the good guys seizing the name.
The cross product works in dimension 2^k - 1 for k in 0, 1, 2, 3 (real/singleton, complex/duonion, quaternion, octonion). It is trivial in 0D and 1D, defined uniquely in 3D, and unique up to some arbitrary negations in 7D.
https://en.m.wikipedia.org/wiki/Seven-dimensional_cross_prod...
Thanks for sharing that intuition