This Week I Learned: May 13, 2022

So, I’m a PhD student in CS, and I was feeling a bit embarrassed about being rusty on my calculus and various aspects of high-school algebra and analysis-adjacent material. I haven’t needed to think about complex numbers for almost fifteen years now. I had a decent grasp of them at the time, but a few things, like the geometric interpretation of multiplication and the meaning of the complex conjugate, always felt more like “dumb accidents” shaking out of the algebra, at least the way I was taught in high school.

I’ve also spent some time working through an abstract algebra textbook on my own, when I’m not too busy with grad school. I like group theory a whole lot, but since I only work with fairly straightforward groups when I make my own examples, I never truly grasped why group actions and representation theory are a big deal. The correspondences I could work out were fairly obvious ones.

Anyway, I was lying awake in bed a few nights ago, trying to remember the geometric interpretation of the multiplication of complex numbers. The next day I sat down in a coffee shop with a pencil and a notebook and started playing around. Once I noted the similarity between multiplying by i and a specific linear transformation, I realized I could construct an isomorphism between C under multiplication and a specific subgroup of GL(2,R), and was delighted to discover I’d worked out the matrix-based definition of complex numbers on my own (I had never seen or heard of it!).
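A quick sketch of that matrix-based definition, for anyone who hasn't seen it: a + bi corresponds to the 2×2 real matrix [[a, −b], [b, a]], and matrix multiplication then reproduces complex multiplication (the helper names here are just mine):

```python
def to_matrix(z):
    """Represent a + bi as the 2x2 real matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    """Ordinary 2x2 matrix multiplication."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
# The map is a homomorphism: the matrix of z*w is the product of the matrices.
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
# Multiplying by i is exactly the 90-degree rotation matrix.
assert to_matrix(1j) == [[0.0, -1.0], [1.0, 0.0]]
```

The image of this map is precisely the rotation-and-scaling subgroup of GL(2,R) mentioned below, which is what makes the geometric picture of multiplication pop out.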

Thinking about multiplying complex numbers in terms of composing linear transformations (in particular, the abelian subgroup of GL(2,R) consisting of rotations and scalings) immediately cleared up a whole bunch of things for me. The relationship between the algebraic and geometric views of the complex conjugate shakes out immediately, which made me happy and made the definition of the multiplicative inverse seem much less arbitrary than 19-year-old me felt it was. Also, I was like “ohhh, that’s what representation theory is about.”

At another level, I learned not to be embarrassed about reviewing seemingly elementary stuff, because revisiting them from a more sophisticated perspective is surprisingly fruitful!

This week I learned how to construct a functor from an abelian category to the category of pointed sets with the following properties:

* a morphism is a monomorphism/epimorphism if and only if the corresponding function is injective/surjective;
* it respects the notions of kernels and images;
* it is faithfully exact;
* it allows us to verify whether a diagram commutes by working with elements of pointed sets.

Notably, this is much weaker than the Freyd–Mitchell embedding theorem, because there's not much structure in the category of pointed sets. Furthermore, this functor is neither full nor faithful (though I still need to check this). I do want to pursue changing the codomain of the functor to the category of abelian groups; that should really test my understanding of colimits while giving us a slightly nicer functor.

I was trying to optimize my implementation of the Ackermann function. The old implementation used tail-call elimination, memoization, and a partial table of closed-form expressions based on the Wikipedia article; in the general case it fell back to the usual recursive definition.

I noticed a pattern and started reading about Conway chained arrow notation, and realized that a chain of length 3 is essentially the same thing as Knuth's up-arrow notation with a number of arrows equal to the rightmost number. Then I realized that Knuth's notation is, in turn, the same as a hyperoperation whose degree is the number of arrows plus 2.
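That second correspondence is easy to check numerically for small inputs. Here's a sketch using the standard recursive definitions of both notations (function names are mine):

```python
def hyper(n, a, b):
    """Hyperoperation: degree 0 is successor, 1 addition, 2 multiplication, ..."""
    if n == 0: return b + 1
    if n == 1: return a + b
    if n == 2: return a * b
    if b == 0: return 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

def up(a, k, b):
    """Knuth's up-arrow a ^(k) b: one arrow is exponentiation."""
    if k == 1: return a ** b
    if b == 0: return 1
    return up(a, k - 1, up(a, k, b - 1))

# k arrows correspond to hyperoperation degree k + 2:
for a in (2, 3):
    for k in (1, 2):
        for b in (0, 1, 2, 3):
            assert up(a, k, b) == hyper(k + 2, a, b)
```

Keeping the exponents small matters: these values grow so fast that even b = 4 with two arrows already gives towers of exponents.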

This means that the best way to simplify and optimize the Ackermann function is to define it as follows:

A(m, n) = H(m, 2, n + 3) - 3

This delegates all the "recursive work" to whatever implementation of hyperoperations is used. The hyperoperation is simpler to define and easier to optimize, so I see this as an absolute win lol.
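The identity A(m, n) = H(m, 2, n + 3) − 3 can be sanity-checked against the classic two-argument definition for small arguments; a minimal sketch (no memoization or closed forms, just the bare recursions):

```python
def hyper(n, a, b):
    """Hyperoperation: degree 0 is successor, 1 addition, 2 multiplication, ..."""
    if n == 0: return b + 1
    if n == 1: return a + b
    if n == 2: return a * b
    if b == 0: return 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

def ackermann(m, n):
    """Classic two-argument (Ackermann-Peter) definition."""
    if m == 0: return n + 1
    if n == 0: return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def ackermann_via_hyper(m, n):
    """A(m, n) = H(m, 2, n + 3) - 3."""
    return hyper(m, 2, n + 3) - 3

# The two definitions agree (tested only for small m, n - values explode fast).
for m in range(4):
    for n in range(4):
        assert ackermann(m, n) == ackermann_via_hyper(m, n)
```

Any real optimization effort (memoization, closed forms for low degrees) can then live entirely inside `hyper`.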

Turns out Wikipedia *already* pointed that out, in the "Definitions" section, but I thought it was a third definition that returned different results compared to the classic definition and the modern (Péter) definition... **bruh**

I recently learned the following perspective on the connection between the vector space V and its algebraic dual V\*:

* We can treat V as the set of all finitely-supported functions from its basis B to the base field. This is essentially by definition: B is a basis of V exactly when V is a direct sum of B copies of the base field.
* We can treat V\* as the set of *all* functions from the basis B to the base field. This follows from the universal property of free modules: every function on the basis extends uniquely to a linear function on the whole space.

It is now clear why V and V\* are isomorphic if and only if V is finite-dimensional.
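In symbols (writing F for the base field), the two bullet points amount to the identifications:

```latex
% V is the direct sum (finitely-supported functions on the basis),
% while V* is the direct product (all functions on the basis).
V \;\cong\; \bigoplus_{b \in B} F \;=\; F^{(B)},
\qquad
V^{*} \;=\; \operatorname{Hom}(V, F) \;\cong\; \prod_{b \in B} F \;=\; F^{B},
```

where F^(B) denotes the finitely-supported functions B → F. When B is finite the sum and the product coincide; when B is infinite, F^B is strictly bigger as a set, though fully ruling out an abstract isomorphism takes a dimension count (the Erdős–Kaplansky theorem), which I'm glossing over here.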
