0 like 0 dislike
This Week I Learned: May 13, 2022

4 Answers

0 like 0 dislike
So, I’m a PhD student in CS, and I was feeling a bit embarrassed about being rusty on my calculus and various aspects of high-school algebra and analysis-adjacent material. I haven’t needed to think about complex numbers for almost fifteen years now. I definitely had a decent grasp of them at the time, but a few things, like the interpretation of multiplication and the meaning of the complex conjugate, always felt more like “dumb accidents” that shook out of the algebra of multiplication, at least the way I was taught in high school.

I’ve also spent some time working through an abstract algebra textbook on my own, when I’m not too busy with grad school. I like group theory a whole lot, but since I only work with fairly straightforward groups when I make my own examples, I never truly grasped why group actions and representation theory are a big deal. The correspondences I could work out were fairly obvious ones.

Anyway, I was lying awake in bed a few nights ago, trying to remember the geometric interpretation of the multiplication of complex numbers. The next day I sat down in a coffee shop with a pencil and a notebook and started playing around. Once I noted the similarity between multiplying by i and a specific linear transform, I realized I could construct an isomorphism between C under multiplication and a specific subgroup of GL(2,R), and was delighted to discover I’d worked out the matrix-based definition of complex numbers on my own (I had never seen or heard of it!).
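A quick way to see the isomorphism concretely (a sketch of my own, using NumPy; the name `phi` is just my label for the map):

```python
import numpy as np

def phi(z: complex) -> np.ndarray:
    """Represent z = a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 2 + 3j, -1 + 0.5j

# Homomorphism: multiplying complex numbers = multiplying their matrices.
assert np.allclose(phi(z * w), phi(z) @ phi(w))

# Multiplying by i is exactly the 90-degree rotation matrix...
assert np.allclose(phi(1j), np.array([[0, -1], [1, 0]]))

# ...so i * i = -1 becomes "rotating by 90 degrees twice is rotating by 180".
assert np.allclose(phi(1j) @ phi(1j), phi(-1 + 0j))
```

The image of phi (restricted to nonzero z) is exactly the rotation-and-scaling subgroup of GL(2,R).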

Thinking about multiplying complex numbers in terms of composing linear transforms (in particular the abelian subgroup of GL(2,R) consisting of rotations and scalings) immediately cleared up a whole bunch of things for me. The intuition for the relationship between the algebraic and geometric views of the complex conjugate shakes out immediately, which made me happy, and it made the definition of the multiplicative inverse seem much less arbitrary than 19-year-old me felt it was. Also, I was like “ohhh, that’s what representation theory is about.”
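For the record, here is how the conjugate and the inverse look under that matrix picture (my own summary, writing z = a + bi):

```latex
z = a + bi \;\longmapsto\; M = \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
\qquad \det M = a^2 + b^2 = |z|^2.
% The conjugate is just the transpose:
\bar{z} \;\longmapsto\; M^{\mathsf{T}},
% and the "arbitrary-looking" inverse formula is the ordinary matrix inverse:
z^{-1} = \frac{\bar{z}}{|z|^2} \;\longmapsto\; M^{-1} = \frac{M^{\mathsf{T}}}{\det M}.
```

So conjugation is “undo the rotation, keep the scaling,” and dividing by |z|² undoes the scaling too.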

At another level, I learned not to be embarrassed about reviewing seemingly elementary stuff, because revisiting them from a more sophisticated perspective is surprisingly fruitful!
0 like 0 dislike
0 like 0 dislike
This week I learned how to construct a functor from an abelian category to the category of pointed sets such that: a morphism is a monomorphism/epimorphism if and only if the corresponding function is injective/surjective; the functor respects the notions of kernels and images; it is faithfully exact; and it allows us to verify whether a diagram commutes by working with elements of pointed sets.

Notably, this is much weaker than the Freyd-Mitchell embedding theorem, because there's not much structure in the category of pointed sets. Furthermore, this functor is neither full nor faithful (which I still need to check). I do want to pursue the opportunity to change the codomain of the functor to the category of abelian groups; this should really test my understanding of colimits while giving us a slightly nicer functor.
0 like 0 dislike
0 like 0 dislike
I was trying to optimize my implementation of the Ackermann function. The old implementation used tail-call elimination, memoization, and a partial table of closed-form expressions based on the Wikipedia article; in the general case, it fell back to the usual recursive definition.

I noticed a pattern and started reading about Conway's chained arrow notation, and then realized that a chain of length 3 is essentially the same thing as Knuth's up-arrow notation with a number of arrows equal to the rightmost number. Then I realized another thing: Knuth's notation is the same as a hyperoperation whose "degree" is the number of arrows plus 2.

This means that the best way to simplify and optimize the Ackermann function is to define it as follows:

`A(m, n) = H(m, 2, n + 3) - 3`

This delegates all the "recursive work" to whatever implementation of hyperoperations is used. The hyperoperation is simpler to define and easier to optimize, so I see this as an absolute win lol.
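The identity is easy to sanity-check on small inputs (a sketch of my own; here `H` takes the rank as its first argument, matching `H(m, 2, n + 3)` above, and no memoization is included for clarity):

```python
def H(rank: int, a: int, b: int) -> int:
    """Hyperoperation: rank 0 = successor, 1 = addition, 2 = multiplication,
    3 = exponentiation, 4 = tetration, and so on."""
    if rank == 0:
        return b + 1
    if rank == 1:
        return a + b
    if rank == 2:
        return a * b
    if rank == 3:
        return a ** b
    if b == 0:
        return 1
    return H(rank - 1, a, H(rank, a, b - 1))

def ackermann(m: int, n: int) -> int:
    """Classic recursive (Ackermann-Peter) definition, for comparison."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(m, n) = H(m, 2, n + 3) - 3 on a small grid (m = 4 already explodes).
assert all(ackermann(m, n) == H(m, 2, n + 3) - 3
           for m in range(4) for n in range(4))
```

The point of the delegation: `H` hits closed forms (`a + b`, `a * b`, `a ** b`) for ranks up to 3 instead of recursing all the way down to successor.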

Turns out Wikipedia *already* pointed that out, in the "Definitions" section, but I thought it was a third definition that returned different results compared to the classic definition and the modern (Péter) definition... **bruh**
0 like 0 dislike
I recently learned the following perspective on the connection between the vector space V and its algebraic dual V\*:

* We can treat V as the set of all finitely-supported functions from its basis B to the base field. This is by definition: B is a basis of V exactly when V is a direct sum of B-many copies of the base field.
* We can treat V\* as the set of *all* functions from the basis B to the base field. This follows from the universal property of free modules: every function on the basis extends uniquely to a linear function on the whole space.

It is now obvious that V and V\* are isomorphic if and only if V is finite dimensional.
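In symbols (my own restatement of the two bullets, with k the base field):

```latex
V \;\cong\; \bigoplus_{b \in B} k
  \;=\; \{\, f : B \to k \mid f(b) \neq 0 \text{ for only finitely many } b \,\},
\qquad
V^{*} \;\cong\; \operatorname{Hom}_{k}\!\Big(\bigoplus_{b \in B} k,\; k\Big)
  \;\cong\; \prod_{b \in B} k
  \;=\; \{\, \text{all functions } f : B \to k \,\}.
```

The direct sum and the direct product coincide exactly when B is finite, which is where the if-and-only-if comes from.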
