This Week I Learned: June 03, 2022

2 Answers

This week I began reading Tu's *An Introduction to Manifolds* for the first time since I got it for Christmas last year. I'm only a couple of pages in, but the challenge is pitched just right. The hardest part so far has been the confusing notation: on the very first page of actual mathematics there are superscripts carrying subscripts, and I actually misread one when I wrote down the definition it appears in. The bottom of that same page gives, in the same notation, the general form of a Taylor series of a general function between Euclidean spaces (there was an *n* somewhere; I'm a bit drunk atm, and he didn't put it in a formal definition or theorem or anything). The actual lemma was about a "good enough" Taylor series for functions that are merely C^∞, the proof of which I picked up quite quickly. So that was a trip.

The exercise I tried yesterday reminded me why I'm doing maths, and I'll sketch it out for you because it was quite cool. The question concerned the function f with f(x) = e^(-1/x) for x > 0 and f(x) = 0 for x ≤ 0. The point is that this is an example of a C^∞-smooth function which is not real-analytic everywhere: all its derivatives at 0 are 0, so its Taylor series at 0 is identically 0, yet every neighbourhood of 0 contains points where f is nonzero, so the Taylor series disagrees with f on any such neighbourhood. It's canonical enough an example that Roger Penrose cited this exact same function to make the exact same point in his *The Road to Reality*.

The question began with a straightforward induction: prove that for x > 0, the k-th derivative of f is e^(-1/x) times a polynomial in 1/x of degree 2k. I finished that off easily enough. The next part of the question was more interesting; it read:

> Prove that *f* is C^∞ on ℝ and that f^(k)(0) = 0 for all k ≥ 0.

Everywhere but x = 0, this was trivial: for x < 0 all derivatives are identically 0, which is trivially continuous, and for x > 0 they have the form I proved by induction in the previous part, which is continuous by the algebra of continuous functions. But at x = 0 there would be a problem. Not a difficult problem, since the question says all derivatives must be 0 there, but it wouldn't be completely trivial to show. I decided to attack it with one-sided limits. From the left there wouldn't be a problem; it only remained (as I saw it; I might be horribly wrong here) to prove that the right-sided limit would also be 0.
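The inductive closed form used above can be spot-checked symbolically (my own sketch, assuming sympy is available): for small k, the k-th derivative really is e^(-1/x) times a degree-2k polynomial in 1/x.

```python
# Added sketch: strip the e^(-1/x) factor from the k-th derivative and
# confirm the remaining factor is a polynomial of degree 2k in u = 1/x.
import sympy as sp

x, u = sp.symbols("x u", positive=True)
f = sp.exp(-1 / x)

for k in range(1, 5):
    # Multiplying by e^(1/x) cancels the exponential factor termwise.
    poly_part = sp.expand(sp.diff(f, x, k) * sp.exp(1 / x))
    p = sp.Poly(poly_part.subs(x, 1 / u), u)
    assert p.degree() == 2 * k
    print(k, p.degree())
```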

I initially went about this a stupid way; remembering my algebra of limits from my first real analysis class, I split the limit into the e^(-1/x) factor and the polynomial-in-1/x factor. The former quite patently goes to 0 in the limit, so if I could bound the polynomial factor, I'd have my limit. Immediately after thinking of this, I thought "well, how the fuck am I gonna do that??", because it can't actually be done: all the powers of 1/x blow up massively in the vicinity of 0. But no fear! For I quickly realised I could do it with l'Hôpital's rule: put the polynomial in 1/x in the numerator and e^(1/x) in the denominator. I stumbled a little here, realising that one or two applications wouldn't do it, but then it occurred to me that I had to "run out the clock" on the numerator: substituting u = 1/x, the limit becomes p(u)/e^u as u → ∞, and applying the rule 2k + 1 times differentiates the degree-2k polynomial down to 0, which gives the limit I wanted.
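Numerically the "run out the clock" intuition checks out (again a sketch of my own, not the worked proof): a sample degree-2k polynomial over e^u dies off fast as u grows.

```python
# Added sketch: p(u)/e^u -> 0 as u -> infinity for a sample degree-6
# polynomial (k = 3, so degree 2k = 6); the exponential wins.
import math

def ratio(u: float, k: int = 3) -> float:
    p = sum((i + 1) * u**i for i in range(2 * k + 1))  # degree-2k polynomial
    return p / math.exp(u)

samples = [ratio(u) for u in (10.0, 20.0, 40.0, 80.0)]
assert samples == sorted(samples, reverse=True)  # decreasing toward 0
assert samples[-1] < 1e-20
print(samples)
```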

As I say, I'm a bit drunk recounting this, and I didn't get as far as *actually* applying l'Hôpital's rule like this, so it might not work out exactly as I said. But I think I'm good for it. Thanks for coming to my TED talk, lmao.
I learned a very elegant proof of Gauss's Lemma from number theory, which asserts that the Legendre symbol (a/p) equals (-1)^μ, where μ is the number of the multiples a, 2a, ..., ((p-1)/2)a whose least absolute residue modulo p is negative (the "least negative residues").

I think the proof is elegant because it hinges on the construction of an unassuming yet useful set, and combines quite a few techniques, via some very epic deductions, to finally reach the conclusion.

Here's the proof as I understand it.

The assertion is that (a/p) = (-1)^μ, for an odd prime p and a with gcd(a, p) = 1.

We begin by constructing the set S = {m, 2m, 3m, ..., Nm}, where N = (p-1)/2 (an integer, since p is odd) and gcd(m, p) = 1; this m plays the role of a in the statement. We then construct another set T from S by reducing each element modulo p to its absolute least residue, so that every element of T lies in the open interval (-p/2, p/2).
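To make the construction concrete, here's a tiny worked instance (my own illustration; p = 11 and m = 7 are arbitrary picks satisfying the hypotheses):

```python
# Illustration (mine, not from the proof): S = {m, 2m, ..., Nm}, and T is
# each element of S reduced into the interval (-p/2, p/2).
p, m = 11, 7          # odd prime p, gcd(m, p) = 1
N = (p - 1) // 2

S = [j * m for j in range(1, N + 1)]
T = [(s % p) - p if (s % p) > p // 2 else (s % p) for s in S]

print(S)  # [7, 14, 21, 28, 35]
print(T)  # [-4, 3, -1, -5, 2]
```

Here μ would be 3 (three negative entries in T), so Gauss's Lemma predicts (7/11) = (-1)^3 = -1.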

We will establish three properties of T for later use. First, any two distinct elements t_i and t_j of T are incongruent modulo p: returning to S, if im ≡ jm (mod p), then p divides (i - j)m, and since gcd(m, p) = 1, p must divide i - j, which is impossible for distinct i, j between 1 and N. Next, every t_i is nonzero, because no element of S can possibly be a multiple of p: gcd(m, p) = 1, and the coefficients 1 through N are all smaller than p.

Finally, we claim that no two elements of T are negatives of each other: for t_1 and t_2 in T, t_1 ≠ -t_2. This is seemingly unmotivated, but it will be of use later. Assume t_1 = -t_2, so that t_1 + t_2 = 0. Since every t_i is the result of reducing some element s_i of S modulo p — that is, adding or subtracting some multiple of p — there exist s_1 and s_2 in S with s_1 + s_2 = kp for some integer k; in other words, s_1 + s_2 is a multiple of p. By the definition of S we can write s_1 = j_1 m and s_2 = j_2 m for coefficients 1 ≤ j_1, j_2 ≤ N, so (j_1 + j_2)m = kp. But even with j_1 = j_2 = (p-1)/2, we get j_1 + j_2 = p - 1, so 0 < j_1 + j_2 < p; since gcd(m, p) = 1, p cannot divide (j_1 + j_2)m, a contradiction. Thus t_1 ≠ -t_2.

Ok. Continuing from here, these three properties pin down the shape of T. T has N = (p-1)/2 elements, all nonzero integers in (-p/2, p/2), pairwise incongruent modulo p, and no two of them negatives of each other. So the absolute values |t_i| are N distinct integers in {1, 2, ..., N}, which forces them to be exactly {1, 2, ..., N}. In other words, T = {±1, ±2, ±3, ..., ±N}, with each magnitude appearing exactly once under some choice of sign. (In the special case where every element happens to be positive, T is simply {1, 2, ..., N}.)
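That structural claim is easy to machine-check on small examples (again my own illustration, over a few arbitrary (p, m) pairs):

```python
# Check (mine): the absolute values of the reduced residues are exactly
# {1, ..., N}, i.e. T = {±1, ..., ±N} with each magnitude appearing once.
for p, m in [(11, 7), (13, 5), (17, 3)]:
    N = (p - 1) // 2
    T = [((j * m) % p) - p if ((j * m) % p) > p // 2 else (j * m) % p
         for j in range(1, N + 1)]
    assert sorted(abs(t) for t in T) == list(range(1, N + 1))
print("T = {±1, ..., ±N} in every sample case")
```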

With this in consideration, we can now move toward the finish. Since each t_i ≡ s_i (mod p), the products agree: prod(t_i) ≡ prod(s_i) (mod p). Thus (±1)(±2)(±3)•...•(±N) ≡ (m)(2m)(3m)•...•(Nm) (mod p). Since the integers 1 through N are all relatively prime to the modulus, we can cancel them from both sides, leaving (±1)(±1)(±1)•...•(±1) ≡ (m)(m)(m)•...•(m) (mod p). Now, how many m's are there? S ran from m to Nm, so there are exactly N of them, giving (±1)(±1)(±1)•...•(±1) ≡ m^{(p-1)/2} (mod p). And what is the LHS of this congruence? At the start we defined μ to be the number of least negative residues modulo p, so the long chain of ±1s equals (-1)^μ, where μ counts the negative elements of T — "least" coming from the notion that they are absolutely reduced, hence "least negative residues modulo p".

Recalling Euler's Criterion, which asserts that a^{(p-1)/2} ≡ (a/p) (mod p) for any a coprime to p (and which is far easier to prove), we can take a = m and write (a/p) ≡ (-1)^μ (mod p). However, there is one final step: both sides are binary, either 1 or -1, and since p is an odd prime, 1 ≢ -1 (mod p). The congruence therefore forces equality, so we must have (a/p) = (-1)^μ, which is what we wanted to prove.

QED
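For what it's worth, the whole identity is easy to machine-check (this is my addition, not part of the proof), pitting the μ-count against Euler's criterion computed with fast modular exponentiation:

```python
# End-to-end check (mine): (a/p) = (-1)^mu for every a coprime to p,
# with (a/p) computed via Euler's criterion a^((p-1)/2) mod p.
def legendre_euler(a: int, p: int) -> int:
    r = pow(a, (p - 1) // 2, p)   # three-argument pow: modular exponentiation
    return -1 if r == p - 1 else r

def gauss_mu(a: int, p: int) -> int:
    # Count the negatives among the absolute least residues of a, 2a, ..., Na.
    N = (p - 1) // 2
    return sum(1 for j in range(1, N + 1) if (j * a) % p > p // 2)

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    for a in range(1, p):
        assert legendre_euler(a, p) == (-1) ** gauss_mu(a, p)
print("Gauss's Lemma agrees with Euler's criterion on all sample cases")
```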

Anyway, yes, I thought this proof was cool, and that's what I have learned this week so far.

