Figuring it out

This is just to test the \LaTeX commands. I seem to be able to insert mathematical symbols inside the text, like f, I or [a,b]. I should also be able to emulate the display environment. Let us write

\displaystyle f'(c)=\frac{f(b)-f(a)}{b-a}.

Yes, that is alright, but I had to center it manually. Can I write something meaningful, like a definition or a theorem, in a way that emphasizes it?

Theorem (the mean value theorem). Let f : [a,b] \rightarrow \mathbb{R} be a function that is continuous on the closed interval [a,b] and differentiable on the open interval (a,b). Then there exists a number c in (a,b) such that

\displaystyle f'(c)=\frac{f(b)-f(a)}{b-a}.

I think I will have to number my equation manually, like this:

\displaystyle \frac{g'(c)}{f'(c)}=\frac{g(b)-g(a)}{f(b)-f(a)}. \ (2)

I guess this is tolerable for now.

That is all!


A perturbation method

I had planned on doing a post, or maybe a series of posts, on scaling, homogeneity and dimensional analysis in both mathematics and physics. I tried to write down a few ideas but it got out of control quickly. I will go back to it if I find a way to present all that clearly and concisely. For the moment I will try to explain a trick that I have found both useful and quite typical of the way one has to think while practising mathematical analysis. An example explains it best.

Let us consider the set of square matrices of a given size with complex coefficients, that is

\displaystyle \mathcal{M}_n(\mathbb{C}).

The characteristic polynomial of a matrix is defined in the usual way

\displaystyle \chi_M(X)=\det(X I_n-M).

We wish to prove that the characteristic polynomial of a product of two matrices does not depend on the order of the factors (while the product itself does, of course), that is to say

\displaystyle \chi_{AB}=\chi_{BA}.

Let us assume first that both matrices are non-singular. Then we can prove the formula above by a straightforward calculation, using only the fact that the determinant of a product is the product of the determinants:

\displaystyle \chi_{AB}(X)=\det(X I_n-AB)=\det\big(A^{-1}(X I_n-AB)A\big)=\det(X I_n-BA)=\chi_{BA}(X).

So this is easy to prove for non-singular matrices. The next step is to realize that we can add to any matrix a small perturbation that makes it non-singular. Indeed, if we consider the determinant

\det(M+\epsilon I_n) ,

we see that it is a non-constant polynomial in the small parameter \epsilon, and it is therefore non-zero except for a finite number of values of \epsilon. Writing

\displaystyle M_{\epsilon}=M+\epsilon I_n,

we have the limit

\displaystyle \lim_{\epsilon \to 0}M_{\epsilon}=M

in the sense that the limit holds true for each coefficient.
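As a quick sanity check, here is a small Python sketch of the 2 by 2 case, where \det(M+\epsilon I_2)=\epsilon^2+\mathrm{tr}(M)\epsilon+\det(M); the helper names are arbitrary, and this is only a toy illustration of the non-vanishing claim.

```python
from fractions import Fraction

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def perturbed_det_poly(M):
    """Coefficients [1, tr(M), det(M)] of det(M + eps*I_2), viewed as the
    polynomial eps^2 + tr(M)*eps + det(M) in the perturbation eps."""
    return [1, M[0][0] + M[1][1], det2(M)]

M = [[1, 1], [1, 1]]                 # singular: det(M) = 0
print(perturbed_det_poly(M))         # [1, 2, 0]: vanishes only at eps = 0 and eps = -2
eps = Fraction(1, 2)                 # any eps away from those two roots works
M_eps = [[M[i][j] + (eps if i == j else 0) for j in range(2)]
         for i in range(2)]
print(det2(M_eps))                   # 5/4, non-zero, so M_eps is invertible
```

The perturbed determinant is a non-constant polynomial in \epsilon, so outside its finitely many roots the perturbed matrix is invertible.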

We can now reach the desired conclusion. Given two matrices A and B, set A_{\epsilon}=A+\epsilon I_n and B_{\epsilon}=B+\epsilon I_n. Except maybe for a finite number of values of the parameter, both perturbed matrices are non-singular, so

\displaystyle \chi_{A_{\epsilon}B_{\epsilon}}=\chi_{B_{\epsilon}A_{\epsilon}}.

After one minute of reflection, one sees that all coefficients of the characteristic polynomial of A_{\epsilon}B_{\epsilon} are polynomial, hence continuous, functions of the parameter. By passing to the limit as the parameter tends to zero, one gets the desired formula.

This is quite typical. If we want to prove something that is easy to establish in the generic case but problematic in a few special cases, we slightly perturb these special cases to obtain generic ones, then go back by making the perturbation tend to zero.
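For the sceptical reader, here is a small Python sketch checking the identity on a singular example; it computes characteristic polynomials exactly over the rationals with the Faddeev-LeVerrier recursion (the function names are my own, and this is a toy illustration, not a proof).

```python
from fractions import Fraction

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(X*I_n - A), computed
    exactly with the Faddeev-LeVerrier recursion over the rationals."""
    n = len(A)
    coeffs = [Fraction(1)]
    M = [[Fraction(1) if i == j else Fraction(0) for j in range(n)]
         for i in range(n)]                       # M_0 = I_n
    for k in range(1, n + 1):
        AM = mat_mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k  # c = -tr(A M_{k-1}) / k
        coeffs.append(c)
        M = [row[:] for row in AM]                # M_k = A M_{k-1} + c I_n
        for i in range(n):
            M[i][i] += c
    return coeffs

A = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # singular: second row is twice the first
B = [[0, 1, 0], [1, 0, 2], [3, 1, 1]]
print(char_poly(mat_mul(A, B)) == char_poly(mat_mul(B, A)))   # True
```

Even though A is singular, so the direct conjugation argument does not apply, the two characteristic polynomials coincide, exactly as the perturbation argument predicts.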

I have many other examples, but it is getting late. Also, basic LaTeX in WordPress gives truly heinous results. I know there are things that work better; I will try to find out about them and produce something readable.

That is all!

What’s the deal?

I’m a grad student in mathematics. I live near Paris (France). I hope writing this blog will help me put together my ideas about math and also improve my English. For the moment, I plan to post about things that are well known, but mostly through oral tradition. Unless otherwise stated, nothing here is original research. I will try to give credit when I can, but I mostly don’t remember who said what. I plan to keep things at a relatively elementary level in the near future anyway.

In time, I would welcome comments, especially (constructive) criticism about my math, my method of exposition, my writing style or my English.

That is all!