My colleague and friend, Mario Micheli, has co-authored (with P. Michor and D. Mumford) the paper “Sectional Curvature in terms of the Cometric, with Applications to the Riemannian Manifolds of Landmarks” (arxiv). This is a landmark (sic!) paper: it expresses the curvature of the manifold of landmarks in terms of the cometric (the inverse of the metric), which yields a much simpler formula (though by no means a simple one) than the expression in terms of the metric.

Another interesting part is that the authors decomposed the curvature into four terms, thus attempting to describe what exactly makes the curvature positive or negative. The decomposition is given in physical terms (strain, force, compression). Armed with strong physical intuition, one might be able to decipher what exactly makes the curvature non-zero.

A more general paper on expressing curvature in terms of the cometric is coming out later, as I have been told by MM.

Had a great time attending the Shape FRG meeting in London.
Interesting trivia fact: the idea of using Teichmüller spaces as a way to parametrize curves
(as in the paper “2D-Shape Analysis Using Conformal Mapping”, pdf) was suggested to David by Curt McMullen. Although David worked on moduli spaces, he was not aware of the definition of Teichmüller spaces. After Curt McMullen explained it to him, David decided to parametrize shapes this way.

A few interesting quotes.
Comparing the role of the Catholic Church in Europe and the Chinese bureaucracy in the development of the sciences, David noted: “As well known, computing the solar and lunar eclipses is a bitch!”

David is always careful about using proper dimensions of quantities and proper scales. It is surprising to see such care in a mathematician of David's caliber. It showed up in the metrics he developed and in his remarks at the conference.
David: “I would suggest you to state explicitly which kernel you used and the size of it in all your publications. This way you leave a paper trace. This is called the scientific method.”

And David was really happy when a student told him that the axes in his 3D image were millimeters.
David: “In Nature and Science they always show the scale! Always!”
Laurent: “But this is just an MRI image with 1-by-1-by-1mm voxel size. So in this case it coincides with millimeters. But in general nobody cares.”
David: “Shit!”

In my last Differential Geometry class I introduced my students to the Calculus of Variations. This is an important topic whenever one wants to minimize anything, and especially when one wants to find geodesics. There is a very neat fact that I came across again, and I just wanted to write it down.

For the reader interested in Differential Geometry in general, I would like to refer you to the nicely written Differential Geometry notes by my friend Mario Micheli.
He is currently at UCLA. The notes are available here and here. The lectures with an easy introduction to the Calculus of Variations are 26, 27, 28.

I will be very brief in describing this nifty little thing. The question is: “find a function $q(t)\in C^1[a,b]$ with fixed endpoints, i.e. $q(a)=y_0$ and $q(b)=y_1$, such that the graph of $q(t)$ has minimal length”. Everything boils down to minimizing the functional
$J(q) = \int_a^b \sqrt{1+\dot{q}(t)^2}\,dt$
with respect to the function $q(t)$.

The interesting bit comes when one computes the gradient of this functional. One gets (again, for details refer to the links above, or do the computation yourself; “it is an easy and helpful exercise, muahaha”):
$\nabla J(q) = -\frac{\ddot{q}}{(1+\dot{q}^2)^{3/2}}$.

If one imagines the simple gradient descent algorithm on any chosen path between the points $(a,y_0)$ and $(b,y_1)$, the steepest descent follows the direction $-\nabla J(q)$. Where the function $q(t)$ is concave, $\ddot{q}<0$, so the steepest descent flattens the concave part. Similarly, it flattens the convex part of the function, where the second derivative is positive. Also, the more concave (or convex) the function is, the bigger the gradient, so the function gets pushed toward the straight line very quickly.
(As you have guessed, the straight line is the solution to this problem.)

It is a very good exercise to implement steepest descent for this problem, and then to follow it with conjugate gradient and some quasi-Newton method. That’s how I learned these optimization techniques. Cheers.
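Here is a minimal numpy sketch of the steepest descent; the grid size, step size, and the initial wiggle are my own illustrative choices, and the explicit update needs a step no larger than about $h^2/2$ to be stable:

```python
import numpy as np

# Steepest descent for J(q) = ∫_a^b sqrt(1 + q'(t)^2) dt with fixed endpoints.
# Grid size, step size, and initial wiggle are illustrative choices only.

a, b = 0.0, 1.0
y0, y1 = 0.0, 1.0
n = 51
t = np.linspace(a, b, n)
h = t[1] - t[0]

# start from the straight line plus a deliberate wiggle
q = y0 + (y1 - y0) * (t - a) / (b - a) + 0.1 * np.sin(2 * np.pi * (t - a) / (b - a))

step = 0.25 * h**2          # stable step for this explicit scheme (<= h^2/2)
for _ in range(30000):
    qdot = np.gradient(q, h)                              # q'
    qddot = (q[2:] - 2 * q[1:-1] + q[:-2]) / h**2         # q'' on the interior
    grad = -qddot / (1.0 + qdot[1:-1] ** 2) ** 1.5        # ∇J(q)
    q[1:-1] -= step * grad                                 # endpoints stay fixed

straight = y0 + (y1 - y0) * (t - a) / (b - a)
print(np.max(np.abs(q - straight)))   # tiny: the path has flattened to a line
```

As the prose above predicts, the concave bump of the sine wiggle is pushed down, the convex bump is pushed up, and the path settles onto the straight line.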

Lowering and raising indices, in other words the canonical isomorphisms between the tangent bundle $TM$ and the cotangent bundle $T^*M$, are called musical isomorphisms.
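A small coordinate sketch of the flat ($\flat$) and sharp ($\sharp$) maps; the metric here is my own illustrative choice (the polar-coordinate metric $g = \mathrm{diag}(1, r^2)$ at a fixed radius):

```python
import numpy as np

# Flat lowers an index with the metric (w_i = g_ij v^j); sharp raises one
# with the inverse metric, i.e. the cometric (v^i = g^ij w_j).
# The metric g = diag(1, r^2) is just an illustrative choice.

r = 2.0
g = np.diag([1.0, r**2])        # metric g_ij
g_inv = np.linalg.inv(g)        # cometric g^ij

def flat(v):
    """Tangent vector -> covector (lower the index)."""
    return g @ v

def sharp(w):
    """Covector -> tangent vector (raise the index)."""
    return g_inv @ w

v = np.array([3.0, 5.0])
w = flat(v)
print(w)            # [ 3. 20.]
print(sharp(w))     # recovers v: [3. 5.]
```

The two maps are inverse to each other by construction, which is exactly what makes the isomorphisms canonical once a metric is fixed.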

I will post some reviews of books that I find useful in the course of my study. I am not sure how technical I want my reviews to be or how general and vague.
For the time being I’ll try to do both 🙂

The first book I would like to review is the book “Shapes and Diffeomorphisms” by Laurent Younes, Johns Hopkins University.
This book summarizes a lot of work done by the author at the Center for Imaging Science, where Michael Miller’s group does a lot of interesting and exciting work on medical images.

This is a wonderful book for anyone who wants to learn about shape representation and matching. It is very thoroughly written. It starts with curves and how they can be represented, then moves on to surfaces. It discusses the Euler-Lagrange (Euler-Arnold) equations on the groups of diffeomorphisms, which are fundamental to all of Computational Anatomy, and to pattern matching in general. It is a wonderful overview of the methods and techniques in use today. The author also discusses numerical implementation and the issues that arise in the field.

Sometimes I find the notation a bit too heavy, with many sub- and superscripts and variable names that are not intuitive (to me, at least). In particular, in the Diffeomorphic Matching chapter (Chapter 11) a general construction is presented first and then demonstrated on several examples. The general notation is reused in the examples, which I find a bit cumbersome. One may be better off rewriting the specific construction in the landmark case with one's own notation; the general case then seems to make more sense. At least that’s how it worked for me.

Later I am going to post a computation, which is just an expanded proof of a theorem from the book.

I recommend this book to anyone trying to learn about Pattern Theory, and in particular the emerging applied discipline, Computational Anatomy.

P.S. If your institution subscribes to the Springer publishing house, you can view this book for free online.

In the class I am teaching I tried to count the number of independent components of the Riemann curvature tensor $R_{ijkl}$, accounting for all of its symmetries. We’ll call it RCT in this note.
It turned out to be not so straightforward, so I decided to write it down here.
First of all, here are the three symmetries/skew-symmetries of $R_{ijkl}$:
$R_{ijkl}=-R_{jikl}$,
$R_{ijkl}=-R_{ijlk}$,
$R_{ijkl}=R_{klij}$.
The first two expressions tell us that $R_{ijkl}$ is skew-symmetric in the first two and in the last two indices. And from the last one we can deduce that $R$ is symmetric under the exchange of the pairs $ij$ and $kl$.
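A quick sanity check of the count in Python (a sketch: it also uses the first Bianchi identity $R_{ijkl}+R_{iklj}+R_{iljk}=0$, which belongs to the full symmetry set alongside the three relations above):

```python
from math import comb

# Counting independent components of R_{ijkl} in dimension n.
# The pair symmetries leave m(m+1)/2 components, where m = n(n-1)/2 is the
# number of antisymmetric index pairs [ij]; the first Bianchi identity
# removes C(n, 4) more, giving the classical count n^2 (n^2 - 1) / 12.

def riemann_components(n):
    m = n * (n - 1) // 2              # independent antisymmetric pairs [ij]
    pair_sym = m * (m + 1) // 2       # symmetric under the exchange ij <-> kl
    return pair_sym - comb(n, 4)      # first Bianchi identity

for n in range(2, 5):
    print(n, riemann_components(n), n**2 * (n**2 - 1) // 12)
# n=2: 1, n=3: 6, n=4: 20
```

So in dimension 2 a single number (the Gaussian curvature, up to a factor) determines the whole tensor, and in dimension 4 there are 20 independent components.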

Here are the slides of the talk “What’s an infinite dimensional manifold and how can it be useful in hospitals?” that was given by my adviser, David Mumford, at the University of Coimbra, in September 2007.

It should give you a general idea of what the field is about. The field being Computational Anatomy, or Pattern Theory (a branch of it). Some familiarity with differential geometry concepts is expected, but one can just look for some nice pictures and graphs. My favorite is the multiple image of a galaxy due to the gravitational lensing. This is probably the most powerful and the most direct observation for the curvature of our space-time.

For some reason I always get confused about which one is the right coset and which one is the left coset.
It depends on which side you look at it from.
Let’s write it down; maybe this mechanical memory will help.

So, given a group $G$, and its subgroup $H$, consider an element $g\in G$.
Then, the left coset of $H$ in $G$ is denoted by
$gH = \{gh: h\in H \}$;
and the right coset of $H$ in $G$ is denoted by
$Hg = \{hg: h\in H \}$.

Now that I have written it down, it makes sense. This is a coset of the set $H$!
It gets translated around by an element $g$ either from the left (thus we get left cosets),
or from the right (producing right cosets).
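A minimal sanity check in Python; the choice of $G=S_3$ and the subgroup $H=\{e,(0\,1)\}$ is mine, picked because $H$ is not normal, so the left and right cosets visibly differ:

```python
from itertools import permutations

# G = S_3, permutations of {0, 1, 2} written as tuples (p[0], p[1], p[2]);
# H = {e, (0 1)} is the subgroup generated by one transposition.

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]             # {identity, swap of 0 and 1}

g = (1, 2, 0)                           # a 3-cycle
left = {compose(g, h) for h in H}       # gH: translate H from the left
right = {compose(h, g) for h in H}      # Hg: translate H from the right
print(left)                             # {(1, 2, 0), (2, 1, 0)}
print(right)                            # {(1, 2, 0), (0, 2, 1)}
print(left == right)                    # False: H is not normal in S_3
```

Had $H$ been normal, every $gH$ would equal $Hg$ and the distinction would be invisible; a non-normal subgroup is the honest test case.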

A nice post by the wonderful Terry Tao about the Arnold formalism.
It sums everything up concisely, all in one place.

Recently I gave David the new Russian edition of his famous Red Book (unfortunately he already had one, from the publisher I assume :).

He said (not a very precise quote): “I am surprised this book became so popular. I was so disorganized. I am not even sure if all the theorems are correct there. I was just learning the damn thing! I guess that makes it good for a first time reader.”