This text is just a write-up of some fairly horrible calculations I have made recently. The aim is to find how an eigenvalue of the Laplace operator with Dirichlet boundary conditions, let us say the first one to fix the ideas, varies when the boundary is deformed. The formula is very classical and, as far as I know, was discovered by Jacques Hadamard. With some computations (quite a lot, actually), it can be deduced from the Feynman-Hellmann formula, which gives the first derivative of the eigenvalues of a family of self-adjoint operators.

**Statement of the problem**

Let $\Omega$ be a bounded open set in $\mathbb{R}^d$ ($d \ge 1$ being the dimension) with a smooth boundary $\Gamma = \partial\Omega$. One can think of the unit disk in $\mathbb{R}^2$ or the unit ball in $\mathbb{R}^3$ (in those cases the computations can actually be carried out). We consider the eigenvalue problem

$$\begin{cases} -\Delta u = \lambda\,u & \text{in } \Omega, \\ u = 0 & \text{on } \Gamma. \end{cases}$$
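
As a purely numerical illustration of the model problem, here is a standard finite-difference sketch in dimension $1$, where the answer is known: the first Dirichlet eigenvalue of $-u''$ on $(0,1)$ is $\pi^2$. The grid size and iteration count below are arbitrary choices, not taken from the text.

```python
import math

# Finite-difference sanity check: the first Dirichlet eigenvalue
# of -u'' on (0, 1) is pi^2.
N = 2000                      # interior grid points, step h = 1/(N+1)
h = 1.0 / (N + 1)

def solve_tridiag(b):
    """Solve A x = b for A = (1/h^2) tridiag(-1, 2, -1) (Thomas algorithm)."""
    main = [2.0 / h ** 2] * N
    off = -1.0 / h ** 2
    c, d = [0.0] * N, [0.0] * N
    c[0] = off / main[0]
    d[0] = b[0] / main[0]
    for i in range(1, N):
        m = main[i] - off * c[i - 1]
        c[i] = off / m
        d[i] = (b[i] - off * d[i - 1]) / m
    x = [0.0] * N
    x[-1] = d[-1]
    for i in range(N - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Inverse power iteration converges to the lowest eigenpair.
v = [math.sin(math.pi * (i + 1) * h) + 0.1 for i in range(N)]
for _ in range(50):
    w = solve_tridiag(v)
    norm = math.sqrt(sum(t * t for t in w))
    v = [t / norm for t in w]

# Rayleigh quotient v^T A v of the normalized iterate.
Av = [(-(v[i - 1] if i > 0 else 0.0) + 2 * v[i]
       - (v[i + 1] if i < N - 1 else 0.0)) / h ** 2 for i in range(N)]
lam1 = sum(a * b for a, b in zip(Av, v))
assert abs(lam1 - math.pi ** 2) < 1e-4
```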

We limit ourselves to the first eigenvalue (which is always simple, according to a classical result). Now let us assume that $V$ is a smooth vector field defined in $\mathbb{R}^d$, with compact support. For any real number $t$, we define the mapping $\Phi_t$ from $\mathbb{R}^d$ to $\mathbb{R}^d$ by

$$\Phi_t(x) = x + t\,V(x).$$

For $|t|$ small enough, this actually defines a smooth change of variables, as shown by the following lemma.

Lemma

There exists $t_0 > 0$ such that for $|t| < t_0$, the mapping $\Phi_t$ is a $C^\infty$-diffeomorphism of $\mathbb{R}^d$ onto itself.

**Proof**

Let us fix a matrix norm $\|\cdot\|$ on $M_d(\mathbb{R})$, for example the operator norm associated with the Euclidean norm on $\mathbb{R}^d$, and let us set $L = \sup_{x \in \mathbb{R}^d}\|DV(x)\|$ (which is finite since $V$ is smooth with compact support).

For any $x \in \mathbb{R}^d$,

$$D\Phi_t(x) = I + t\,DV(x),$$

where $I$ is the identity matrix. Therefore, if

$$|t|\,L < 1,$$

$D\Phi_t(x)$ is invertible at any $x \in \mathbb{R}^d$. In that case, $\Phi_t$ is a local diffeomorphism, and in particular $\Phi_t(\mathbb{R}^d)$ is an open set. We can thus take $t_0 = 1/L$ (and $t_0 = +\infty$ if $L = 0$).

It remains to show that the mapping $\Phi_t : \mathbb{R}^d \to \mathbb{R}^d$ is a bijection. We have

$$\Phi_t(x) - \Phi_t(y) = x - y + t\,\big(V(x) - V(y)\big).$$

Since

$$|V(x) - V(y)| \le L\,|x - y|,$$

we get

$$|\Phi_t(x) - \Phi_t(y)| \ge (1 - |t|\,L)\,|x - y|$$

(recall that $|t|\,L < 1$). This obviously implies that $\Phi_t$ is injective. It also implies that $\Phi_t(\mathbb{R}^d)$ is a closed set. Indeed, let $(y_n)$ be a sequence of points in $\Phi_t(\mathbb{R}^d)$ converging to $y \in \mathbb{R}^d$. There exists a (unique) sequence $(x_n)$ of points in $\mathbb{R}^d$ such that $y_n = \Phi_t(x_n)$. As a convergent sequence, $(y_n)$ satisfies a Cauchy criterion, and therefore so does $(x_n)$ thanks to the preceding inequality. There exists a limit $x$ of $(x_n)$, and since $\Phi_t$ is continuous, $\Phi_t(x) = y$. The set $\Phi_t(\mathbb{R}^d)$ is both open and closed, and obviously nonempty. By connectedness of $\mathbb{R}^d$, $\Phi_t(\mathbb{R}^d) = \mathbb{R}^d$.
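
The two mechanisms of the proof (the lower bound giving injectivity, and a contraction $x \mapsto y - tV(x)$ producing a preimage of any $y$) can be checked numerically. The field $V$ below is a hypothetical example; it is globally Lipschitz with $L \le 1$ rather than compactly supported, which is all the argument actually uses.

```python
import math
import random

# Hypothetical example field: V(x1, x2) = (cos x2, sin x1).
# Its Jacobian has operator norm at most 1, so L = 1 works.
def V(x):
    return (math.cos(x[1]), math.sin(x[0]))

L = 1.0
t = 0.5          # |t| * L < 1, as in the lemma

def Phi(x):
    vx = V(x)
    return (x[0] + t * vx[0], x[1] + t * vx[1])

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]

# |Phi_t(x) - Phi_t(y)| >= (1 - |t| L) |x - y|  =>  injectivity
for x in pts:
    for y in pts:
        assert dist(Phi(x), Phi(y)) >= (1 - abs(t) * L) * dist(x, y) - 1e-12

# Surjectivity, constructively: solve Phi_t(x) = y by iterating the map
# x -> y - t V(x), a contraction precisely because |t| L < 1.
def Phi_inverse(y, iters=100):
    x = y
    for _ in range(iters):
        vx = V(x)
        x = (y[0] - t * vx[0], y[1] - t * vx[1])
    return x

y = (1.3, -0.7)
assert dist(Phi(Phi_inverse(y)), y) < 1e-12
```

The fixed-point inversion is exactly the Banach fixed point argument hiding behind the surjectivity part of the proof.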

Let us note $\Omega_t = \Phi_t(\Omega)$ and $\lambda(t)$ the first eigenvalue of the problem

$$\begin{cases} -\Delta u_t = \lambda(t)\,u_t & \text{in } \Omega_t, \\ u_t = 0 & \text{on } \partial\Omega_t. \end{cases}$$

The case $t = 0$ corresponds to the initial problem. We want to compute $\lambda'(0)$.

**Pullback of the Laplacian**

If $f$ is a function on $\Omega_t$, we define, using standard notations of differential geometry, the function $\Phi_t^* f$ on $\Omega$ which is the \emph{pullback} of $f$:

$$(\Phi_t^* f)(x) = f(\Phi_t(x)).$$

This operation, corrected by a Jacobian factor, gives us a unitary mapping $U_t$ from $L^2(\Omega_t)$ to $L^2(\Omega)$:

$$U_t f = J_t^{1/2}\,\Phi_t^* f,$$

with $J_t = |\det D\Phi_t|$ (this is easy to check with the change of variable formula). We then define the operator $H(t)$, depending on $t$, on the \emph{fixed} Hilbert space $L^2(\Omega)$ by

$$H(t) = U_t \circ (-\Delta_{\Omega_t}) \circ U_t^{-1},$$

where $-\Delta_{\Omega_t}$ is the Dirichlet Laplacian on $\Omega_t$.
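
As a sanity check of the unitarity claim, the change of variable identity $\int_{\Omega_t}|f|^2\,dy = \int_\Omega J_t\,|f\circ\Phi_t|^2\,dx$ can be verified numerically in dimension $1$. The field $V$, the function $f$ and the quadrature below are illustrative choices, not part of the text; $V$ vanishes at the endpoints, so $\Omega_t = \Omega = (0,1)$.

```python
import math

t = 0.1
def V(x): return math.sin(math.pi * x)
def dV(x): return math.pi * math.cos(math.pi * x)
def Phi(x): return x + t * V(x)
def J(x): return 1.0 + t * dV(x)          # |det D Phi_t| in dimension 1
def f(y): return math.cos(3 * y) + y * y  # arbitrary test function

# Midpoint-rule quadrature of both sides of the identity.
N = 4000
lhs = 0.0
rhs = 0.0
for k in range(N):
    x = (k + 0.5) / N
    lhs += f(x) ** 2 / N                  # integral over Omega_t = (0, 1)
    rhs += J(x) * f(Phi(x)) ** 2 / N      # pulled-back integral over Omega

assert abs(lhs - rhs) < 1e-5
```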

The operator $H(t)$ is selfadjoint and, by invariance of the Sobolev spaces under a change of variables, its domain is $H^2(\Omega) \cap H^1_0(\Omega)$.

Let us now give an explicit formula for $H(t)$. We will use a weak formulation (i.e. consider $H(t)v$ as a distribution). Let us fix $u \in C^\infty(\overline{\Omega_t})$ with $u = 0$ on $\partial\Omega_t$, and $\varphi \in C^\infty_0(\Omega)$ (i.e. smooth and compactly supported in $\Omega$). By the chain rule

$$\nabla(\Phi_t^* f)(x) = (D\Phi_t(x))^{T}\,(\nabla f)(\Phi_t(x)),$$

and therefore if $v = U_t u$, that is $u \circ \Phi_t = J_t^{-1/2} v$,

$$(\nabla u)(\Phi_t(x)) = (D\Phi_t(x))^{-T}\,\nabla\big(J_t^{-1/2} v\big)(x).$$

We set $A_t = (D\Phi_t)^{-1}(D\Phi_t)^{-T}$. We have then, by Green's formula (the test function $(J_t^{-1/2}\varphi)\circ\Phi_t^{-1}$ is compactly supported in $\Omega_t$),

$$\langle H(t)v, \varphi\rangle_{L^2(\Omega)} = \big\langle -\Delta u,\, U_t^{-1}\varphi\big\rangle_{L^2(\Omega_t)} = \int_{\Omega_t} \nabla u(y)\cdot\nabla\Big(\big(J_t^{-1/2}\varphi\big)\circ\Phi_t^{-1}\Big)(y)\,dy.$$

By the change of variable $y = \Phi_t(x)$, we get

$$\langle H(t)v, \varphi\rangle_{L^2(\Omega)} = \int_\Omega J_t\,\Big\langle A_t\,\nabla\big(J_t^{-1/2}v\big), \nabla\big(J_t^{-1/2}\varphi\big)\Big\rangle\,dx.$$

Since $\varphi \in C^\infty_0(\Omega)$ is arbitrary, we get the formula

$$H(t)v = -J_t^{-1/2}\,\mathrm{div}\Big(J_t\,A_t\,\nabla\big(J_t^{-1/2}v\big)\Big). \tag{$\ast$}\label{eq.PullLap}$$

We can then see $\lambda(t)$ as the first eigenvalue of the problem

$$H(t)v = \lambda(t)\,v, \qquad v \in H^2(\Omega) \cap H^1_0(\Omega), \quad v \neq 0.$$

We have a variable selfadjoint operator $H(t)$ acting on a fixed Hilbert space $L^2(\Omega)$, and we want to compute the derivative of its first eigenvalue.

**The Feynman-Hellmann formula**

**Formal approach**

We first find the formula without worrying about proving the various convergences and regularities or determining the operator domains. In a general setting, let $\mathcal{H}$ be a Hilbert space and $(H(t))_t$ a family of selfadjoint operators acting on $\mathcal{H}$. Assume that for each $t$, $(\lambda(t), u(t))$ is an eigenpair with normalized eigenvector, i.e.

$$H(t)\,u(t) = \lambda(t)\,u(t), \qquad \|u(t)\| = 1.$$

Differentiating the previous equality with respect to $t$, we get

$$H'(t)\,u(t) + H(t)\,u'(t) = \lambda'(t)\,u(t) + \lambda(t)\,u'(t).$$

Let us now take the scalar product of both sides with $u(t)$:

$$\langle H'(t)u(t), u(t)\rangle + \langle H(t)u'(t), u(t)\rangle = \lambda'(t) + \lambda(t)\,\langle u'(t), u(t)\rangle.$$

Using the fact that $H(t)$ is selfadjoint and that $H(t)u(t) = \lambda(t)u(t)$, we find

$$\langle H(t)u'(t), u(t)\rangle = \langle u'(t), H(t)u(t)\rangle = \lambda(t)\,\langle u'(t), u(t)\rangle,$$

and finally

$$\lambda'(t) = \langle H'(t)u(t), u(t)\rangle.$$
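
The formal computation can be tested on the simplest possible example: a family of $2\times 2$ real symmetric matrices $H(t) = H_0 + tB$, for which $H'(0) = B$. The matrices below are arbitrary sample values; the smallest eigenvalue is computed in closed form and its derivative at $0$ is compared, by finite differences, with $\langle B u, u\rangle$.

```python
import math

# H(t) = H0 + t * B, so H'(0) = B.  H0 and B are arbitrary examples.
H0 = ((2.0, 1.0), (1.0, 3.0))
B = ((1.0, -1.0), (-1.0, 0.5))

def lowest_eig(a, b, c):
    """Smallest eigenvalue of the symmetric matrix [[a, b], [b, c]]."""
    return 0.5 * ((a + c) - math.sqrt((a - c) ** 2 + 4 * b * b))

def lam(t):
    return lowest_eig(H0[0][0] + t * B[0][0],
                      H0[0][1] + t * B[0][1],
                      H0[1][1] + t * B[1][1])

# Normalized eigenvector u of H0 for its smallest eigenvalue m:
# (b, m - a) is an eigenvector when b != 0.
m = lam(0.0)
u = (H0[0][1], m - H0[0][0])
n = math.hypot(*u)
u = (u[0] / n, u[1] / n)

# Feynman-Hellmann prediction <B u, u>.
Bu = (B[0][0] * u[0] + B[0][1] * u[1], B[1][0] * u[0] + B[1][1] * u[1])
predicted = Bu[0] * u[0] + Bu[1] * u[1]

# Central finite difference of lambda at t = 0.
h = 1e-5
numerical = (lam(h) - lam(-h)) / (2 * h)

assert abs(predicted - numerical) < 1e-6
```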

**A result**

Let us now give a precise statement. It is surely not the most general possible, but it will suffice for our purpose.

Proposition

Let $\mathcal{H}$ be a Hilbert space, $\mathcal{D}$ a dense subspace of $\mathcal{H}$, and $(H(t))_{|t|<t_0}$ a family of selfadjoint operators on $\mathcal{H}$. We assume that, for all $t$, $H(t)$ has compact resolvent, the domain of $H(t)$ is $\mathcal{D}$, and $\lambda(t)$ is the first eigenvalue of $H(t)$, with $u(t)$ an associated normalized eigenvector. We assume additionally that $u(t) \to u(0)$ in $\mathcal{H}$ when $t \to 0$, and that there exist operators $H'(0)$ and $R(t)$, symmetric on $\mathcal{D}$, such that

$$H(t) = H(0) + t\,H'(0) + t^2 R(t) \quad \text{on } \mathcal{D}, \qquad \sup_{|t| < t_0} \|R(t)\,u(0)\| < \infty.$$

Then $\lambda$ is differentiable at $0$ and

$$\lambda'(0) = \langle H'(0)\,u(0), u(0)\rangle.$$

**Proof**

The hypotheses are of course chosen so that we can mimic the formal computations. Let $|t|$ be smaller than $t_0$. Then

$$H(t)\,u(t) = \lambda(t)\,u(t) \qquad \text{and} \qquad H(0)\,u(0) = \lambda(0)\,u(0),$$

and therefore

$$\big(H(t) - H(0)\big)u(t) = t\,H'(0)\,u(t) + t^2 R(t)\,u(t) = \lambda(t)\,u(t) - H(0)\,u(t).$$

We take the scalar product of both sides with $u(0)$ and get

$$t\,\langle H'(0)u(t), u(0)\rangle + t^2\langle R(t)u(t), u(0)\rangle = \lambda(t)\,\langle u(t), u(0)\rangle - \langle H(0)u(t), u(0)\rangle.$$

We use the fact that $H(0)$, $H'(0)$ and $R(t)$ are selfadjoint (or symmetric) and the equation $H(0)u(0) = \lambda(0)u(0)$, and get, after cancellation,

$$\frac{\lambda(t) - \lambda(0)}{t}\,\langle u(t), u(0)\rangle = \langle u(t), H'(0)u(0)\rangle + t\,\langle u(t), R(t)u(0)\rangle.$$

Letting $t$ tend to $0$ gives the desired result, since $\langle u(t), u(0)\rangle \to \|u(0)\|^2 = 1$ and $\|R(t)u(0)\|$ stays bounded.

**Application to the original problem**

**$L^2$-convergence**

Using the Courant minimax principle, one can prove (and we will admit it here) that $\lambda(t) \to \lambda(0)$ when $t \to 0$.

Let us choose a sequence $(t_n)$ such that $t_n \neq 0$ and $t_n \to 0$. Then, for $n \in \mathbb{N}$, we choose $u_n$ such that $H(t_n)u_n = \lambda(t_n)u_n$, $\|u_n\|_{L^2(\Omega)} = 1$, and $\langle u_n, u(0)\rangle \ge 0$. The quadratic form of $H(t_n)$ evaluated at $u_n$ equals $\lambda(t_n)$, which is bounded, and this form controls the $H^1$ norm uniformly in $n$; hence $(u_n)$ is a bounded sequence in $H^1_0(\Omega)$ and $(u_n)$ converges weakly in $H^1_0(\Omega)$ to some $u_\infty$ (up to a subsequence). Since $H^1_0(\Omega)$ is compactly embedded in $L^2(\Omega)$, $(u_n)$ converges strongly to $u_\infty$ in $L^2(\Omega)$. Therefore $\|u_\infty\|_{L^2(\Omega)} = 1$ and $\langle u_\infty, u(0)\rangle \ge 0$. It is easy to see that $u_\infty$ satisfies $H(0)u_\infty = \lambda(0)u_\infty$ in the sense of distributions. Therefore $u_\infty$ is a normalized eigenfunction for $H(0)$ associated with $\lambda(0)$. Since we have assumed $\lambda(0)$ simple, and according to the conditions $\|u_\infty\| = 1$ and $\langle u_\infty, u(0)\rangle \ge 0$, we find $u_\infty = u(0)$. The limit being independent of the subsequence, we have shown that $u(t) \to u(0)$ in $L^2(\Omega)$ when $t \to 0$.

**Strong operator convergence**

The convergence result for $H(t)$ is obvious if we pay close attention to the form of $H(t)$. The operator $H(t)$ is shown by formula \eqref{eq.PullLap} to be a selfadjoint, elliptic, second order differential operator with smooth coefficients. It can be written

$$H(t) = H(0) + t\,H'(0) + t^2 R(t),$$

where $H(0)$ and $H'(0)$ are selfadjoint second order differential operators with smooth coefficients not depending on $t$, and $R(t)$ is a symmetric second order differential operator with smooth coefficients, bounded from $H^2(\Omega)\cap H^1_0(\Omega)$ to $L^2(\Omega)$ uniformly in $t$ (the coefficients of $R(t)$ depend smoothly on $t$ for $|t| < t_0$). Obviously $H(0) = -\Delta$ and therefore, for any $v \in H^2(\Omega)\cap H^1_0(\Omega)$,

$$H(t)v \longrightarrow H(0)v \quad \text{in } L^2(\Omega)$$

when $t \to 0$.

**Asymptotic expansion**

It remains to compute explicitly the operator $H'(0)$. It is again simpler to use the weak formulation. Let $\varphi$ be in $C^\infty_0(\Omega)$ and $v$ be in $H^2(\Omega)\cap H^1_0(\Omega)$. We first note that

$$(D\Phi_t)^{-1} = I - t\,DV + O(t^2)$$

and deduce

$$A_t = (D\Phi_t)^{-1}(D\Phi_t)^{-T} = I - t\,\big(DV + DV^{T}\big) + O(t^2).$$

We get

$$J_t = \det\big(I + t\,DV\big) = 1 + t\,\mathrm{div}\,V + O(t^2)$$

and

$$J_t^{\pm 1/2} = 1 \pm \frac{t}{2}\,\mathrm{div}\,V + O(t^2).$$

Then

$$\langle H(t)v, \varphi\rangle = \int_\Omega \langle \nabla v, \nabla\varphi\rangle\,dx - t\int_\Omega \big\langle (DV + DV^T)\,\nabla v, \nabla\varphi\big\rangle\,dx - \frac{t}{2}\int_\Omega \nabla(\mathrm{div}\,V)\cdot\nabla(v\varphi)\,dx + O(t^2).$$

We get the quadratic form associated with the operator $H'(0)$:

$$\langle H'(0)v, \varphi\rangle = -\int_\Omega \big\langle (DV + DV^T)\,\nabla v, \nabla\varphi\big\rangle\,dx + \frac{1}{2}\int_\Omega \Delta(\mathrm{div}\,V)\,v\,\varphi\,dx.$$

We get the second order differential operator

$$H'(0)v = \mathrm{div}\big((DV + DV^T)\,\nabla v\big) + \frac{1}{2}\,\Delta(\mathrm{div}\,V)\,v.$$
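
The first-order expansions $J_t = \det(I + t\,DV) = 1 + t\,\mathrm{div}\,V + O(t^2)$ and $A_t = I - t(DV + DV^T) + O(t^2)$ can be checked numerically on a sample constant Jacobian: halving $t$ should divide the remainder by about $4$. The matrix below is a hypothetical value of $DV$ at a fixed point.

```python
# M plays the role of DV at a fixed point (arbitrary sample values).
M = ((0.3, -0.2), (0.5, 0.1))

def mat_id():
    return ((1.0, 0.0), (0.0, 1.0))

def add(A, B, s=1.0):
    """A + s * B for 2x2 matrices."""
    return tuple(tuple(A[i][j] + s * B[i][j] for j in range(2)) for i in range(2))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((A[1][1] / d, -A[0][1] / d), (-A[1][0] / d, A[0][0] / d))

def transpose(A):
    return tuple(tuple(A[j][i] for j in range(2)) for i in range(2))

def A_t(t):
    P = inv(add(mat_id(), M, t))          # (I + t M)^{-1}
    return mul(P, transpose(P))           # (I + t M)^{-1} (I + t M)^{-T}

def err(t):
    # max-entry distance between A_t and its first-order expansion I - t(M + M^T)
    approx = add(mat_id(), add(M, transpose(M)), -t)
    exact = A_t(t)
    return max(abs(exact[i][j] - approx[i][j]) for i in range(2) for j in range(2))

# Halving t divides the remainder by ~4, i.e. the error is O(t^2).
t = 1e-3
assert 3.9 < err(t) / err(t / 2) < 4.1
```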

According to the proposition, the function $\lambda$ is differentiable at $0$ and

$$\lambda'(0) = \langle H'(0)u, u\rangle = -\int_\Omega \big\langle (DV + DV^T)\,\nabla u, \nabla u\big\rangle\,dx + \frac{1}{2}\int_\Omega \Delta(\mathrm{div}\,V)\,u^2\,dx,$$

where $u = u(0)$ is the normalized first Dirichlet eigenfunction on $\Omega$.
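
The volume formula $\lambda'(0) = -\int_\Omega \langle (DV + DV^T)\nabla u, \nabla u\rangle\,dx + \tfrac12 \int_\Omega \Delta(\mathrm{div}\,V)\,u^2\,dx$ can be tested in dimension $1$, where everything is explicit. With $\Omega = (0,1)$, $u(x) = \sqrt{2}\sin(\pi x)$ and the (hypothetical, not compactly supported) field $V(x) = x^2$, one has $\Omega_t = (0, 1+t)$ and $\lambda(t) = \pi^2/(1+t)^2$.

```python
import math

# 1-D check: DV + DV^T reduces to 2 V' and Delta(div V) to V'''.
def dV(x): return 2 * x          # V'(x) with V(x) = x^2
def dddV(x): return 0.0          # V'''

def u(x): return math.sqrt(2.0) * math.sin(math.pi * x)
def du(x): return math.sqrt(2.0) * math.pi * math.cos(math.pi * x)

# Midpoint quadrature of -int 2 V' (u')^2 + (1/2) int V''' u^2 over (0, 1).
N = 4000
s = 0.0
for k in range(N):
    x = (k + 0.5) / N
    s += (-2 * dV(x) * du(x) ** 2 + 0.5 * dddV(x) * u(x) ** 2) / N

# Exact eigenvalue of the moved interval (0, 1 + t).
def lam(t):
    return (math.pi / (1 + t)) ** 2

h = 1e-6
fd = (lam(h) - lam(-h)) / (2 * h)    # lambda'(0), equal to -2 pi^2
assert abs(s - fd) < 1e-4
```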

**Hadamard formula**

We can recover the classical Hadamard formula from the previous one with integrations by parts. In all the computations below, recall that $u = 0$ on $\Gamma$, so that $\nabla u = (\partial_n u)\,n$ on $\Gamma$, and that $-\Delta u = \lambda u$ with $\lambda = \lambda(0)$. We have

$$\frac{1}{2}\int_\Omega \Delta(\mathrm{div}\,V)\,u^2\,dx = -\int_\Omega u\,\nabla u\cdot\nabla(\mathrm{div}\,V)\,dx = \int_\Omega \mathrm{div}\,V\,\big(|\nabla u|^2 - \lambda u^2\big)\,dx,$$

and therefore

$$\lambda'(0) = \int_\Omega \mathrm{div}\,V\,\big(|\nabla u|^2 - \lambda u^2\big)\,dx - \int_\Omega \big\langle (DV + DV^T)\,\nabla u, \nabla u\big\rangle\,dx.$$

We get

$$\int_\Omega \mathrm{div}\,V\,|\nabla u|^2\,dx = \int_\Gamma (V\cdot n)\,|\partial_n u|^2\,d\sigma - 2\int_\Omega \langle D^2u\,\nabla u, V\rangle\,dx$$

and

$$-\lambda\int_\Omega \mathrm{div}\,V\,u^2\,dx = 2\lambda\int_\Omega u\,(V\cdot\nabla u)\,dx.$$

On the other hand,

$$\int_\Omega \big\langle (DV + DV^T)\,\nabla u, \nabla u\big\rangle\,dx = 2\int_\Gamma (V\cdot n)\,|\partial_n u|^2\,d\sigma + 2\lambda\int_\Omega u\,(V\cdot\nabla u)\,dx - 2\int_\Omega \langle D^2u\,\nabla u, V\rangle\,dx.$$

We obtain the classical Hadamard formula for the derivative:

$$\lambda'(0) = -\int_\Gamma \left|\frac{\partial u}{\partial n}\right|^2 (V\cdot n)\,d\sigma.$$
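
In dimension $1$ the Hadamard formula can be checked against the explicit eigenvalue: on $(0, \ell)$ the first Dirichlet eigenvalue is $\pi^2/\ell^2$ with normalized eigenfunction $u(x) = \sqrt{2/\ell}\,\sin(\pi x/\ell)$, and moving the right endpoint at unit speed ($V\cdot n = 1$ there) should give $d\lambda/d\ell = -|u'(\ell)|^2$. The script below is an illustrative check, not part of the derivation.

```python
import math

def lam(l):
    """First Dirichlet eigenvalue of -u'' on (0, l)."""
    return (math.pi / l) ** 2

def du_at_right_end(l):
    """u'(l) for u(x) = sqrt(2/l) sin(pi x / l)."""
    return math.sqrt(2.0 / l) * (math.pi / l) * math.cos(math.pi)

l = 1.7
h = 1e-6
numerical = (lam(l + h) - lam(l - h)) / (2 * h)   # d lambda / d l
hadamard = -du_at_right_end(l) ** 2               # -|du/dn|^2 with V.n = 1
assert abs(numerical - hadamard) < 1e-6
```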