Asaf Shachar
Automatic transfer of pointwise metric computations to bundle computations

There is a well-known folklore saying that "any linear-algebraic construction/statement can be lifted to vector bundles" (e.g., tensor products, direct sums, quotients, etc.).

I am interested in a metric version of this phenomenon:

Does every statement about inner-product spaces admit a vector bundle analog?

Specifically, I am interested in "derivative-type" results:

On various occasions, I need to compute derivatives of certain "geometric quantities" associated with bundle maps over a manifold (examples are given below).

Often, I find it's easier to start with an analogous finite-dimensional computation. The computation in the bundle context then becomes a routine adaptation of the original calculation, modulo some extra justifications (revolving around the compatibility of the connections with the metrics).

Soft Question: Is there a way to "automate" this transfer? (I want to avoid repeating essentially the same calculation twice.) In other words, is there a way to prove a "meta-theorem" which says that a result in the pointwise context carries over to the bundle context?

Main Example: Calculating the derivative of the determinant.

Theorem 1: Let $f:\M \to \N$ be a smooth map between $d$-dimensional oriented Riemannian manifolds. Define $\Cof df= (-1)^{d-1} \star_{f^*\TN}^{d-1} (\wedge^{d-1} df) \star_{\TM}^1.$ Then for all $V \in \Gamma(\TM)$ $$ d(\det df)(V)= \IP{\Cof df}{\nabla_V df}_{\TM,f^*{\TN}} . $$

Specific question: Can we deduce the theorem from the proposition, without using the proof of the proposition (as I do below)?

One obvious way to achieve this would be to view $p \mapsto \det(df_p)$ as the determinant of a varying map between fixed vector spaces. This can be done by representing $df$ w.r.t. orthonormal frames. However, one then needs to track the derivative of this matrix representation in terms of $V$, which looks cumbersome. (Even if this approach worked, it would be less aesthetic - an invariant way would be better.)
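As a quick numerical sanity check of the pointwise identity that Theorem 1 specializes to at a point, one can compare a finite-difference derivative of $\det$ with the cofactor pairing. (This sketch is not from the original post; it assumes that, for an invertible matrix $A$, $\Cof A$ agrees with the classical cofactor matrix $\det(A)\,A^{-T}$, and that $\IP{\cdot}{\cdot}$ is the Frobenius inner product.)

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))

# Classical cofactor matrix of an invertible A: Cof(A) = det(A) * A^{-T},
# whose entries are the partial derivatives of det at A (Jacobi's formula).
cof = np.linalg.det(A) * np.linalg.inv(A).T

# Central finite difference of t -> det(A + t*B) at t = 0
h = 1e-6
fd = (np.linalg.det(A + h * B) - np.linalg.det(A - h * B)) / (2 * h)

# Frobenius pairing <Cof A, B> -- the pointwise analog of <Cof df, nabla_V df>
pairing = float(np.sum(cof * B))

print(abs(fd - pairing))  # should be ~0 up to finite-difference error
```

Here $B$ plays the role of $\nabla_V df$; the two numbers agree to finite-difference accuracy.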

Edit:

As pointed out by Deane Yang, there is a more general version of Theorem 1 which is the right "bundle analog" of the finite-dimensional proposition:

Theorem 2: Let $E$ and $F$ be rank-$d$ oriented vector bundles over $\M$ with smooth metrics and compatible connections. Let $A:E \to F$ be a smooth bundle map. Define $\Cof A= (-1)^{d-1} \star_{F}^{d-1} (\wedge^{d-1} A) \star_{E}^1.$ Then for all $V \in \Gamma(\TM)$ $$ d(\det A)(V)= \IP{\Cof A}{\nabla_V A}_{E,F}. $$
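In the pointwise matrix setting one can also check that the formula $(-1)^{d-1} \star^{d-1} (\wedge^{d-1} A) \star^1$ really reproduces the classical cofactor matrix $\det(A)\,A^{-T}$. The following sketch (my own, not from the post) implements the two Hodge stars in the bases $e_j$ and $e_1 \wedge \dots \widehat{e_i} \dots \wedge e_d$, assuming the common sign convention $\alpha \wedge \star\alpha = |\alpha|^2 \,\mathrm{vol}$:

```python
import numpy as np

def star_cofactor(A):
    """Compute (-1)^(d-1) * star_{d-1} o (wedge^{d-1} A) o star_1 entrywise,
    representing (d-1)-vectors in the basis e_1 ^ ... ^ (e_i omitted) ^ ... ^ e_d."""
    d = A.shape[0]
    C = np.zeros_like(A)
    for j in range(d):                      # input basis vector e_j (0-indexed)
        s1 = (-1) ** j                      # star_1 e_j = (-1)^j * e_{j-hat}
        Aj = np.delete(A, j, axis=1)        # wedge^{d-1}A sends e_{j-hat} to
        for i in range(d):                  # A e_1 ^ ... (skip j) ... ^ A e_d;
            # its coefficient on e_{i-hat} is the (i, j) minor of A
            minor = np.linalg.det(np.delete(Aj, i, axis=0))
            s2 = (-1) ** (d - 1 - i)        # star_{d-1} e_{i-hat} = (-1)^(d-1-i) e_i
            C[i, j] = (-1) ** (d - 1) * s2 * s1 * minor
    return C

rng = np.random.default_rng(1)
d = 4
A = rng.standard_normal((d, d))
classical = np.linalg.det(A) * np.linalg.inv(A).T   # entries of grad(det) at A
print(np.max(np.abs(star_cofactor(A) - classical)))  # should be ~0
```

The sign bookkeeping collapses as $(-1)^{(d-1)+(d-1-i)+j}=(-1)^{i+j}$ (0-indexed), which is exactly the sign attached to the $(i,j)$ minor in the classical cofactor, confirming the prefactor $(-1)^{d-1}$ under this star convention.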

The proof of Theorem 2 is exactly the same as the proof of Theorem 1 (see below): just replace $df$ by $A$ everywhere (the proof does not use the fact that $df$ is the differential of a map, only the bundle structures).

The question still remains: can we use the statement of the proposition to deduce Theorem 2, without looking at the proof? (This is not a trivial consequence of the proposition, since in the proposition the two vector spaces, while different, are fixed.)

Proof of Theorem 1: We want to imitate the proof of the proposition. We shall see that a miracle happens: metricity comes to our aid. (The "meta-theorem" should somehow bypass all of this.)

Let $e_1,\dots,e_d$ be a local positively oriented orthonormal frame of $\TM$. Then $$ \det(df)= \star^d_{f^*T\N} \circ \bigwedge^d df \circ \star^0_{\TM}(1)= \star^d_{f^*T\N} \big( df(e_1) \wedge \dots \wedge df(e_d) \big), $$ so $$ V\det df = V \star^d_{f^*T\N}\big( df(e_1) \wedge \dots \wedge df(e_d) \big) \stackrel{(1)}{=} \star^d_{f^*T\N} \nabla_V \big( df(e_1) \wedge \dots \wedge df(e_d) \big)= \star^d_{f^*T\N} \sum_{i=1}^d \big( df(e_1) \wedge \dots \wedge \nabla_V \big(df(e_i)\big) \wedge \dots \wedge df(e_d) \big) = $$ $$ \star^d_{f^*T\N} \sum_{i=1}^d \big( df(e_1) \wedge \dots \wedge (\nabla_V df)e_i \wedge \dots \wedge df(e_d) \big) + \star^d_{f^*T\N} \sum_{i=1}^d \big( df(e_1) \wedge \dots \wedge df(\nabla_{V}e_i) \wedge \dots \wedge df(e_d) \big) \stackrel{(2)}{=} $$ $$ \IP{\Cof df}{\nabla_V df}_{\TM,f^*{\TN}}+ \star^d_{f^*T\N} \bigwedge^d df\Big( \sum_{i=1}^d e_1 \wedge \dots \wedge \nabla_Ve_i \wedge \dots \wedge e_d\Big)= \IP{\Cof df}{\nabla_V df}_{\TM,f^*{\TN}}+ \star^d_{f^*T\N} \bigwedge^d df\big( \nabla_V (e_1 \wedge \dots \wedge e_d) \big)=\IP{\Cof df}{\nabla_V df}_{\TM,f^*{\TN}}. $$ Here $(1)$ uses that $\star^d_{f^*T\N}$ is parallel (compatibility of the connections with the metrics), $(2)$ uses the definition of $\Cof df$, and the last equality holds because $e_1 \wedge \dots \wedge e_d$ has constant unit norm in the rank-one bundle $\bigwedge^d \TM$, hence is parallel.

Admittedly, this repetition is not huge, but I have other examples in mind where the computations are much longer, so a general "transfer principle" would be nice to have.

(These examples are variational, so the setting is a little different, but I thought it would be easier to start with this example. In the variational setting I would like to have something like a transfer principle from the derivative of a fiberwise integrand to the variational gradient, but this requires some assumptions on the integrand, of course, in order for metricity to come into play. Perhaps invariance under isometries should be enough? Maybe this should be discussed in a follow-up question, though.)


Proof of the proposition: Let $A_t$ be a smooth family of mappings in $\Hom(V,W)$ with $A(0)=A$, $A'(0)=B$, and let $e_1,\dots,e_d$ be a positively oriented orthonormal basis of $V$. Then $$ \det(A_t)= \star^d_W \circ \bigwedge^d A_t \circ \star^0_V(1)= \star^d_W \bigwedge^d A_t \big( e_1 \wedge \dots \wedge e_d \big)= \star^d_W \big( A_t e_1 \wedge \dots \wedge A_te_d \big), $$ and differentiating at $t=0$ gives $$ \frac{d}{dt}\Big|_{t=0}\det(A_t)= \star^d_W \sum_{i=1}^d \big( A e_1 \wedge \dots \wedge B e_i \wedge \dots \wedge A e_d \big)= \IP{\Cof A}{B}. $$


Notice added Draw attention by Asaf Shachar
Bounty Started worth 50 reputation by Asaf Shachar
Source Link
Asaf Shachar
  • 6.9k
  • 2
  • 22
  • 81
Loading