Connections, The Kähler Condition, and Curvature – I. Connections in Riemannian Manifolds

I am a second year math major at UChicago, and I plan to write some blog posts here on complex geometry as a way of helping myself learn the material. I might also write some posts on other stuff.

* * *

First let us start with the case of a smooth manifold {M} of real dimension {m}. Let us look at a tangent vector {X} of {M} at the point {P}. It means the following. We take a curve {\gamma: (-1,1) \rightarrow M} with {\gamma(0) = P}. In local coordinates {x^i} the curve {\gamma} is given by {x^i = \gamma^i(t)}. The tangent vector {X} is given by a column vector {\xi} whose components {\xi^i} are {{{d\gamma^i}\over{dt}}(0)}. If we use another local coordinate system {y^i}, then the tangent vector {X} is given by a column vector {\eta} with components {\eta^i}. By the chain rule the two column vectors {\xi} and {\eta} are related by {\eta^i = \xi^j{{\partial y^i}\over{\partial x^j}}(P)}. Formally the transformation rule can be described by {\xi^i {\partial\over{\partial x^i}} = \eta^i {\partial\over{\partial y^i}}}. Another way of looking at the formal expression {\xi^i {\partial\over{\partial x^i}}} is that one knows a tangent vector if and only if one knows how to take the partial derivative of any function in the direction of the tangent vector, because the components of the tangent vector are precisely the partial derivatives of the coordinate functions along the tangent vector. The formal expression {\xi^i {\partial\over{\partial x^i}}} is simply the partial differential operator in the direction of the tangent vector.
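To make this concrete, here is a small symbolic check (a sketch in Python with sympy; the particular curve, the point {P = (1,1)}, and the choice of polar coordinates as the second coordinate system are mine for illustration): the components of the tangent vector computed directly from the curve in the new coordinates agree with the chain rule {\eta^i = \xi^j {{\partial y^i}\over{\partial x^j}}(P)}.

```python
import sympy as sp

# Check the chain rule eta^i = xi^j * (d y^i / d x^j)(P) for a change of
# coordinates y = (r, theta) from Cartesian x = (x1, x2).
x1, x2, t = sp.symbols('x1 x2 t', positive=True)

# A curve gamma(t) through P = (1, 1) with tangent xi = (2, 3) at t = 0.
gamma = sp.Matrix([1 + 2*t, 1 + 3*t])

# New coordinates y(x): polar coordinates.
y = sp.Matrix([sp.sqrt(x1**2 + x2**2), sp.atan2(x2, x1)])

# eta computed directly: differentiate y(gamma(t)) at t = 0.
eta_direct = sp.diff(y.subs({x1: gamma[0], x2: gamma[1]}), t).subs(t, 0)

# eta computed by the chain rule: Jacobian (dy/dx)(P) applied to xi.
J = y.jacobian([x1, x2]).subs({x1: 1, x2: 1})
eta_chain = J * sp.Matrix([2, 3])

assert (eta_direct - eta_chain).applyfunc(sp.simplify) == sp.zeros(2, 1)
```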

The set of tangent vectors forms a bundle over {M} in the following sense. Suppose {M} is covered by coordinate charts {U_\alpha} with coordinates {x_\alpha^i}. Through the coordinate chart {\{U_\alpha, x_\alpha^i\}} we identify all tangent vectors at points of {U_\alpha} by column vectors {\xi}. So the totality of all tangent vectors at points of {U_\alpha} is given by {U_\alpha \times \mathbb{R}^m}. So the totality of all tangent vectors at points of {M} is obtained by taking the disjoint union of {U_\alpha \times \mathbb{R}^m} for all {\alpha} and then identifying {(P, \xi_\alpha) \in U_\alpha \times \mathbb{R}^m} with {(P, \xi_\beta) \in U_\beta \times \mathbb{R}^m} by {\xi_\alpha^i = \xi_\beta^j {{\partial x_\alpha^i}\over{\partial x_\beta^j}}}, where {\xi_\alpha^i} is the {i}th component of {\xi_\alpha}. So we have a bundle with transition functions {\left({{\partial x_\alpha^i}\over{\partial x_\beta^j}}\right)}. This is the tangent bundle and we denote it by {T_M}.
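The identifications above are consistent because the Jacobian transition matrices are mutually inverse on overlaps (and satisfy the cocycle condition for triple overlaps). Here is a sketch of the two-chart case in Python with sympy, for Cartesian and polar coordinates on the plane minus the origin; the specific charts are my own choice for illustration.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x1, x2 = sp.symbols('x1 x2', positive=True)

# Chart alpha: Cartesian (x1, x2); chart beta: polar (r, theta).
x_of_polar = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])
polar_of_x = sp.Matrix([sp.sqrt(x1**2 + x2**2), sp.atan2(x2, x1)])

J_ab = x_of_polar.jacobian([r, th])     # d x_alpha / d x_beta
J_ba = polar_of_x.jacobian([x1, x2])    # d x_beta  / d x_alpha

# Evaluated at the same point of the overlap, the two transition
# matrices are inverse to each other.
prod = (J_ab * J_ba.subs({x1: r*sp.cos(th), x2: r*sp.sin(th)})).applyfunc(sp.simplify)
assert prod == sp.eye(2)
```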

Suppose now we have a tangent vector field {\xi^i(x){\partial\over{\partial x^i}}} on some open subset {U} of {M}. That is, we have a section of the tangent bundle {T_M} over {U}. We would like to be able to differentiate the tangent vector field {\xi^i(x){\partial\over{\partial x^i}}} along some given direction and get a tangent vector field. This is a question of how to differentiate the section of a vector bundle and get a section as a result. We would like to tackle this question in the general setting. Suppose we have a vector bundle {V} of rank {r} with transition functions {g_{\alpha\beta}} with respect to a covering {\{U_\alpha\}}. A section {s} over an open subset {U} is given by {\{s_\alpha\}} so that {s_\alpha} is an {r}-tuple of functions over {U \cap U_\alpha} and {s_\alpha = g_{\alpha\beta}s_\beta}. Here no summation is used and {s_\alpha, s_\beta} are column vectors and {g_{\alpha\beta}} is an {r \times r} matrix. For our differentiation we do not specify the direction and consider the total derivative. From {s_\alpha = g_{\alpha\beta}s_\beta} we have {ds_\alpha = g_{\alpha\beta}ds_\beta + dg_{\alpha\beta}s_\beta}. If the term {dg_{\alpha\beta}s_\beta} is not present, we get a section of {V} from {ds_\alpha} after we specify a direction. However, in general we have the term {dg_{\alpha\beta}s_\beta} and it is not possible to use {ds_\alpha} for differentiation and get a section. We have to use other ways to differentiate sections of a bundle.

We want the procedure of differentiation to obey the Leibniz formula for differentiating products. So to define differentiation it suffices to define differentiation for a local basis {e_\alpha} {(1 \le \alpha \le r)} of {V}. We denote the (total) differential operator by {D}. Then {De_\alpha} is a {V}-valued 1-form and we can express it in terms of our local basis {e_\alpha} {(1 \le \alpha \le r)} and get {De_\alpha = \omega_\alpha^\beta e_\beta}, where {\omega_\alpha^\beta} is a 1-form. To be able to differentiate sections is equivalent to having an {r \times r} matrix valued 1-form {(\omega_\alpha^\beta)}. This matrix valued 1-form {(\omega_\alpha^\beta)} depends on the choice of {e_\alpha}. Suppose we have another local basis {\tilde{e}_\alpha} which is related to {e_\alpha} by {\tilde{e}_\alpha = g_\alpha^\beta e_\beta}. Then

\displaystyle D\tilde{e}_\alpha = D(g_\alpha^\beta e_\beta) = g_\alpha^\beta De_\beta + dg_\alpha^\beta e_\beta = g_\alpha^\beta \omega_\beta^\gamma e_\gamma + dg_\alpha^\beta e_\beta.

In matrix notation we have {D\tilde{e} = g\omega e + dg\,e}, where {\tilde{e}, e} are column vectors with components {\tilde{e}_\alpha, e_\alpha} and {g = (g_\alpha^\beta)} and {\omega = (\omega_\alpha^\beta)}. Thus {D\tilde{e} = (g\omega g^{-1} + dg\,g^{-1})\tilde{e}} and {\tilde{\omega} = g\omega g^{-1} + dg\,g^{-1}}. So to be able to differentiate sections is equivalent to having an {r \times r} matrix valued 1-form {\omega} for any local basis {e} so that the transformation rule {\tilde{\omega} = g\omega g^{-1} + dg\,g^{-1}} is satisfied. In that case we say that we have a connection.
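One can check symbolically that the transformation rule is consistent under composition of changes of basis: transforming by {g_1} and then by {g_2} agrees with transforming once by {g_2 g_1}. The sketch below (Python with sympy; the rank-2 bundle over a 1-dimensional base and the particular matrices are my own choices) uses the fact that over a 1-dimensional base every 1-form is {W(x)\,dx}, so {d} reduces to {d/dx}.

```python
import sympy as sp

x = sp.symbols('x')

def transform(W, g):
    """The rule  omega~ = g omega g^{-1} + dg g^{-1}  in components."""
    return g * W * g.inv() + g.diff(x) * g.inv()

W = sp.Matrix([[sp.sin(x), x], [0, sp.exp(x)]])   # an arbitrary connection matrix
g1 = sp.Matrix([[1, x], [0, 1]])                  # two changes of basis
g2 = sp.Matrix([[sp.exp(x), 0], [x**2, 1]])

lhs = transform(transform(W, g1), g2)
rhs = transform(W, g2 * g1)
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)
```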

When we have a connection for {V}, we have an induced connection on the dual bundle {V^*} of {V}. It is given as follows. Suppose {s^*} is a local section of {V^*} and {s} is a local section of {V}. Then {Ds^*} is defined so that {d\langle{s^*, s}\rangle = \langle Ds^*, s\rangle + \langle s^*, Ds\rangle} is satisfied, where {\langle{s^*, s}\rangle} means the evaluation of {s^*} at {s}. If {e_*^\alpha} is the dual basis of {e_\alpha}, then {De_*^\alpha = -\omega_\beta^\alpha e_*^\beta}, i.e. the connection matrix for the dual basis is simply the negative transpose {-\omega^T} of {\omega}.
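A quick symbolic check of the defining identity (a sketch in Python with sympy; the rank-2 bundle over a 1-dimensional base and the specific sections are my own choices): with the dual connection matrix taken to be minus the transpose, the Leibniz rule for the pairing holds.

```python
import sympy as sp

# Over a 1-dimensional base, omega = W(x) dx and D reduces to d/dx on components.
x = sp.symbols('x')
W = sp.Matrix([[x, sp.sin(x)], [1, x**2]])   # arbitrary connection matrix

s = sp.Matrix([sp.exp(x), x])        # components of a section of V
t = sp.Matrix([x**3, sp.cos(x)])     # components of a section of V*

Ds = s.diff(x) + W.T * s             # (Ds)^beta = ds^beta + s^alpha omega_alpha^beta
Dt = t.diff(x) - W * t               # dual connection: minus the transpose

lhs = sp.diff((t.T * s)[0], x)       # d<s*, s>
rhs = (Dt.T * s + t.T * Ds)[0]       # <Ds*, s> + <s*, Ds>
assert sp.simplify(lhs - rhs) == 0
```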

Suppose we have an inner product {\langle\cdot, \cdot\rangle} along each fiber of {V}. We say that a connection {\omega} is compatible with the metric if for any section {s} over a curve {\gamma} in {M} with {Ds = 0} along {\gamma} the length of {s} is constant along {\gamma}. This condition is equivalent to {d\langle s, t\rangle = \langle Ds, t\rangle + \langle s, Dt \rangle}. Clearly this equation implies that if {Ds = 0} along {\gamma} then {\langle s,s \rangle} is constant along {\gamma}. Conversely, for a point {P} and a tangent vector {X} at {P} we can find a curve {\gamma} passing through {P} with tangent vector {X} at {P}. We choose a frame field {e_\alpha} {(1 \le \alpha \le r)} along {\gamma} so that the frame field is orthonormal at {P} and each {e_\alpha} has zero covariant derivative along {\gamma}. Then the frame field {e_\alpha} {(1 \le \alpha \le r)} is orthonormal at each point of {\gamma}. Write {s = s^\alpha e_\alpha} and {t = t^\alpha e_\alpha}. Then evaluated at {X},

\displaystyle d\langle s,t\rangle = d\sum_\alpha s^\alpha t^\alpha = \sum_\alpha ds^\alpha t^\alpha + \sum_\alpha s^\alpha dt^\alpha = \langle Ds, t \rangle + \langle s, Dt \rangle.

By applying {d \langle s, t \rangle = \langle Ds, t\rangle + \langle s, Dt \rangle} to {s, t} belonging to a local orthonormal basis, we conclude that compatibility with the metric is equivalent to skew-symmetry of the connection matrix {\omega} with respect to an orthonormal basis.
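This skew-symmetry can be seen concretely for the round 2-sphere with the orthonormal frame {e_1 = {\partial\over{\partial\theta}}}, {e_2 = {1\over{\sin\theta}}{\partial\over{\partial\varphi}}}. The sketch below (Python with sympy) takes as input the standard Christoffel symbols of the sphere metric {d\theta^2 + \sin^2\theta\, d\varphi^2}, computes the connection matrix in this frame, and checks it is skew-symmetric.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
xs = [th, ph]

# Levi-Civita Christoffel symbols of g = diag(1, sin^2 theta); Gam[i][j][k] = Gamma^i_{jk}.
Gam = [[[0, 0], [0, -sp.sin(th)*sp.cos(th)]],
       [[0, sp.cos(th)/sp.sin(th)], [sp.cos(th)/sp.sin(th), 0]]]

frame = sp.Matrix([[1, 0], [0, 1/sp.sin(th)]])   # rows: e_alpha in the basis d/dx^i

def omega(a, b, k):
    """omega_a^b evaluated on d/dx^k: coefficient of e_b in nabla_{d/dx^k} e_a."""
    covd = [sp.diff(frame[a, i], xs[k]) +
            sum(Gam[i][j][k]*frame[a, j] for j in range(2)) for i in range(2)]
    coeffs = frame.T.inv() * sp.Matrix(covd)     # re-express in the frame e_beta
    return sp.simplify(coeffs[b])

for k in range(2):
    for a in range(2):
        for b in range(2):
            assert sp.simplify(omega(a, b, k) + omega(b, a, k)) == 0
```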

Now we come back to the tangent bundle {T_M} of {M}. An inner product along the fibers of {T_M} is simply a Riemannian metric. We introduce a connection so that we can differentiate sections of {T_M} and get sections again. Having a connection for {T_M} is equivalent to having a connection for the dual bundle {T_M^*} of {T_M}. A section of {T_M^*} is simply a 1-form. For a 1-form we always have the notion of exterior differentiation. We do not need any connection for exterior differentiation. The result of an exterior differentiation of a 1-form {s} is a 2-form {ds}. It is different from {Ds} coming from a connection, because {Ds} is a section of {T_M^* \otimes T_M^*} whereas {ds} is a section of {\wedge^2 T_M^*}. We would like to consider connections that relate {Ds} to {ds} in a natural way. A connection {\omega} is said to be torsion-free if the skew-symmetrization of {Ds} is {ds}. Suppose {e_\alpha} {(1 \le \alpha \le m)} is a local frame for {T_M} and {\omega^\alpha} {(1 \le \alpha \le m)} is its dual frame. Let {\omega_\alpha^\beta} with {De_\alpha = \omega_\alpha^\beta e_\beta} be the matrix valued 1-form of the connection. Then from the definition of the torsion-free condition we see that the connection is torsion-free if and only if {d\omega^\alpha + \omega_\beta^\alpha \wedge \omega^\beta = 0}.
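As an illustration, the structure equation {d\omega^\alpha + \omega_\beta^\alpha \wedge \omega^\beta = 0} can be verified for the round 2-sphere, whose orthonormal coframe is {\omega^1 = d\theta}, {\omega^2 = \sin\theta\, d\varphi} with connection forms {\omega_1^2 = \cos\theta\, d\varphi = -\omega_2^1}. The Python/sympy sketch below encodes 1-forms on a 2-manifold as coefficient pairs, so a 2-form is a single coefficient of {d\theta \wedge d\varphi}.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')

# A 1-form a_1 d(theta) + a_2 d(phi) is the pair (a_1, a_2).
def d(f):          # exterior derivative of a 1-form (coefficient of dtheta^dphi)
    return sp.diff(f[1], th) - sp.diff(f[0], ph)

def wedge(f, g):   # wedge of two 1-forms
    return f[0]*g[1] - f[1]*g[0]

coframe = [(1, 0), (0, sp.sin(th))]            # omega^1 = dtheta, omega^2 = sin(theta) dphi
conn = [[(0, 0), (0, sp.cos(th))],             # conn[a][b] = omega_a^b (0-indexed)
        [(0, -sp.cos(th)), (0, 0)]]

# Check d omega^a + omega_b^a ^ omega^b = 0 for a = 1, 2.
for a in range(2):
    lhs = d(coframe[a]) + sum(wedge(conn[b][a], coframe[b]) for b in range(2))
    assert sp.simplify(lhs) == 0
```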

Fundamental Theorem of Riemannian Geometry. There exists a unique torsion-free connection that is compatible with a Riemannian metric.

Proof. Let the Riemannian metric be given by {g_{ij}dx^idx^j}. We use {dx^i} as local basis for {T_M^*} and {{\partial\over{\partial x^i}}} as local basis for {T_M}. Write the connection {\omega} for {T_M} as {\omega_i^j = \Gamma_{ik}^j dx^k}. The coefficients {\Gamma_{ik}^j} are known as the Christoffel symbols. So {D(dx^i) = -\omega_j^i \otimes dx^j = -\Gamma_{jk}^i dx^k \otimes dx^j}. The torsion-free condition is simply the symmetry of {\Gamma_{ik}^j} in {i} and {k}. Compatibility with the Riemannian metric means

\displaystyle dg_{ij} = d\left\langle{{\partial\over{\partial x^i}}, {\partial\over{\partial x^j}}}\right\rangle = \left\langle{D{\partial\over{\partial x^i}}, {\partial\over{\partial x^j}}}\right\rangle + \left\langle{{\partial\over{\partial x^i}}, D{\partial\over{\partial x^j}}}\right\rangle

\displaystyle = \omega_i^k\left\langle{{\partial\over{\partial x^k}}, {\partial\over{\partial x^j}}}\right\rangle + \omega_j^k\left\langle{{\partial\over{\partial x^i}}, {\partial\over{\partial x^k}}}\right\rangle

\displaystyle = \omega_i^k g_{kj} + \omega_j^k g_{ik}.

So

\displaystyle {{\partial g_{ij}}\over{\partial x^\ell}} = \Gamma_{i\ell}^k g_{kj} + \Gamma_{j\ell}^k g_{ik}.

To simplify notations we let {\Gamma_{j,\, i\ell} = \Gamma_{i\ell}^k g_{kj}} and let {\partial_\ell = {\partial\over{\partial x^\ell}}}. The theorem is reduced to proving the existence and uniqueness of {\Gamma_{k,\, i\ell}} symmetric in {i} and {\ell} which satisfy {\partial_\ell g_{ij} = \Gamma_{j,\,i\ell} + \Gamma_{i,\,j\ell}}. This is a linear algebra problem. We have {{1\over2}m^2(m+1)} unknowns and as many equations. It is in general difficult to handle such a large system of linear equations. Fortunately this large system can be decoupled into smaller sets of three equations in three unknowns. For a fixed triple {i, j, \ell}, since {\Gamma_{k,\,i\ell}} is symmetric in {i} and {\ell}, by permuting {i, j, \ell} we have only three unknowns {\Gamma_{j,\,i\ell}}, {\Gamma_{i,\,j\ell}}, and {\Gamma_{\ell,\,ij}}. Such permutations generate from {\partial_\ell g_{ij} = \Gamma_{j,\,i\ell} + \Gamma_{i,\,j\ell}} three equations. We can now solve uniquely for our three unknowns from these three equations. The three equations are

\displaystyle \partial_\ell g_{ij} = \Gamma_{j,\,i\ell} + \Gamma_{i,\,j\ell},

\displaystyle \partial_i g_{j\ell} = \Gamma_{\ell,\,ji} + \Gamma_{j,\,\ell i},

\displaystyle \partial_j g_{\ell i} = \Gamma_{i,\,\ell j} + \Gamma_{\ell,\,ij}.

The usual way is to add up the three equations so that we know the sum of the three unknowns and then subtract from it the sum of two unknowns obtained from any of the three equations. This is equivalent to subtracting one equation from the sum of the other two. So we subtract the third equation from the sum of the first two equations and we get

\displaystyle 2\Gamma_{j,\,\ell i} = \partial_\ell g_{ij} + \partial_i g_{j\ell} - \partial_j g_{\ell i}.

The unique connection is now given by

\displaystyle \Gamma_{jk}^i = {1\over2}g^{i\ell} (\partial_j g_{k\ell} + \partial_k g_{j\ell} - \partial_\ell{g_{jk}}).

This unique connection is called the Levi-Civita connection or the Riemannian connection.

\square

We would like to introduce the invariant formulation of the torsion-free condition. In the literature the standard notation for the covariant differential operator is {\nabla} instead of {D}. We will use both {\nabla} and {D} interchangeably to denote the covariant differential operator. When expressed in terms of local coordinates, the torsion-free condition is given by the symmetry of the Christoffel symbols in the two covariant indices. The Christoffel symbols are given by {\nabla\left({\partial\over{\partial x^i}}\right) = \omega_i^j {\partial\over{\partial x^j}}} and {\omega_i^j = \Gamma_{ik}^j dx^k}. In other words, {\nabla_k\left({\partial\over{\partial x^i}}\right) = \Gamma_{ik}^j {\partial\over{\partial x^j}}}. The torsion-free condition is that {\nabla_k \left({\partial\over{\partial x^i}}\right)} is symmetric in {i} and {k}, i.e. {\nabla_X Y} is symmetric in {X} and {Y} when {X} and {Y} equal {{\partial\over{\partial x^k}}} and {{\partial\over{\partial x^i}}}. For general vector fields {X} and {Y} we cannot conclude the vanishing of {\nabla_X Y - \nabla_Y X} from the torsion-free condition, because for smooth functions {\varphi} and {\psi},

\displaystyle \nabla_{\varphi X} (\psi Y) - \nabla_{\psi Y}(\varphi X) = \varphi \psi(\nabla_X Y - \nabla_Y X) + \varphi(X \psi)Y - \psi(Y \varphi)X

and we cannot expect to get the general case by multiplying the special case {X = {\partial\over{\partial x^i}}}, {Y = {\partial\over{\partial x^j}}} by smooth functions and then summing up. The trouble is the discrepancy terms {\varphi(X\psi)Y - \psi(Y \varphi)X}. The Lie bracket {[X, Y]} yields the same discrepancy terms when {X} and {Y} are respectively multiplied by smooth functions {\varphi} and {\psi}, as follows.

\displaystyle [\varphi X, \psi Y] = \varphi \psi [X, Y] + \varphi(X\psi)Y - \psi(Y\varphi)X.

So we conclude that the torsion-free condition is equivalent to the vanishing of {\nabla_X Y - \nabla_Y X - [X, Y]} for any pair of tangent vectors {X} and {Y}. Moreover, we have seen that

\displaystyle \nabla_{\varphi X} (\psi Y) - \nabla_{\psi Y}(\varphi X) - [\varphi X, \psi Y] = \varphi \psi(\nabla_X Y - \nabla_Y X - [X, Y]).

When the torsion-free condition is not satisfied, we define {T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y]}. Then {T(\varphi X, \psi Y) = \varphi \psi T(X, Y)} and {T} is a tensor and is called the torsion tensor. In local coordinates write {T\left({\partial\over{\partial x^i}}, {\partial\over{\partial x^j}}\right) = T_{ij}^k {\partial\over{\partial x^k}}}; with our convention {\nabla_k\left({\partial\over{\partial x^i}}\right) = \Gamma_{ik}^j {\partial\over{\partial x^j}}} this gives {T_{ij}^k = \Gamma_{ji}^k - \Gamma_{ij}^k}.
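The tensoriality {T(\varphi X, \psi Y) = \varphi \psi T(X, Y)} can be checked symbolically. The sketch below (Python with sympy; the plane {\mathbb{R}^2}, the deliberately asymmetric connection, and the particular vector fields and functions are my own choices) follows the convention {\nabla_{\partial_k}\partial_j = \Gamma_{jk}^i \partial_i} used above.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = [x1, x2]

# An arbitrary connection on R^2, NOT symmetric in the lower indices,
# so it has torsion.  Gam[i][j][k] = Gamma^i_{jk}.
Gam = [[[x2, x1], [1, 0]], [[0, x1*x2], [sp.sin(x1), 2]]]

def nabla(X, Y):
    # nabla_X Y, component i = X^k d_k Y^i + Gamma^i_{jk} Y^j X^k
    return [sum(X[k]*sp.diff(Y[i], xs[k]) for k in range(2)) +
            sum(Gam[i][j][k]*Y[j]*X[k] for j in range(2) for k in range(2))
            for i in range(2)]

def bracket(X, Y):
    return [sum(X[k]*sp.diff(Y[i], xs[k]) - Y[k]*sp.diff(X[i], xs[k]) for k in range(2))
            for i in range(2)]

def T(X, Y):
    n1, n2, b = nabla(X, Y), nabla(Y, X), bracket(X, Y)
    return [sp.simplify(n1[i] - n2[i] - b[i]) for i in range(2)]

X = [x1**2, sp.cos(x2)]
Y = [x2, sp.exp(x1)]
phi, psi = sp.sin(x1 + x2), x1*x2

left = T([phi*c for c in X], [psi*c for c in Y])
right = [sp.simplify(phi*psi*c) for c in T(X, Y)]
assert all(sp.simplify(l - r) == 0 for l, r in zip(left, right))
```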

We give here a geometric interpretation of torsion. Take a point {P} of {M} and two tangent vectors {X} and {Y} at {P}. We take a curve {\gamma_X} going through {P} whose tangent at {P} is {X}. We parallel transport {Y} for a distance {s} along {\gamma_X} and get a tangent vector {Y'} of {M} at {\gamma_X(s)}. Take a curve {\gamma_Y} through {\gamma_X(s)} whose tangent at {\gamma_X(s)} is {Y'}. Let {Q} be the point {\gamma_Y(t)}. Now reverse the roles of {X} and {Y}. We take a curve {\sigma_Y} going through {P} whose tangent at {P} is {Y}. We parallel transport {X} for a distance {t} along {\sigma_Y} and get a tangent vector {X'} of {M} at {\sigma_Y(t)}. Take a curve {\sigma_X} through {\sigma_Y(t)} whose tangent at {\sigma_Y(t)} is {X'}. Let {R} be the point {\sigma_X(s)}. The limit of the vector {{1\over{st}}\overrightarrow{QR}} (in any coordinate system) as {s} and {t} approach zero defines a tangent vector of {M} which is independent of the choice of the curves by order considerations. We claim that this tangent vector is {T(X, Y)}. Let us now verify. We choose a local coordinate system {x^i} {(1 \le i \le m)} at {P} so that {P} corresponds to the origin. Let {X^i} and {Y^i} be respectively the components of {X} and {Y} in terms of this local coordinate system. We choose as our curve {\gamma_X} the curve defined by the equations {\gamma_X^i(s) = X^i s}. The equation for parallel transport of {Y} along {\gamma_X} is {{d\over{ds}}Y^i + \Gamma_{jk}^i Y^j X^k = 0}. Hence {Y'^i \approx Y^i - \Gamma_{jk}^i Y^j X^k s}. Here higher orders are ignored in the computation. We can take as {\gamma_Y} the curve defined by the equations {\gamma_Y^i(t) = X^is + Y'^it}. Hence {x^i(Q) \approx X^i s + Y^i t - \Gamma_{jk}^i Y^j X^kst}. Likewise we get {x^i(R) \approx Y^i t + X^i s - \Gamma_{jk}^i X^j Y^k st} by interchanging the roles of {X}, {s} and {Y}, {t}. Thus {{1\over{st}}\left(x^i(R) - x^i(Q)\right) \approx \Gamma_{jk}^i Y^j X^k - \Gamma_{jk}^i X^j Y^k = T(X, Y)^i}.
