Gradient, Divergence, Curl, Laplacian

We give here, as an example, the method for calculating the gradient, divergence, curl and Laplacian operators in curvilinear coordinates.
We apply the method to the case of spherical coordinates, but the procedure is general.

Initialization

Needs["TensorCalculus4`Tensorial`"]

Needs["TContinuumMechanics2`TContinuumMechanics`"]

numequ = 1 ;

Flavors allow us to distinguish between the various bases.
A change of flavor is performed with ToFlavor[new, old]. ToFlavor[bar] is equivalent to ToFlavor[bar, Identity].
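For example, once the flavors and shortcuts below are declared, an unflavored tensor can be pushed to the red basis in postfix form, the same pattern used throughout this notebook (the tensor gdd here is only an illustration):

gdd[i, j] // ToFlavor[red]

which returns the metric with its plain (unflavored) indices i, j turned into red ones.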

DeclareIndexFlavor[{black, Black}, {red, Red}, {hat, OverHat}]

DeclareBaseIndices[{1, 2, 3}] ;

Curvilinear Coordinates

DefineTensorShortcuts[{{, , u, v, w, x}, 1}, {{g, δ, β}, 2}] ;

TensorSymmetry[Γ, 3] = Symmetric[2, 3] ;

We consider a general basis with a red flavor; in the displayed results below the basis vectors are written \mathbf{e}, red indices carry a prime and hat indices a circumflex :

d[red @ i]

d[red @ i] == PartialD[xu[j], red @ i] d[j]

\mathbf{e}_{i'}

\mathbf{e}_{i'} == x^{j}{}_{,\,i'}\;\mathbf{e}_{j}

where \mathbf{e}_{j} is a canonical orthonormal basis.

SetTensorValueRules[d[i], IdentityMatrix[3]]

SetTensorValueRules[gdd[i, j], IdentityMatrix[3]]

SetTensorValues[gdd[i, j], {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}]

basis = d[red @ i] == PartialD[{x, δ, g, Γ}][Tensor[], xu[red @ i]]

_i^i == ∂/∂x_i^i

This red basis is completely determined by its metric tensor g_{i'j'} :

d[i] . d[j]//ToFlavor[red]//EvaluateDotProducts[, g]

g_{i'j'}

Example : Spherical coordinates

\{x^{1'} \to r,\quad x^{2'} \to \theta,\quad x^{3'} \to \varphi\}

CartesianToSpherical =  {xu[1] →xu[red @ 1] Sin[xu[red @ 2]] Cos[xu[red @ 3]], xu[2] →xu[red @ 1] Sin[xu[red @ 2]] Sin[xu[red @ 3]], xu[3] →xu[red @ 1] Cos[xu[red @ 2]]} ;
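As a quick cross-check of the metric obtained below in (A 1), the same matrix can be computed with built-in Mathematica alone, outside the Tensorial package; the plain symbols r, θ, φ stand here for the flavored coordinates xu[red @ 1], xu[red @ 2], xu[red @ 3] :

cart = {r Sin[θ] Cos[φ], r Sin[θ] Sin[φ], r Cos[θ]} ;

jac = D[cart, {{r, θ, φ}}] ;     (* Jacobian ∂x^k/∂x^i' of the map above *)

Simplify[Transpose[jac].jac]     (* {{1, 0, 0}, {0, r^2, 0}, {0, 0, r^2 Sin[θ]^2}} *)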

RuleRedToBlack = res/.Equal→Rule ;

(RedBasisVectors = {d[red @ 1], d[red @ 2], d[red @ 3]}/.RuleRedToBlack/.TensorValueRules[, g])//MatrixForm

SetTensorValueRules[d[red @ i], RedBasisVectors]

SetTensorValueRules[gdd[red @ i, red @ j], MetricValues]

SetTensorValueRules[guu[red @ i, red @ j], Inverse[MetricValues]]

      g_{i'j'} == \begin{pmatrix} 1 & 0 & 0 \\ 0 & (x^{1'})^{2} & 0 \\ 0 & 0 & (x^{1'})^{2}\sin^{2}x^{2'} \end{pmatrix}       (A 1)

TensorValueRules[, g]

The red basis is generally not normalized (and sometimes not even orthogonal, in which case we must first diagonalize g).
So we choose to define an orthonormal "hat" basis (g_{\hat{k}\hat{h}} is the identity matrix) as follows :

SetTensorValues[gdd[hat @ i, hat @ j], IdentityMatrix[3]]

hatdown[i_, k_] = d[hat @ i] == βud[red @ k, hat @ i] d[red @ k]

reddown[i_, k_] = d[red @ i] == βud[hat @ k, red @ i] d[hat @ k]

hatup[i_, k_] = u[hat @ i] == βud[hat @ i, red @ k] u[red @ k]

redup[i_, k_] = u[red @ i] == βud[red @ i, hat @ k] u[hat @ k]

_Overscript[i,^]^Overscript[i,^] == _k^k β_ (kOverscript[i,^])^(kOverscript[i,^])

_i^i == _Overscript[k,^]^Overscript[k,^] β_ (Overscript[k,^] i)^(Overscript[k,^] i)

_Overscript[i,^]^Overscript[i,^] == _k^k β_ (Overscript[i,^] k)^(Overscript[i,^] k)

_i^i == _Overscript[k,^]^Overscript[k,^] β_ (iOverscript[k,^])^(iOverscript[k,^])

and we have the relation (where g_{\hat{k}\hat{h}} is chosen to be the identity matrix) :

resββ = %//EinsteinSum[]//ArrayExpansion[red @ i, red @ j] ;

g_{i'j'} == g_{\hat{k}\hat{h}}\;\beta^{\hat{h}}{}_{j'}\;\beta^{\hat{k}}{}_{i'}

This allows us to calculate \beta^{\hat{k}}{}_{i'} as the square root of g_{i'j'} :

(res = (gdd[red @ i, red @ j]//ArrayExpansion[red @ i, red @ j])/.TensorValueRules[, g])//MatrixForm

ges = Eigensystem[res] ;

sqrtg = Transpose[ges[[2]]] . DiagonalMatrix[ges[[1]]^(1/2)//PowerExpand] . Inverse[Transpose[ges[[2]]]]//Simplify ;

SetTensorValueRules[βud[hat @ i, red @ j], sqrtg]

SetTensorValueRules[βud[red @ i, hat @ j], Inverse[sqrtg]]

TensorValueRules[β]

Table[βud[hat @ i, red @ j], {i, 3}, {j, 3}]/.%//MatrixForm

g_{i'j'} == \begin{pmatrix} 1 & 0 & 0 \\ 0 & (x^{1'})^{2} & 0 \\ 0 & 0 & (x^{1'})^{2}\sin^{2}x^{2'} \end{pmatrix}

\beta^{\hat{i}}{}_{j'} == \begin{pmatrix} 1 & 0 & 0 \\ 0 & x^{1'} & 0 \\ 0 & 0 & x^{1'}\sin x^{2'} \end{pmatrix}

We verify that this is correct :

resββ//MatrixForm

%/.TensorValueRules[, g]/.TensorValueRules[β]//MatrixForm

( {{True, True, True}, {True, True, True}, {True, True, True}} )
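Another way to see that these values are consistent: β is a matrix square root of the metric, so squaring it must give back the matrix of (A 1). A minimal check with built-in Mathematica only (plain symbols r, θ standing for x^1', x^2'; βmat is a hypothetical name chosen so as not to clash with the tensor label β) :

metric = DiagonalMatrix[{1, r^2, r^2 Sin[θ]^2}] ;

βmat = DiagonalMatrix[{1, r, r Sin[θ]}] ;

Simplify[βmat.βmat == metric]     (* True *)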

(hatbasis = (hatdown[i, j]//EinsteinSum[]//ArrayExpansion[hat @ i])/.TensorValueRules[, g]/.TensorValueRules[β])//TableForm

Print["Or in {θ,φ} coordinates (orthonormality implies r=1)"]

%%/.TensorValueRules[x]//TableForm

RuleHat = (hatbasis/.Equal→Rule) ;

\mathbf{e}_{\hat{1}} == \{\sin x^{2'}\cos x^{3'},\ \sin x^{2'}\sin x^{3'},\ \cos x^{2'}\}
\mathbf{e}_{\hat{2}} == \{\cos x^{2'}\cos x^{3'},\ \cos x^{2'}\sin x^{3'},\ -\sin x^{2'}\}
\mathbf{e}_{\hat{3}} == \{-\sin x^{3'},\ \cos x^{3'},\ 0\}

Or in {θ,φ} coordinates (orthonormality implies r=1)

\mathbf{e}_{\hat{1}} == \{\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta\}
\mathbf{e}_{\hat{2}} == \{\cos\theta\cos\varphi,\ \cos\theta\sin\varphi,\ -\sin\theta\}
\mathbf{e}_{\hat{3}} == \{-\sin\varphi,\ \cos\varphi,\ 0\}

and we can check here that the hat basis is orthonormal and that \mathbf{e}^{\hat{i}} == \mathbf{e}_{\hat{i}},

(d[hat @ i] . d[hat @ j]//EvaluateDotProducts[, g]) == ((d[hat @ i] . d[hat @ j]//ArrayExpansion[hat @ i, hat @ j])/.RuleHat//Simplify//MatrixForm)

g_{\hat{i}\hat{j}} == \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
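The same orthonormality check can be reproduced without the package, directly from the Cartesian components listed above (plain symbols θ, φ assumed) :

er = {Sin[θ] Cos[φ], Sin[θ] Sin[φ], Cos[θ]} ;

eθ = {Cos[θ] Cos[φ], Cos[θ] Sin[φ], -Sin[θ]} ;

eφ = {-Sin[φ], Cos[φ], 0} ;

Simplify[Outer[Dot, {er, eθ, eφ}, {er, eθ, eφ}, 1]]     (* IdentityMatrix[3] *)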

Gradient of a potential

Gradient of a potential U in curvilinear coordinates :

"grad[U]" == (Tensor[U]//TGrad[, i]//ToFlavor[red])

%/.(redup[i, j]/.Equal→Rule)

ResultFrame[(%//EinsteinSum[])/.TensorValueRules[β]/.TensorValueRules[x]/.SphericalIndices]

grad[U] == U_{,\,i'}\;\mathbf{e}^{i'}

grad[U] == U_{,\,i'}\;\beta^{i'}{}_{\hat{j}}\;\mathbf{e}^{\hat{j}}

Curl of a vector

Curl of a vector V in curvilinear coordinates :

Let the vector \mathbf{v} be defined by its contravariant components,

  := vu[i] d[i]

"curl[]" == TCurl[, g, {i, j, k}, e][]//ToFlavor[red]//LeviCivitaSimplify[e, ε][g, red]

%[[1]] == (%[[2]]/.vd[red @ i_] :> Tensor[vd[hat @ h] βud[hat @ h, red @ i]]/.d[red @ i_] :> d[hat @ l] βud[hat @ l, red @ i]) ;

%[[1]] == (EinsteinSum[]//@(%[[2]]))/.TensorValueRules[β]//PartialDSimplify

res = %[[1]] == Collect[%[[2]], {d[hat @ 1], d[hat @ 2], d[hat @ 3]}]

curl[] == (v_j^j_ (; i) _k^k ε_ (ijk)^(ijk))/(g)^1/2

ResultFrame[res[[1]] == Collect[(res[[2]]/.TensorValueRules[x]/.SphericalIndices), {d[r], d[θ], d[φ]}, Simplify]//PartialDToDif]
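The framed result should agree with the standard curl in spherical coordinates, with v_r, v_θ, v_φ denoting the hat (physical) components :

curl[\mathbf{v}] = \frac{1}{r\sin\theta}\left[\frac{\partial(\sin\theta\,v_{\varphi})}{\partial\theta} - \frac{\partial v_{\theta}}{\partial\varphi}\right]\hat{\mathbf{e}}_{r} + \left[\frac{1}{r\sin\theta}\,\frac{\partial v_{r}}{\partial\varphi} - \frac{1}{r}\,\frac{\partial(r\,v_{\varphi})}{\partial r}\right]\hat{\mathbf{e}}_{\theta} + \frac{1}{r}\left[\frac{\partial(r\,v_{\theta})}{\partial r} - \frac{\partial v_{r}}{\partial\theta}\right]\hat{\mathbf{e}}_{\varphi}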

Divergence of a vector

Divergence of a vector in spherical coordinates :

div[i_] = "div[]" == TDiv[, g, i][]

div[] == v_i^i_ (; i)

div[i]//ToFlavor[red]

divred = %[[1]] == (%[[2]]//ExpandCovariantD[{x, δ, g, Γ}, red @ k]//PartialDToDif)

div[] == v_i^i_ (; i)

div[] == v_i^i_ (, i) + v_k^k Γ_ (iik)^(iik)

0 == (CovariantD[gdd[i, j], k]//ExpandCovariantD[{x, δ, g, Γ}, h]//PartialDToDif)

0 == (guu[i, j] %[[2]]//Expand)

%//MetricSimplify[g]//TensorSimplify

ruleΓ = Solve[%, %[[2, 2, 2]]][[1]]//ToFlavor[red]

resdiv = divred/.%

0 == g_{ij,\,k} - g_{hj}\,\Gamma^{h}{}_{ki} - g_{ih}\,\Gamma^{h}{}_{kj}

0 == g^{ij}\,g_{ij,\,k} - g^{ij}\,g_{hj}\,\Gamma^{h}{}_{ki} - g^{ij}\,g_{ih}\,\Gamma^{h}{}_{kj}

0 == g^{ij}\,g_{ij,\,k} - 2\,\Gamma^{i}{}_{ik}

\{\Gamma^{i'}{}_{i'k'} \to \tfrac{1}{2}\,g^{i'j'}\,g_{i'j',\,k'}\}

div[\mathbf{v}] == v^{i'}{}_{,\,i'} + \tfrac{1}{2}\,g^{i'j'}\,g_{i'j',\,k'}\;v^{k'}

Derivative of the determinant of the metric tensor :

res1 = ruleΓ[[1, 2]] ; Print["As shown here, there are two equivalent expressions for the coefficient of ", vu[red @ k]]

res2 = (1/2) PartialD[Tensor[red @ g], red @ k]/Tensor[red @ g] ;

Tensor[red @ g] = Det[gdd[red @ i, red @ j]//ArrayExpansion[red @ i, red @ j]] ;

res2val = res2/.TensorValueRules[g]//Expand ;

(res1//ToFlavor[red]//EinsteinSum[]//Expand)/.TensorValueRules[g] ;

% == %%

Tensor[red @ g] =.

res3 = res2/.Tensor[red[g]] →Tensor[X] Tensor[Y]/.{X→Tensor[red @ g]^(1/2), Y→Tensor[red @ g]^(1/2)} ;

res1 == res2 == res3

As shown here, there are two equivalent expressions for the coefficient of v^{k'}

True

\tfrac{1}{2}\,g^{i'j'}\,g_{i'j',\,k'} == \frac{g_{,\,k'}}{2\,g} == \frac{(\sqrt{g})_{,\,k'}}{\sqrt{g}}
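The first equality is Jacobi's formula for the derivative of a determinant; the second follows from the chain rule :

\frac{\partial g}{\partial x^{k'}} = g\,g^{i'j'}\,\frac{\partial g_{i'j'}}{\partial x^{k'}}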

divred = resdiv/.res1→res3/.k→i

\frac{(\sqrt{g}\;v^{i'})_{,\,i'}}{\sqrt{g}} == v^{i'}{}_{,\,i'} + \frac{(\sqrt{g})_{,\,i'}\;v^{i'}}{\sqrt{g}} //ReleaseHoldD//FullSimplify

div[\mathbf{v}] == v^{i'}{}_{,\,i'} + \frac{(\sqrt{g})_{,\,i'}\;v^{i'}}{\sqrt{g}}

True

ResultFrame["div[]" == HoldOp[PartialD][PartialD[(g)^1/2 v_i^i, red @ i]]/(g)^1/2]

      div[] == ((g)^1/2 v_i^i) _ (, i)/(g)^1/2      (A 4)

Volume element \sqrt{g} :

Tensor[red @ g]^(1/2) == (vol = Tensor[red @ g]^(1/2)//VolumeSquare[red @ g]//PowerExpand)

\sqrt{g} == (x^{1'})^{2}\,\sin x^{2'}

res0 = (divred/.vu[red @ i_] :→vu[hat @ h] βud[red @ i, hat @ h]//EinsteinSum[])/.TensorValueRules[β]

res = res0/.Tensor[red @ g]^(1/2) →vol/.(1/Tensor[red @ g]^(1/2) →1/vol)//DifToPartialD[{x, δ, g, Γ}]//UnnestTensor//Kronecker[δ]//PartialDSimplify

res == (res1//UnnestTensor//Kronecker[δ]//ExpandAll)

True

ResultFrame[res1[[1]] == Collect[(res1[[2]]/.TensorValueRules[x]/.SphericalIndices), {d[r], d[θ], d[φ]}]//PartialDToDif]
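The framed result should agree with the standard divergence in spherical coordinates, written with the hat (physical) components :

div[\mathbf{v}] = \frac{1}{r^{2}}\,\frac{\partial(r^{2}\,v_{r})}{\partial r} + \frac{1}{r\sin\theta}\,\frac{\partial(\sin\theta\,v_{\theta})}{\partial\theta} + \frac{1}{r\sin\theta}\,\frac{\partial v_{\varphi}}{\partial\varphi}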

Laplacian of a scalar field

Laplacian of a scalar field in spherical coordinates :

"Δ[U]" == TLaplacian[, g, red @ j, red @ i][Tensor[U]]

Δ[U] == U_{;\,i'j'}\;g^{i'j'}

"Δ[U]" == (TLaplacian[, g, red @ j, red @ i][Tensor[U]]//ExpandCovariantD[{x, δ, g, Γ}, red @ k]//PartialDToDif)

Δ[U] == U_{;\,i'j'}\;g^{i'j'}

Or also, from the expression (A 4) of the divergence obtained above,

ResultFrame[lapl[U] = "Δ[U]" == divred[[2]]/.vu[red @ i_] →guu[red @ i, red @ j] PartialD[Tensor[U], red @ j]]

Which can also be written in a more compact form :

res2 == (res//UnnestTensor//Kronecker[δ]//Expand)

\frac{\csc^{2}x^{2'}}{(x^{1'})^{2}}\,\frac{\partial^{2}U}{\partial x^{3'}\,\partial x^{3'}} + \frac{1}{(x^{1'})^{2}}\,\frac{\partial}{\partial x^{1'}}\!\left((x^{1'})^{2}\,\frac{\partial U}{\partial x^{1'}}\right) + \frac{\csc x^{2'}}{(x^{1'})^{2}}\,\frac{\partial}{\partial x^{2'}}\!\left(\sin x^{2'}\,\frac{\partial U}{\partial x^{2'}}\right)

True
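The substitution v^{i'} \to g^{i'j'}\,U_{,\,j'} used above turns (A 4) into the general Laplace–Beltrami form of the Laplacian :

Δ[U] = \frac{1}{\sqrt{g}}\,\partial_{i'}\!\left(\sqrt{g}\,g^{i'j'}\,\partial_{j'}U\right)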

ResultFrame["Δ[U]" == (laplU = (res/.TensorValueRules[x]/.SphericalIndices)//PartialDToDif//Expand)]
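The framed result is the Laplacian in spherical coordinates,

Δ[U] = \frac{1}{r^{2}}\,\frac{\partial}{\partial r}\!\left(r^{2}\,\frac{\partial U}{\partial r}\right) + \frac{1}{r^{2}\sin\theta}\,\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\frac{\partial U}{\partial\theta}\right) + \frac{1}{r^{2}\sin^{2}\theta}\,\frac{\partial^{2}U}{\partial\varphi^{2}}

and it can be cross-checked with the built-in coordinate-aware operators (available in Mathematica 9 and later, hence not part of this 2007 notebook) :

Laplacian[U[r, θ, φ], {r, θ, φ}, "Spherical"] // Expand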

ClearTensorShortcuts[{{, , u, v, w, x}, 1}, {{g, δ, β}, 2}] ;


Created by Mathematica (November 27, 2007)