Stokes' Theorem

Stokes' theorem, also known as the Kelvin–Stokes theorem [2] [3] after Lord Kelvin and George Stokes, the fundamental theorem for curls, or simply the curl theorem, [4] is a theorem in vector calculus on R3. Given a vector field, the theorem relates the integral of the curl of the vector field over some surface to the line integral of the vector field around the boundary of that surface. The classical theorem of Stokes can be stated in one sentence: the line integral of a vector field over a loop is equal to the flux of its curl through the enclosed surface.

Stokes' theorem is a special case of the generalized Stokes theorem. [5] [6] In particular, a vector field on R3 can be regarded as a 1-form, in which case its curl is its exterior derivative, a 2-form.

Theorem


Let Σ be a smooth oriented surface in R3 with boundary ∂Σ. If a vector field A = (P(x, y, z), Q(x, y, z), R(x, y, z)) is defined and has continuous first-order partial derivatives in a region containing Σ, then

{\displaystyle \iint _{\Sigma }(\nabla \times \mathbf {A} )\cdot \mathrm {d} \mathbf {a} =\oint _{\partial \Sigma }\mathbf {A} \cdot \mathrm {d} \mathbf {l} .}
{\displaystyle {\begin{aligned}&\iint _{\Sigma }\left(\left({\frac {\partial R}{\partial y}}-{\frac {\partial Q}{\partial z}}\right)\,\mathrm {d} y\,\mathrm {d} z+\left({\frac {\partial P}{\partial z}}-{\frac {\partial R}{\partial x}}\right)\,\mathrm {d} z\,\mathrm {d} x+\left({\frac {\partial Q}{\partial x}}-{\frac {\partial P}{\partial y}}\right)\,\mathrm {d} x\,\mathrm {d} y\right)\\&=\oint _{\partial \Sigma }{\Bigl (}P\,\mathrm {d} x+Q\,\mathrm {d} y+R\,\mathrm {d} z{\Bigr )}.\end{aligned}}}
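The identity can be checked on a concrete example with SymPy. The sketch below uses the upper unit hemisphere for Σ and an arbitrarily chosen polynomial field; both choices are illustrative assumptions, not part of the theorem:

```python
import sympy as sp

u, v, t = sp.symbols('u v t')
x, y, z = sp.symbols('x y z')

# a sample field A = (P, Q, R), chosen here purely for illustration
A = sp.Matrix([-y, x**2, y*z])

# curl A from the component formula above
curl = sp.Matrix([
    sp.diff(A[2], y) - sp.diff(A[1], z),
    sp.diff(A[0], z) - sp.diff(A[2], x),
    sp.diff(A[1], x) - sp.diff(A[0], y),
])

# Sigma: the upper unit hemisphere, parametrized by psi(u, v)
psi = sp.Matrix([sp.sin(u)*sp.cos(v), sp.sin(u)*sp.sin(v), sp.cos(u)])
normal = psi.diff(u).cross(psi.diff(v))      # psi_u x psi_v, the outward normal

on_surface = {x: psi[0], y: psi[1], z: psi[2]}
flux = sp.integrate(sp.integrate(
    curl.subs(on_surface).dot(normal), (v, 0, 2*sp.pi)), (u, 0, sp.pi/2))

# boundary of Sigma: the unit circle in the plane z = 0, counterclockwise seen from +z
gamma = sp.Matrix([sp.cos(t), sp.sin(t), 0])
circ = sp.integrate(
    A.subs({x: gamma[0], y: gamma[1], z: gamma[2]}).dot(gamma.diff(t)),
    (t, 0, 2*sp.pi))

print(flux, circ)
```

For this field both sides evaluate to π, as the theorem requires; the orientation of the boundary circle matches the outward normal of the hemisphere.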

The main challenge in a precise statement of Stokes' theorem is in defining the notion of a boundary. Surfaces such as the Koch snowflake, for example, are well known not to exhibit a Riemann-integrable boundary, and the notion of surface measure in Lebesgue theory cannot be defined for a non-Lipschitz surface. One (advanced) technique is to pass to a weak formulation and then apply the machinery of geometric measure theory; for that approach see the coarea formula. In this article, we instead use a more elementary definition, based on the fact that a boundary can be discerned for full-dimensional subsets of R2. Let γ : [a, b] → R2 be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that γ divides R2 into two components, one that is compact and another that is non-compact. Let D denote the compact part; then D is bounded by γ. It now suffices to transfer this notion of boundary along a continuous map to our surface in R3. But we already have such a map: the parametrization of Σ. Suppose ψ : D → R3 is smooth, with Σ = ψ(D). If Γ is the space curve defined by Γ(t) = ψ(γ(t)), [note 1] then we call Γ the boundary of Σ, written ∂Σ. With the above notation, if F is any smooth vector field on R3, then

{\displaystyle \oint _{\partial \Sigma }\mathbf {F} \,\cdot \,\mathrm {d} {\mathbf {\Gamma } }=\iint _{\Sigma }\nabla \times \mathbf {F} \,\cdot \,\mathrm {d} \mathbf {S} .}

Proof

The proof of the theorem consists of 4 steps. We assume Green's theorem, so what is of concern is how to reduce the three-dimensional complicated problem (Stokes' theorem) to a two-dimensional rudimentary problem (Green's theorem). [9] When proving this theorem, mathematicians normally deduce it as a special case of a more general result, stated in terms of differential forms and proved using more sophisticated machinery. While powerful, these techniques require substantial background, so the proof below avoids them and does not presuppose any knowledge beyond a familiarity with basic vector calculus. [8] At the end of this section, a short alternative proof of Stokes' theorem is given, as a corollary of the generalized Stokes theorem.

Elementary proof

First step of the proof (parametrizing the integral)

As in § Theorem, we reduce the dimension by using the natural parametrization of the surface. Let ψ and γ be as in that section, and note that by change of variables

{\displaystyle \oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,d\mathbf {l} }=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot \,d{\boldsymbol {\psi }}(\mathbf {y} )}=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))J_{\mathbf {y} }({\boldsymbol {\psi }})\,d\mathbf {y} }}

where Jy(ψ) denotes the Jacobian matrix of ψ at y = γ(t).

Now let {eu, ev} be an orthonormal basis in the coordinate directions of R2. Recognizing that the columns of Jy(ψ) are precisely the partial derivatives of ψ at y, we can expand the previous equation in coordinates as

{\displaystyle {\begin{aligned}\oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,d\mathbf {l} }&=\oint _{\gamma }{\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))J_{\mathbf {y} }({\boldsymbol {\psi }})\mathbf {e} _{u}(\mathbf {e} _{u}\cdot \,d\mathbf {y} )+\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))J_{\mathbf {y} }({\boldsymbol {\psi }})\mathbf {e} _{v}(\mathbf {e} _{v}\cdot \,d\mathbf {y} )}\\&=\oint _{\gamma }{\left(\left(\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(\mathbf {y} )\right)\mathbf {e} _{u}+\left(\mathbf {F} ({\boldsymbol {\psi }}(\mathbf {y} ))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(\mathbf {y} )\right)\mathbf {e} _{v}\right)\cdot \,d\mathbf {y} }\end{aligned}}}

Second step of the proof (defining the pullback)

The previous step suggests we define the function

{\displaystyle \mathbf {P} (u,v)=\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)\right)\mathbf {e} _{u}+\left(\mathbf {F} ({\boldsymbol {\psi }}(u,v))\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\right)\mathbf {e} _{v}}

This is the pullback of F along ψ, and, by the above, it satisfies

{\displaystyle \oint _{\partial \Sigma }{\mathbf {F} (\mathbf {x} )\cdot \,d\mathbf {l} }=\oint _{\gamma }{\mathbf {P} (\mathbf {y} )\cdot \,d\mathbf {l} }}
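Since Γ = ψ ∘ γ, this equality of line integrals can be verified symbolically for a concrete parametrization. In the sketch below, ψ describes a paraboloid cap over the unit disk and F is an arbitrary sample field (both illustrative assumptions):

```python
import sympy as sp

u, v, t = sp.symbols('u v t')
x, y, z = sp.symbols('x y z')

F = sp.Matrix([-y, x**2, y*z])                # sample field, chosen for illustration
psi = sp.Matrix([u, v, 1 - u**2 - v**2])      # Sigma: paraboloid cap over the unit disk D

Fpsi = F.subs({x: psi[0], y: psi[1], z: psi[2]})
# components of the pullback P from the display above
P1 = Fpsi.dot(psi.diff(u))
P2 = Fpsi.dot(psi.diff(v))

# gamma: the boundary of D, the unit circle in the (u, v)-plane
gu, gv = sp.cos(t), sp.sin(t)
pullback_integrand = (P1.subs({u: gu, v: gv})*sp.diff(gu, t)
                      + P2.subs({u: gu, v: gv})*sp.diff(gv, t))

# Gamma(t) = psi(gamma(t)), the space curve bounding Sigma
Gamma = psi.subs({u: gu, v: gv})
line_integrand = F.subs({x: Gamma[0], y: Gamma[1], z: Gamma[2]}).dot(Gamma.diff(t))

lhs = sp.integrate(sp.simplify(line_integrand), (t, 0, 2*sp.pi))
rhs = sp.integrate(sp.simplify(pullback_integrand), (t, 0, 2*sp.pi))
print(lhs, rhs)
```

The two integrals agree, as the change-of-variables argument above predicts.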

We have successfully reduced one side of Stokes' theorem to a two-dimensional formula; we now turn to the other side.

Third step of the proof (second equation)

First, calculate the partial derivatives appearing in Green's theorem, via the product rule:

{\displaystyle {\begin{aligned}{\frac {\partial P_{1}}{\partial v}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial v}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}+(\mathbf {F} \circ {\boldsymbol {\psi }})\cdot {\frac {\partial ^{2}{\boldsymbol {\psi }}}{\partial v\,\partial u}}\\[5pt]{\frac {\partial P_{2}}{\partial u}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial u}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}+(\mathbf {F} \circ {\boldsymbol {\psi }})\cdot {\frac {\partial ^{2}{\boldsymbol {\psi }}}{\partial u\,\partial v}}\end{aligned}}}

Conveniently, the second term vanishes in the difference, by equality of mixed partials. Therefore,

{\displaystyle {\begin{aligned}{\frac {\partial P_{1}}{\partial v}}-{\frac {\partial P_{2}}{\partial u}}&={\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial v}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial u}}-{\frac {\partial (\mathbf {F} \circ {\boldsymbol {\psi }})}{\partial u}}\cdot {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\\[5pt]&={\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} ){\frac {\partial {\boldsymbol {\psi }}}{\partial v}}-{\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} ){\frac {\partial {\boldsymbol {\psi }}}{\partial u}}&&{\text{(chain rule)}}\\[5pt]&={\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\left(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} -{(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}^{\mathsf {T}}\right){\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\end{aligned}}}

But now consider the matrix in that quadratic form, that is,

{\displaystyle J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} -(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )^{\mathsf {T}}.}

We claim that this matrix in fact describes a cross product. Here the superscript T denotes the transpose of a matrix.

To be precise, let A = (Aij)ij be an arbitrary 3 × 3 matrix and let

{\displaystyle A=(A_{ij})_{ij}}
{\displaystyle \mathbf {a} ={\begin{bmatrix}A_{32}-A_{23}\\A_{13}-A_{31}\\A_{21}-A_{12}\end{bmatrix}}}

Note that x ↦ a × x is linear, so it is determined by its action on basis elements. But by direct calculation

{\displaystyle {\begin{aligned}\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{1}&={\begin{bmatrix}0\\a_{3}\\-a_{2}\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{1}\\\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{2}&={\begin{bmatrix}-a_{3}\\0\\a_{1}\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{2}\\\left(A-A^{\mathsf {T}}\right)\mathbf {e} _{3}&={\begin{bmatrix}a_{2}\\-a_{1}\\0\end{bmatrix}}=\mathbf {a} \times \mathbf {e} _{3}\end{aligned}}}
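This correspondence between the antisymmetric part of a matrix and a cross product is easy to spot-check numerically; the random matrix and vector below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))      # an arbitrary 3 x 3 matrix
x_vec = rng.standard_normal(3)       # an arbitrary vector

# the vector a assembled from A exactly as in the display above
a = np.array([A[2, 1] - A[1, 2],
              A[0, 2] - A[2, 0],
              A[1, 0] - A[0, 1]])

lhs = (A - A.T) @ x_vec
rhs = np.cross(a, x_vec)
print(np.allclose(lhs, rhs))   # True
```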

Thus (A − A^T)x = a × x for any x. Substituting Jψ(u,v)F for A, we obtain

{\displaystyle \left(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} -{(J_{{\boldsymbol {\psi }}(u,v)}\mathbf {F} )}^{\mathsf {T}}\right)\mathbf {x} =(\nabla \times \mathbf {F} )\times \mathbf {x} ,\quad {\text{for all}}\,\mathbf {x} \in \mathbb {R} ^{3}}

We can now recognize the difference of partials as a (scalar) triple product:

{\displaystyle {\begin{aligned}{\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}&=-{\frac {\partial {\boldsymbol {\psi }}}{\partial u}}\cdot \left((\nabla \times \mathbf {F} )\times {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}\right)\\&=\det {\begin{bmatrix}(\nabla \times \mathbf {F} )({\boldsymbol {\psi }}(u,v))&{\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)&{\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\end{bmatrix}}\end{aligned}}}

On the other hand, the definition of a surface integral also contains a triple product, the very same one!

{\displaystyle {\begin{aligned}\iint _{S}(\nabla \times \mathbf {F} )\cdot \,d^{2}\mathbf {S} &=\iint _{D}{(\nabla \times \mathbf {F} )({\boldsymbol {\psi }}(u,v))\cdot \left({\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)\times {\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\,du\,dv\right)}\\&=\iint _{D}\det {\begin{bmatrix}(\nabla \times \mathbf {F} )({\boldsymbol {\psi }}(u,v))&{\frac {\partial {\boldsymbol {\psi }}}{\partial u}}(u,v)&{\frac {\partial {\boldsymbol {\psi }}}{\partial v}}(u,v)\end{bmatrix}}\,du\,dv\end{aligned}}}

So, we get

{\displaystyle \iint _{S}(\nabla \times \mathbf {F} )\cdot \,d^{2}\mathbf {S} =\iint _{D}\left({\frac {\partial P_{2}}{\partial u}}-{\frac {\partial P_{1}}{\partial v}}\right)\,du\,dv}
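This identity can be confirmed end to end on a concrete surface. The paraboloid cap and sample field below are illustrative assumptions; both double integrals are computed over the unit disk D in polar coordinates:

```python
import sympy as sp

u, v, r, th = sp.symbols('u v r theta')
x, y, z = sp.symbols('x y z')

F = sp.Matrix([-y, x**2, y*z])              # sample field, chosen for illustration
psi = sp.Matrix([u, v, 1 - u**2 - v**2])    # paraboloid cap over the unit disk D

Fpsi = F.subs({x: psi[0], y: psi[1], z: psi[2]})
P1 = Fpsi.dot(psi.diff(u))                  # pullback components from step 2
P2 = Fpsi.dot(psi.diff(v))
green_integrand = sp.diff(P2, u) - sp.diff(P1, v)

curl = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)])
flux_integrand = curl.subs({x: psi[0], y: psi[1], z: psi[2]}) \
                     .dot(psi.diff(u).cross(psi.diff(v)))

# integrate both over D via (u, v) = (r cos theta, r sin theta), area element r dr dtheta
polar = {u: r*sp.cos(th), v: r*sp.sin(th)}
def disk_integral(f):
    return sp.integrate(sp.simplify(f.subs(polar)) * r, (r, 0, 1), (th, 0, 2*sp.pi))

green_side = disk_integral(green_integrand)
flux_side = disk_integral(flux_integrand)
print(green_side, flux_side)
```

For this choice the two integrands are in fact identical polynomials in u and v, which is exactly what step 3 asserts.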

Fourth step of the proof (reduction to Green's theorem)

Combining the second and third steps, and then applying Green’s theorem, completes the proof.

Proof via differential forms

Vector fields F : R3 → R3 can be identified with differential 1-forms on R3 by means of the map

{\displaystyle F_{1}\mathbf {e} _{1}+F_{2}\mathbf {e} _{2}+F_{3}\mathbf {e} _{3}\mapsto F_{1}\,dx+F_{2}\,dy+F_{3}dz}.

Write the differential 1-form associated to a vector field F as ωF. Then one can calculate that

{\displaystyle \star \omega _{\nabla \times \mathbf {F} }=d\omega _{\mathbf {F} }}

where ★ is the Hodge star and d is the exterior derivative. Thus, by the generalized Stokes theorem,

{\displaystyle \oint _{\partial \Sigma }{\mathbf {F} \cdot \,d\mathbf {l} }=\oint _{\partial \Sigma }{\omega _{\mathbf {F} }}=\int _{\Sigma }{d\omega _{\mathbf {F} }}=\int _{\Sigma }{\star \omega _{\nabla \times \mathbf {F} }}=\iint _{\Sigma }{\nabla \times \mathbf {F} \cdot \,d^{2}\mathbf {S} }}

Applications

In fluid dynamics

In this section we will discuss the lamellar vector field based on Stokes’ theorem.

Irrotational fields

Definition 2-1 (irrotational field). A smooth vector field F on an open U ⊆ R3 is irrotational (a lamellar vector field) if ∇ × F = 0.

If the domain of F is simply connected, then F is a conservative vector field.
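A SymPy sketch of this fact, for a hypothetical gradient field on the simply connected domain R3 (the field and the two paths are illustrative choices): an irrotational field does the same work along very different paths between the same endpoints.

```python
import sympy as sp

t = sp.symbols('t')
x, y, z = sp.symbols('x y z')

# sample field: F = grad(x**2*y + z**3), so it is irrotational by construction
F = sp.Matrix([2*x*y, x**2, 3*z**2])

curl = sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)])
assert curl == sp.zeros(3, 1)               # curl F = 0

def work(path):
    """Line integral of F along path(t), t in [0, 1]."""
    p = sp.Matrix(path)
    integrand = F.subs({x: p[0], y: p[1], z: p[2]}).dot(p.diff(t))
    return sp.integrate(integrand, (t, 0, 1))

# two different paths from (0, 0, 0) to (1, 1, 1)
straight = work([t, t, t])
curved = work([t, t**2, t**3])
print(straight, curved)   # 2 2
```

Both integrals equal g(1, 1, 1) − g(0, 0, 0) = 2 for the potential g(x, y, z) = x²y + z³, independent of the path taken.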

Helmholtz’s theorem

In this section, we will introduce a theorem that is derived from Stokes’ theorem and characterizes vortex-free vector fields. In fluid dynamics this is called the Helmholtz theorem.

Theorem 2-1 (Helmholtz's theorem in fluid dynamics). [5] [3] : 142  Let U ⊆ R3 be an open subset with a lamellar vector field F, and let c0, c1 : [0, 1] → U be piecewise smooth loops. If there is a function H : [0, 1] × [0, 1] → U such that

  • [TLH0] H is piecewise smooth,
  • [TLH1] H(t, 0) = c0(t) for all t ∈ [0, 1],
  • [TLH2] H(t, 1) = c1(t) for all t ∈ [0, 1],
  • [TLH3] H(0, s) = H(1, s) for all s ∈ [0, 1].

Then,

{\displaystyle \int _{c_{0}}\mathbf {F} \,dc_{0}=\int _{c_{1}}\mathbf {F} \,dc_{1}}
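Theorem 2-1 can be probed numerically. The field below (the standard vortex field, which is lamellar on R3 minus the z-axis) and the two loops are illustrative choices; both loops wind once around the axis and are tubular-homotopic in that domain, so their line integrals agree:

```python
import numpy as np

def line_integral(F, c, dc, n=100_000):
    """Numerically integrate F . dc over t in [0, 2*pi] by the midpoint rule."""
    t = (np.arange(n) + 0.5) * 2*np.pi / n
    vals = np.einsum('ij,ij->j', F(*c(t)), dc(t))
    return vals.sum() * 2*np.pi / n

# a lamellar (irrotational) field on R^3 minus the z-axis
F = lambda x, y, z: np.array([-y/(x**2 + y**2), x/(x**2 + y**2), np.zeros_like(z)])

# c0: unit circle at z = 0;  c1: an ellipse at z = 1, tubular-homotopic to c0
c0  = lambda t: (np.cos(t), np.sin(t), np.zeros_like(t))
dc0 = lambda t: np.array([-np.sin(t), np.cos(t), np.zeros_like(t)])
c1  = lambda t: (2*np.cos(t), 3*np.sin(t), np.ones_like(t))
dc1 = lambda t: np.array([-2*np.sin(t), 3*np.cos(t), np.zeros_like(t)])

I0, I1 = line_integral(F, c0, dc0), line_integral(F, c1, dc1)
print(I0, I1)   # both approximately 2*pi
```

Note that the common value 2π is not zero: F is lamellar but its domain is not simply connected, so Theorem 2-1 only forces the two integrals to be equal, not to vanish.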

Some textbooks, such as Lawrence, [5] call the relationship between c0 and c1 stated in theorem 2-1 "homotopic" and the function H : [0, 1] × [0, 1] → U a "homotopy between c0 and c1". However, "homotopic" and "homotopy" in the above sense are different from (stronger than) the typical definitions of these terms, which omit condition [TLH3]. So from now on we refer to a homotopy (homotope) in the sense of theorem 2-1 as a tubular homotopy (resp. tubular-homotopic).

Proof of the theorem

In the following, we abuse notation and use "+" for the concatenation of paths in the fundamental groupoid and "−" for reversing the orientation of a path.

Let D = [0, 1] × [0, 1], and split ∂D into four line segments γj:

{\displaystyle {\begin{aligned}\gamma _{1}:[0,1]\to D;\quad &\gamma _{1}(t)=(t,0)\\\gamma _{2}:[0,1]\to D;\quad &\gamma _{2}(s)=(1,s)\\\gamma _{3}:[0,1]\to D;\quad &\gamma _{3}(t)=(1-t,1)\\\gamma _{4}:[0,1]\to D;\quad &\gamma _{4}(s)=(0,1-s)\end{aligned}}}

So that

{\displaystyle \partial D=\gamma _{1}+\gamma _{2}+\gamma _{3}+\gamma _{4}}

From our assumption that c0 and c1 are piecewise smooth tubular-homotopic, there is a piecewise smooth tubular homotopy H : D → U. Define

{\displaystyle {\begin{aligned}\Gamma _{i}(t)&=H(\gamma _{i}(t))&&i=1,2,3,4\\\Gamma (t)&=H(\gamma (t))=(\Gamma _{1}\oplus \Gamma _{2}\oplus \Gamma _{3}\oplus \Gamma _{4})(t)\end{aligned}}}

Let S be the image of D under H. That

{\displaystyle \iint _{S}\nabla \times \mathbf {F} \,dS=\oint _{\Gamma }\mathbf {F} \,d\Gamma }

follows immediately from Stokes' theorem. F is lamellar, so the left side vanishes; i.e.,

{\displaystyle 0=\oint _{\Gamma }\mathbf {F} \,d\Gamma =\sum _{i=1}^{4}\oint _{\Gamma _{i}}\mathbf {F} \,d\Gamma }

As H is tubular, Γ2 = −Γ4. Thus the line integrals along Γ2(s) and Γ4(s) cancel, leaving

{\displaystyle 0=\oint _{\Gamma _{1}}\mathbf {F} \,d\Gamma +\oint _{\Gamma _{3}}\mathbf {F} \,d\Gamma }

On the other hand, c0 = Γ1 and c1 = −Γ3, so that the desired equality follows almost immediately.

Conservative forces

Helmholtz’s theorem gives an explanation for why the work done by a conservative force in changing the position of an object is path independent. First, we introduce Lemma 2-2, which is a corollary and a special case of Helmholtz’s theorem.

Lemma 2-2. [5] [6] Let U ⊆ R3 be an open subset with a lamellar vector field F, and let c0 : [0, 1] → U be a piecewise smooth loop. Fix a point p ∈ U; if there is a (tubular) homotopy H : [0, 1] × [0, 1] → U such that

  • [SC0] H is piecewise smooth,
  • [SC1] H(t, 0) = c0(t) for all t ∈ [0, 1],
  • [SC2] H(t, 1) = p for all t ∈ [0, 1],
  • [SC3] H(0, s) = H(1, s) = p for all s ∈ [0, 1].

Then,

{\displaystyle \int _{c_{0}}\mathbf {F} \,dc_{0}=0}

Lemma 2-2 follows from theorem 2-1. In Lemma 2-2, the existence of H satisfying [SC0] to [SC3] is crucial; if U is simply connected, such an H exists. The definition of a simply connected space is as follows:

Definition 2-2 (simply connected space). [5] [6] Let M ⊆ Rn be non-empty and path-connected. M is called simply connected if and only if, for any continuous loop c : [0, 1] → M, there exists a continuous tubular homotopy H : [0, 1] × [0, 1] → M from c to a fixed point p ∈ c; that is,

  • [SC0′] H is continuous,
  • [SC1] H(t, 0) = c(t) for all t ∈ [0, 1],
  • [SC2] H(t, 1) = p for all t ∈ [0, 1],
  • [SC3] H(0, s) = H(1, s) = p for all s ∈ [0, 1].

The claim that "for a conservative force, the work done in changing the position of an object is path independent" might seem to follow immediately. But recall that simple-connectedness only guarantees the existence of a continuous homotopy satisfying [SC1-3]; we seek a piecewise smooth homotopy satisfying those conditions instead.

However, the gap in regularity is resolved by the Whitney approximation theorem. [6] : 136,421  [11] We thus obtain the following theorem.

Theorem 2-2. [5] [6] Let U ⊆ R3 be open and simply connected with an irrotational vector field F. For all piecewise smooth loops c : [0, 1] → U,

{\displaystyle \int _{c}\mathbf {F} \,dc=0}

Maxwell’s equations

In the physics of electromagnetism, Stokes' theorem provides the justification for the equivalence of the differential form of the Maxwell–Faraday equation and of the Maxwell–Ampère equation with the integral form of these equations. For Faraday's law, Stokes' theorem is applied to the electric field E:

{\displaystyle \oint _{\partial \Sigma }\mathbf {E} \cdot \mathrm {d} {\boldsymbol {l}}=\iint _{\Sigma }\mathbf {\nabla } \times \mathbf {E} \cdot \mathrm {d} \mathbf {S} }

For Ampère's law, Stokes' theorem is applied to the magnetic field B:

{\displaystyle \oint _{\partial \Sigma }\mathbf {B} \cdot \mathrm {d} {\boldsymbol {l}}=\iint _{\Sigma }\mathbf {\nabla } \times \mathbf {B} \cdot \mathrm {d} \mathbf {S} }