In calculus , and more generally in mathematical analysis , integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative . It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.

If {\displaystyle u=u(x)} and {\displaystyle du=u'(x)\,dx}, while {\displaystyle v=v(x)} and {\displaystyle dv=v'(x)\,dx}, then integration by parts states that

{\displaystyle {\begin{aligned}\int _{a}^{b}u(x)v'(x)\,dx&={\Big [}u(x)v(x){\Big ]}_{a}^{b}-\int _{a}^{b}u'(x)v(x)\,dx\\[6pt]&=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.\end{aligned}}}

or more compactly,

{\displaystyle \int u\,dv\ =\ uv-\int v\,du.}

The mathematician Brooke Taylor discovered integration by parts, first publishing the idea in 1715. [1] [2] More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals . The discrete analogue for sequences is called summation by parts .

**Theorem**

**Product of two Functions**

The theorem can be derived as follows. For two continuously differentiable functions u ( x ) and v ( x ), the product rule states:

{\displaystyle {\Big (}u(x)v(x){\Big )}'=v(x)u'(x)+u(x)v'(x).}

Integrating both sides with respect to *x* ,

{\displaystyle \int {\Big (}u(x)v(x){\Big )}'\,dx=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,}

and noting that an indefinite integral is an antiderivative gives

{\displaystyle u(x)v(x)=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,}

where we neglect writing the constant of integration . This yields the formula for integration by parts :

{\displaystyle \int u(x)v'(x)\,dx=u(x)v(x)-\int u'(x)v(x)\,dx,}

or in terms of the differentials

{\displaystyle du=u'(x)\,dx}, {\displaystyle dv=v'(x)\,dx,\quad }

{\displaystyle \int u(x)\,dv=u(x)v(x)-\int v(x)\,du.}

This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a and x = b and applying the fundamental theorem of calculus gives the definite integral version:

{\displaystyle \int _{a}^{b}u(x)v'(x)\,dx=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.}

The original integral ∫ uv′ dx contains the derivative v′ ; to apply the theorem, one must find v , the antiderivative of v′ , then evaluate the resulting integral ∫ vu′ dx .
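The definite-integral version above can be checked numerically. A minimal sketch, assuming an illustrative choice u(x) = x² and v(x) = sin x on [0, 1] (the `simpson` helper and all names here are ours, not from the article):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative choice (our assumption): u(x) = x^2, v(x) = sin(x)
a, b = 0.0, 1.0
u, du = lambda x: x * x, lambda x: 2 * x
v, dv = math.sin, math.cos

# LHS: integral of u*v' ; RHS: u(b)v(b) - u(a)v(a) - integral of u'*v
lhs = simpson(lambda x: u(x) * dv(x), a, b)
rhs = u(b) * v(b) - u(a) * v(a) - simpson(lambda x: du(x) * v(x), a, b)
assert abs(lhs - rhs) < 1e-9
```

Any other continuously differentiable pair (u, v) would serve equally well; the identity is what is being exercised, not this particular choice.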

**Validity for less smooth functions**

It is not necessary for u and v to be continuously differentiable. Integration by parts works if u is absolutely continuous and the function designated v′ is Lebesgue integrable (but not necessarily continuous). [3] (If v′ has a point of discontinuity then its antiderivative v may not have a derivative at that point.)

If the interval of integration is not compact , then it is not necessary for u to be absolutely continuous on the whole interval or for v′ to be Lebesgue integrable on the interval, as a couple of examples (in which u and v are continuous and continuously differentiable) will show. For instance, if

{\displaystyle u(x)=e^{x}/x^{2},\,v'(x)=e^{-x}}

*u* is not absolutely continuous on the interval [1, ∞) , but nevertheless

{\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx}

as long as {\displaystyle \left[u(x)v(x)\right]_{1}^{\infty }} is taken to mean the limit of {\displaystyle u(L)v(L)-u(1)v(1)} as {\displaystyle L\to \infty } and as long as the two terms on the right-hand side are finite. This is only true if we choose {\displaystyle v(x)=-e^{-x}.} Similarly, if

{\displaystyle u(x)=e^{-x},\,v'(x)=x^{-1}\sin(x)}

*v′* is not Lebesgue integrable on the interval [1, ∞) , but nevertheless

{\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx}

with the same interpretation.

One can also easily come up with similar examples in which *u* and *v* are *not* continuously differentiable.

Further, if {\displaystyle f(x)} is a function of bounded variation on the segment {\displaystyle [a,b],} and {\displaystyle \varphi (x)} is differentiable on {\displaystyle [a,b],} then

{\displaystyle \int _{a}^{b}f(x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }{\widetilde {\varphi }}(x)\,d({\widetilde {\chi }}_{[a,b]}(x){\widetilde {f}}(x)),}

where {\displaystyle d({\widetilde {\chi }}_{[a,b]}(x){\widetilde {f}}(x))} denotes the signed measure corresponding to the function of bounded variation {\displaystyle \chi _{[a,b]}(x)f(x)}, and the functions {\displaystyle {\widetilde {f}},{\widetilde {\varphi }}} are extensions of {\displaystyle f,\varphi } to {\displaystyle \mathbb {R} ,} which are respectively of bounded variation and differentiable.

**Product of multiple functions**

Integrating the product rule for three multiplied functions, *u* ( *x* ), *v* ( *x* ), *w* ( *x* ), gives a similar result:

{\displaystyle \int _{a}^{b}uv\,dw\ =\ {\Big [}uvw{\Big ]}_{a}^{b}-\int _{a}^{b}uw\,dv-\int _{a}^{b}vw\,du.}

In general, for *n* factors

{\displaystyle \left(\prod _{i=1}^{n}u_{i}(x)\right)'\ =\ \sum _{j=1}^{n}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x),}

which leads to

{\displaystyle \left[\prod _{i=1}^{n}u_{i}(x)\right]_{a}^{b}\ =\ \sum _{j=1}^{n}\int _{a}^{b}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x)\,dx.}
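The n-factor identity can be spot-checked numerically for n = 3. A sketch, assuming the illustrative factors u(x) = x, v(x) = sin x, w(x) = eˣ on [0, 1] (all names here are ours):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative factors (our assumption): u = x, v = sin x, w = e^x
u, du = lambda x: x, lambda x: 1.0
v, dv = math.sin, math.cos
w, dw = math.exp, math.exp

a, b = 0.0, 1.0
# [uvw]_a^b should equal the sum of the three integrals with one factor differentiated
lhs = u(b) * v(b) * w(b) - u(a) * v(a) * w(a)
rhs = simpson(lambda x: du(x) * v(x) * w(x)
                        + u(x) * dv(x) * w(x)
                        + u(x) * v(x) * dw(x), a, b)
assert abs(lhs - rhs) < 1e-9
```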

**Visualization**

Consider a parametric curve ( x , y ) = ( f ( t ), g ( t )). Assuming that the curve is locally one-to-one and integrable , we can define

x(y)=f(g^{-1}(y))

y(x)=g(f^{-1}(x))

The area of the blue region is

{\displaystyle A_{1}=\int _{y_{1}}^{y_{2}}x(y)\,dy}

Similarly, the area of the red region is

{\displaystyle A_{2}=\int _{x_{1}}^{x_{2}}y(x)\,dx}

The total area *A *_{1} + *A *_{2} is equal to the area of the larger rectangle, *x *_{2 }*y *_{2} , minus the area of the smaller one, *x *_{1 }*y *_{1} :

{\displaystyle \overbrace {\int _{y_{1}}^{y_{2}}x(y)\,dy} ^{A_{1}}+\overbrace {\int _{x_{1}}^{x_{2}}y(x)\,dx} ^{A_{2}}\ =\ {\biggl .}x\cdot y(x){\biggl |}_{x_{1}}^{x_{2}}\ =\ {\biggl .}y\cdot x(y){\biggl |}_{y_{1}}^{y_{2}}.}

or, in terms of *t* ,

{\displaystyle \int _{t_{1}}^{t_{2}}x(t)\,dy(t)+\int _{t_{1}}^{t_{2}}y(t)\,dx(t)\ =\ {\biggl .}x(t)y(t){\biggl |}_{t_{1}}^{t_{2}}}

Or, in terms of indefinite integrals, it can be written as:

{\displaystyle \int x\,dy+\int y\,dx\ =\ xy}

Rearranging terms:

{\displaystyle \int x\,dy\ =\ xy-\int y\,dx}

Thus integration by parts can be thought of as obtaining the area of the blue region from the area of the rectangles and the area of the red region.

This visualization also explains why integration by parts may help find the integral of an inverse function f^{−1 }( x ) when the integral of the function f ( x ) is known. Indeed, the functions x ( y ) and y ( x ) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx . In particular, this explains the use of integration by parts to integrate logarithmic and inverse trigonometric functions . In fact, if f is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of f^{−1} in terms of the integral of f . This is demonstrated in the article, Integral of inverse functions .

**Applications**

**Finding antiderivatives**

Integration by parts is a heuristic rather than a purely mechanical process for solving integrals ; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions u ( x ) v ( x ) such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take:

{\displaystyle \int uv\ dx=u\int v\ dx-\int \left(u'\int v\ dx\right)\ dx.}

On the right-hand side, *u* is differentiated and *v* is integrated; consequently it is useful to choose *u* as a function that simplifies when differentiated, or to choose *v* as a function that simplifies when integrated. As a simple example, consider:

{\displaystyle \int {\frac {\ln(x)}{x^{2}}}\ dx\ .}

Since the derivative of ln( *x* ) is 1/*x*, one makes ln( *x* ) part *u* ; since the antiderivative of 1/*x *^{2} is −1/*x*, one makes 1/*x *^{2} *dx* part *dv* . The formula now yields:

{\displaystyle \int {\frac {\ln(x)}{x^{2}}}\ dx=-{\frac {\ln(x)}{x}}-\int {\biggl (}{\frac {1}{x}}{\biggr )}{\biggl (}-{\frac {1}{x}}{\biggr )}\ dx\ .}

The antiderivative of −1/*x *^{2} can be found with the power rule and is 1/*x *.
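The resulting antiderivative, −ln(x)/x − 1/x, can be verified numerically against the original integrand. A minimal sketch (the `simpson` helper and the interval [1, 2] are our illustrative choices):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Claimed antiderivative of ln(x)/x^2 from the worked example
F = lambda x: -math.log(x) / x - 1.0 / x
integrand = lambda x: math.log(x) / x**2

# The numerical integral over [1, 2] should match F(2) - F(1)
num = simpson(integrand, 1.0, 2.0)
assert abs(num - (F(2.0) - F(1.0))) < 1e-9
```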

Alternatively, one can choose *u* and *v* such that the product *u* (∫ *v * *dx* ) is simplified due to cancellation. For example, suppose one wants to integrate:

{\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\ dx.}

If we choose *u* ( *x* ) = ln(|sin( *x* )|) and *v′* ( *x* ) = sec ^{2} *x* , then *u* differentiates to 1/tan *x* using the chain rule and *v* integrates to tan *x* ; so the formula gives:

{\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\ dx=\tan(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}-\int \tan(x)\cdot {\frac {1}{\tan(x)}}\,dx\ .}

The integrand simplifies to 1, so the antiderivative is *x* . Finding a simplifying combination frequently involves experimentation.

In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis , it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.

**Polynomials and Trigonometric Functions**

To calculate

{\displaystyle I=\int x\cos(x)\ dx\ ,}

let:

{\displaystyle u=x\ \Rightarrow \ du=dx}

{\displaystyle dv=\cos(x)\ dx\ \Rightarrow \ v=\int \cos(x)\ dx=\sin(x)}

Then:

{\displaystyle {\begin{aligned}\int x\cos(x)\ dx&=\int u\ dv\\&=u\cdot v-\int v\,du\\&=x\sin(x)-\int \sin(x)\ dx\\&=x\sin(x)+\cos(x)+C,\end{aligned}}}

where C is a constant of integration .

For higher powers of *x* in the form

{\displaystyle \int x^{n}e^{x}\ dx,\ \int x^{n}\sin(x)\ dx,\ \int x^{n}\cos(x)\ dx\ ,}

one can use integration by parts repeatedly to evaluate integrals such as these; each application of the theorem lowers the power of *x* by one.
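This power-lowering recursion can be written directly as code for the exponential case. A sketch (the function name `int_xn_exp` and the check interval [0, 1] are ours), using the one-step reduction ∫xⁿeˣ dx = xⁿeˣ − n∫xⁿ⁻¹eˣ dx:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def int_xn_exp(n, a, b):
    """Definite integral of x^n e^x on [a, b] via repeated integration by parts."""
    if n == 0:
        return math.exp(b) - math.exp(a)
    # One application of the theorem: boundary term minus n times the lower-power integral
    boundary = b**n * math.exp(b) - a**n * math.exp(a)
    return boundary - n * int_xn_exp(n - 1, a, b)

# Cross-check against direct numerical integration for several powers
for n in range(5):
    num = simpson(lambda x: x**n * math.exp(x), 0.0, 1.0)
    assert abs(num - int_xn_exp(n, 0.0, 1.0)) < 1e-8
```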

**Exponential and Trigonometric Functions**

An example commonly used to check the working of integration by parts is

{\displaystyle I=\int e^{x}\cos(x)\ dx.}

Here, integration by parts is performed twice. First let

{\displaystyle u=\cos(x)\ \Rightarrow \ du=-\sin(x)\ dx}

{\displaystyle dv=e^{x}\ dx\ \Rightarrow \ v=\int e^{x}\ dx=e^{x}}

Then

{\displaystyle \int e^{x}\cos(x)\ dx=e^{x}\cos(x)+\int e^{x}\sin(x)\ dx.}

Now, to evaluate the remainder of the integral, we again use integration by parts:

{\displaystyle u=\sin(x)\ \Rightarrow \ du=\cos(x)\ dx}

{\displaystyle dv=e^{x}\ dx\ \Rightarrow \ v=\int e^{x}\ dx=e^{x}.}

Then

{\displaystyle \int e^{x}\sin(x)\ dx=e^{x}\sin(x)-\int e^{x}\cos(x)\ dx.}

putting them together,

{\displaystyle \int e^{x}\cos(x)\ dx=e^{x}\cos(x)+e^{x}\sin(x)-\int e^{x}\cos(x)\ dx.}

The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get

{\displaystyle 2\int e^{x}\cos(x)\ dx=e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C,}

which rearranges to

{\displaystyle \int e^{x}\cos(x)\ dx={\frac {1}{2}}e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C'}

where again C (and C′ = C /2) is a constant of integration .

A similar method is used to find the integral of secant cubed .
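The result of the double application, ½eˣ(sin x + cos x), can be checked numerically. A minimal sketch (helper name and interval [0, 1] are our choices):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Claimed antiderivative of e^x cos x from the twice-by-parts derivation
F = lambda x: 0.5 * math.exp(x) * (math.sin(x) + math.cos(x))

num = simpson(lambda x: math.exp(x) * math.cos(x), 0.0, 1.0)
assert abs(num - (F(1.0) - F(0.0))) < 1e-9
```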

**Functions multiplied by unity**

Two other well-known examples arise when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times *x* is also known.

The first example is ∫ ln( *x* ) d *x* . We write this as:

{\displaystyle I=\int \ln(x)\cdot 1\ dx\ .}

Let:

{\displaystyle u=\ln(x)\ \Rightarrow \ du={\frac {dx}{x}}}

{\displaystyle dv=dx\ \Rightarrow \ v=x}

Then

{\displaystyle {\begin{aligned}\int \ln(x)\ dx&=x\ln(x)-\int {\frac {x}{x}}\ dx\\&=x\ln(x)-\int 1\ dx\\&=x\ln(x)-x+C\end{aligned}}}

where *C* is the integration constant.

The second example is the inverse tangent function arctan ( *x* ):

{\displaystyle I=\int \arctan(x)\ dx.}

Rewrite it as

{\displaystyle \int \arctan(x)\cdot 1\ dx.}

Now let:

{\displaystyle u=\arctan(x)\ \Rightarrow \ du={\frac {dx}{1+x^{2}}}}

{\displaystyle dv=dx\ \Rightarrow \ v=x}

Then

{\displaystyle {\begin{aligned}\int \arctan(x)\ dx&=x\arctan(x)-\int {\frac {x}{1+x^{2}}}\ dx\\[8pt]&=x\arctan(x)-{\frac {\ln(1+x^{2})}{2}}+C\end{aligned}}}

using a combination of the inverse chain rule method and the natural logarithm integral condition.
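As with the previous example, the antiderivative x·arctan(x) − ln(1 + x²)/2 can be checked against direct numerical integration. A sketch (names and the interval [0, 1] are ours):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Claimed antiderivative of arctan(x) from the worked example
F = lambda x: x * math.atan(x) - 0.5 * math.log(1.0 + x * x)

num = simpson(math.atan, 0.0, 1.0)
assert abs(num - (F(1.0) - F(0.0))) < 1e-9
```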

**LIATE rule**

A rule of thumb has been proposed, consisting of choosing as *u* the function that comes first in the following list: ^{[4]}

**L** – logarithmic functions : etc.

{\displaystyle \ln(x),\ \log _{b}(x),}

**I** – Inverse trigonometric functions (including hyperbolic analogs): etc.

{\displaystyle \arctan(x),\ \operatorname {arcsec}(x),\ \operatorname {arsinh} (x),}

**A** – Algebraic functions : etc.

{\displaystyle x^{2},\ 3x^{50},}

**T** – trigonometric functions (including hyperbolic analogs): etc.

{\displaystyle \sin(x),\ \tan(x),\ \operatorname {sech} (x),}

**E** – Exponential Functions : etc.

{\displaystyle e^{x},\ 19^{x},}

The function which is to be *dv* is whichever comes last in the list. The reason is that functions lower in the list generally have easier antiderivatives than the functions above them. The rule is sometimes written as “DETAIL”, where *D* stands for *dv* and the top of the list is the function chosen to be *dv* .

To demonstrate the LIATE rule, consider the integral

{\displaystyle \int x\cdot \cos(x)\,dx.}

Following the LIATE rule, *u* = *x* , and *dv* = cos( *x* ) *dx* , so *du* = *dx* , and *v* = sin( *x* ), which makes the integral

{\displaystyle x\cdot \sin(x)-\int 1\sin(x)\,dx,}

which is equal to

{\displaystyle x\cdot \sin(x)+\cos(x)+C.}

In general, one tries to choose *u* and *dv* such that *du* is simpler than *u* and *dv* is easy to integrate. If instead cos( *x* ) were chosen as *u* , and *x dx* as *dv* , we would have the integral

{\displaystyle {\frac {x^{2}}{2}}\cos(x)+\int {\frac {x^{2}}{2}}\sin(x)\,dx,}

which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.

Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in “ILATE” order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate

{\displaystyle \int x^{3}e^{x^{2}}\,dx,}

one would set

{\displaystyle u=x^{2},\quad dv=x\cdot e^{x^{2}}\,dx,}

so that

{\displaystyle du=2x\,dx,\quad v={\frac {e^{x^{2}}}{2}}.}

Then

{\displaystyle \int x^{3}e^{x^{2}}\,dx=\int \left(x^{2}\right)\left(xe^{x^{2}}\right)\,dx=\int u\,dv=uv-\int v\,du={\frac {x^{2}e^{x^{2}}}{2}}-\int xe^{x^{2}}\,dx.}

Finally, this results in

{\displaystyle \int x^{3}e^{x^{2}}\,dx={\frac {e^{x^{2}}\left(x^{2}-1\right)}{2}}+C.}
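The closed form e^{x²}(x² − 1)/2 can be verified numerically. A minimal sketch (helper name and interval [0, 1] are ours); for [0, 1] the exact value happens to be ½, since the substitution t = x² reduces it to ½∫₀¹ t eᵗ dt:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Claimed antiderivative of x^3 e^{x^2} after the non-trivial split u = x^2
F = lambda x: math.exp(x * x) * (x * x - 1.0) / 2.0

num = simpson(lambda x: x**3 * math.exp(x * x), 0.0, 1.0)
assert abs(num - (F(1.0) - F(0.0))) < 1e-9
assert abs(num - 0.5) < 1e-9
```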

Integration by parts is often used as a tool for proving theorems in mathematical analysis.

**Wallis product**

The Wallis infinite product for {\displaystyle \pi },

{\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)\\[6pt]&={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots \end{aligned}}}

may be derived using integration by parts.

**Gamma function identity**

The gamma function is an example of a special function , defined as an improper integral for *z* > 0. Integration by parts illustrates it to be an extension of the factorial function:

{\displaystyle {\begin{aligned}\Gamma (z)&=\int _{0}^{\infty }e^{-x}x^{z-1}dx\\[6pt]&=-\int _{0}^{\infty }x^{z-1}\,d\left(e^{-x}\right)\\[6pt]&=-{\Biggl [}e^{-x}x^{z-1}{\Biggl ]}_{0}^{\infty }+\int _{0}^{\infty }e^{-x}d\left(x^{z-1}\right)\\[6pt]&=0+\int _{0}^{\infty }\left(z-1\right)x^{z-2}e^{-x}dx\\[6pt]&=(z-1)\Gamma (z-1).\end{aligned}}}

Since

{\displaystyle \Gamma (1)=\int _{0}^{\infty }e^{-x}\,dx=1,}

when {\displaystyle z=n\in \mathbb {N} } is a natural number, repeated application of this formula gives the factorial: {\displaystyle \Gamma (n+1)=n!}
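The factorial identity can be checked by numerically evaluating the defining improper integral. A sketch (our choices: truncating the integral at 60, where the integrand is negligible, and testing small natural numbers):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def gamma_num(z, upper=60.0, n=20000):
    """Gamma(z) for z >= 1 via the defining integral, truncated at `upper`."""
    return simpson(lambda x: x ** (z - 1) * math.exp(-x), 0.0, upper, n)

# Gamma(k + 1) should equal k! for natural numbers k
for k in range(1, 6):
    assert abs(gamma_num(k + 1) - math.factorial(k)) < 1e-6
```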

**Use in harmonic analysis**

Integration by parts is often used in harmonic analysis , particularly Fourier analysis , to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly. The most common example of this is its use in showing that the decay of a function’s Fourier transform depends on the smoothness of that function, as described below.

**Fourier transform of derivative**

If *f* is a *k* -times continuously differentiable function and all derivatives up to the *k* th one decay to zero at infinity, then its Fourier transform satisfies

({\mathcal {F}}f^{(k)})(\xi )=(2\pi i\xi )^{k}{\mathcal {F}}f(\xi ),

where *f *^{( k )} is the *k* th derivative of *f* . (The exact constant on the right depends on the convention of the Fourier transform used.) This is proved by noting that

{\frac {d}{dy}}e^{-2\pi iy\xi }=-2\pi i\xi e^{-2\pi iy\xi },

so using integration by parts on the Fourier transform of the derivative we get

{\displaystyle {\begin{aligned}({\mathcal {F}}f')(\xi )&=\int _{-\infty }^{\infty }e^{-2\pi iy\xi }f'(y)\,dy\\&=\left[e^{-2\pi iy\xi }f(y)\right]_{-\infty }^{\infty }-\int _{-\infty }^{\infty }(-2\pi i\xi e^{-2\pi iy\xi })f(y)\,dy\\[5pt]&=2\pi i\xi \int _{-\infty }^{\infty }e^{-2\pi iy\xi }f(y)\,dy\\[5pt]&=2\pi i\xi {\mathcal {F}}f(\xi ).\end{aligned}}}

Applying this inductively gives the result for general *k* . A similar method can be used to find the Laplace transform of the derivative of a function.

**Fourier transform decay**

The above result tells us about the decay of the Fourier transform, since it follows that if *f* and *f *^{( k )} are integrable then

{\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq {\frac {I(f)}{1+\vert 2\pi \xi \vert ^{k}}},{\text{ where }}I(f)=\int _{-\infty }^{\infty }{\Bigl (}\vert f(y)\vert +\vert f^{(k)}(y)\vert {\Bigr )}\,dy.}

In other words, if *f* satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/| *ξ* | ^{*k*} . In particular, if *k* ≥ 2 then the Fourier transform is integrable.

The proof makes use of the fact, which is immediate from the definition of the Fourier transform, that

{\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f(y)\vert \,dy.}

Using the same idea on the equality stated at the start of this subsection gives

\vert (2\pi i\xi )^{k}{\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f^{(k)}(y)\vert \,dy.

Summing these two inequalities and then dividing by 1 + |2 *πξ* | ^{*k*} gives the stated inequality.
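The stated decay bound can be spot-checked numerically. A sketch, assuming (as an illustration, not from the text) the Gaussian f(y) = e^{−y²} with k = 2, the convention Ff(ξ) = ∫ e^{−2πiyξ} f(y) dy, and truncation of all integrals to [−10, 10] where the tails are negligible:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative smooth function (our assumption) and its second derivative
f = lambda y: math.exp(-y * y)
f2 = lambda y: (4 * y * y - 2) * math.exp(-y * y)

# I(f) = integral of |f| + |f''|
I_f = simpson(lambda y: abs(f(y)) + abs(f2(y)), -10.0, 10.0)

def fourier(xi):
    """Ff(xi); f is even, so only the cosine part of e^{-2*pi*i*y*xi} survives."""
    return simpson(lambda y: math.cos(2 * math.pi * y * xi) * f(y), -10.0, 10.0)

# |Ff(xi)| <= I(f) / (1 + |2*pi*xi|^2) at a few sample frequencies
for xi in (0.25, 0.5, 1.0, 2.0):
    assert abs(fourier(xi)) <= I_f / (1 + abs(2 * math.pi * xi) ** 2)
```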

**Use in Operator Theory**

One use of integration by parts in operator theory is that it shows that −∆ (where ∆ is the Laplace operator ) is a positive operator on L^{2} (see Lp space ). If *f* is smooth and compactly supported then, using integration by parts, we have

{\displaystyle {\begin{aligned}\langle -\Delta f,f\rangle _{L^{2}}&=-\int _{-\infty }^{\infty }f''(x){\overline {f(x)}}\,dx\\[5pt]&=-\left[f'(x){\overline {f(x)}}\right]_{-\infty }^{\infty }+\int _{-\infty }^{\infty }f'(x){\overline {f'(x)}}\,dx\\[5pt]&=\int _{-\infty }^{\infty }\vert f'(x)\vert ^{2}\,dx\geq 0.\end{aligned}}}
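This computation can be mimicked numerically. A minimal sketch, assuming the illustrative real-valued test function f(x) = (1 − x²)² on [−1, 1] (both f and f′ vanish at the endpoints, so the boundary term drops out; all names here are ours):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Illustrative test function (our assumption): f(±1) = f'(±1) = 0
f   = lambda x: (1 - x * x) ** 2
fp  = lambda x: -4 * x * (1 - x * x)   # f'
fpp = lambda x: 12 * x * x - 4         # f''

# <-Delta f, f> = -∫ f'' f dx should equal ∫ |f'|^2 dx >= 0
lhs = -simpson(lambda x: fpp(x) * f(x), -1.0, 1.0)
rhs = simpson(lambda x: fp(x) ** 2, -1.0, 1.0)
assert lhs >= 0 and abs(lhs - rhs) < 1e-9
```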

**Other applications**

- Determination of boundary conditions in Sturm–Liouville theory
- Deriving the Euler–Lagrange equation in the calculus of variations

**Repeated integration by parts**

Considering a second derivative of *v* in the integrand on the LHS of the partial integration formula suggests a repeated application to the integral on the RHS:

{\displaystyle \int uv''\,dx=uv'-\int u'v'\,dx=uv'-\left(u'v-\int u''v\,dx\right).}

Extending this concept of repeated partial integration to derivatives of degree *n* leads to

{\displaystyle {\begin{aligned}\int u^{(0)}v^{(n)}\,dx&=u^{(0)}v^{(n-1)}-u^{(1)}v^{(n-2)}+u^{(2)}v^{(n-3)}-\cdots +(-1)^{n-1}u^{(n-1)}v^{(0)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\\[5pt]&=\sum _{k=0}^{n-1}(-1)^{k}u^{(k)}v^{(n-1-k)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\end{aligned}}}

This concept may be useful when the successive integrals of {\displaystyle v^{(n)}} are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms ), and when the *n* th derivative of {\displaystyle u} vanishes (e.g., as a polynomial function with degree {\displaystyle n-1} ). The latter condition stops the repetition of partial integration, because the RHS-integral vanishes.

In the course of the above repetition of partial integrations the integrals

{\displaystyle \int u^{(0)}v^{(n)}\,dx\quad } and {\displaystyle \quad \int u^{(\ell )}v^{(n-\ell )}\,dx\quad } and {\displaystyle \quad \int u^{(m)}v^{(n-m)}\,dx\quad {\text{ for }}1\leq m,\ell \leq n}

become related. This may be interpreted as arbitrarily “shifting” derivatives between {\displaystyle u} and {\displaystyle v} within the integrand, and proves useful, too (see Rodrigues’ formula ).

**Tabular integration by parts**

The essential process of the above formula can be summarized in a table; the resulting method is called “tabular integration” ^{[5]} and was featured in the film *Stand and Deliver* . ^{[6]}

For example, consider the integral

{\displaystyle \int x^{3}\cos x\,dx\quad } and take {\displaystyle \quad u^{(0)}=x^{3},\quad v^{(n)}=\cos x.}

Begin to list in column **A** the function {\displaystyle u^{(0)}=x^{3}} and its subsequent derivatives {\displaystyle u^{(i)}} until zero is reached. Then list in column **B** the function {\displaystyle v^{(n)}=\cos x} and its subsequent integrals {\displaystyle v^{(n-i)}} until the size of column **B** is the same as that of column **A** . The result is as follows:

# *i* | Sign | A: derivatives *u *^{( i )} | B: integrals *v *^{( n − i )} |
---|---|---|---|
0 | + | x^{3} | cos x |
1 | − | 3x^{2} | sin x |
2 | + | 6x | −cos x |
3 | − | 6 | −sin x |
4 | + | 0 | cos x |

The product of the entries in row *i* of columns **A** and **B** together with the respective sign give the relevant integrals in step *i* in the course of repeated integration by parts. Step *i* = 0 yields the original integral. For the complete result in step *i* > 0 the *i* th integral must be added to all the previous products (0 ≤ *j* < *i* ) of the *j* th entry of column **A** and the ( *j* + 1)st entry of column **B** (i.e., multiply the 1st entry of column **A** with the 2nd entry of column **B** , the 2nd entry of column **A** with the 3rd entry of column **B** , etc. …) with the given *j* th sign. This process comes to a natural halt when the product that yields the integral is zero ( *i* = 4 in the example). The complete result is the following (with the alternating signs in each term):

{\displaystyle \underbrace {(+1)(x^{3})(\sin x)} _{j=0}+\underbrace {(-1)(3x^{2})(-\cos x)} _{j=1}+\underbrace {(+1)(6x)(-\sin x)} _{j=2}+\underbrace {(-1)(6)(\cos x)} _{j=3}+\underbrace {\int (+1)(0)(\cos x)\,dx} _{i=4:\;\to \;C}.}

This yields

{\displaystyle \underbrace {\int x^{3}\cos x\,dx} _{\text{step 0}}=x^{3}\sin x+3x^{2}\cos x-6x\sin x-6\cos x+C.}
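The table can be turned into a small program. A sketch (all helper names are ours): column A holds the successive derivatives of x³, column B the successive integrals of cos x, and the antiderivative is the alternating-sign sum of each A entry times the next B entry, checked here against numerical integration:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Column A: derivatives of x^3 (stopping before the zero row)
derivs = [lambda x: x**3, lambda x: 3 * x**2, lambda x: 6 * x, lambda x: 6.0]
# Column B: successive integrals of cos x
integrals = [math.cos, math.sin, lambda x: -math.cos(x),
             lambda x: -math.sin(x), math.cos]

def F(x):
    """Antiderivative from the table: sum of (-1)^j * A[j] * B[j+1]."""
    return sum((-1) ** j * derivs[j](x) * integrals[j + 1](x)
               for j in range(len(derivs)))

num = simpson(lambda x: x**3 * math.cos(x), 0.0, 1.0)
assert abs(num - (F(1.0) - F(0.0))) < 1e-9
```

Swapping in other derivative/integral columns reproduces the tabular method for any polynomial-times-oscillatory integrand.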

Repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating, the functions {\displaystyle u^{(i)}} and {\displaystyle v^{(n-i)}} produce a multiple of the original integrand. In this case the repetition may also be terminated with this index *i* . This can happen, expectably, with exponentials and trigonometric functions. As an example consider

{\displaystyle \int e^{x}\cos x\,dx.}

# *i* | Sign | A: derivatives *u *^{( i )} | B: integrals *v *^{( n − i )} |
---|---|---|---|
0 | + | e^{x} | cos x |
1 | − | e^{x} | sin x |
2 | + | e^{x} | −cos x |

In this case the product of the terms in columns **A** and **B** with the appropriate sign for index *i* = 2 yields the negative of the original integrand (compare rows *i* = 0 and *i* = 2 ).

{\displaystyle \underbrace {\int e^{x}\cos x\,dx} _{\text{step 0}}=\underbrace {(+1)(e^{x})(\sin x)} _{j=0}+\underbrace {(-1)(e^{x})(-\cos x)} _{j=1}+\underbrace {\int (+1)(e^{x})(-\cos x)\,dx} _{i=2}.}

Observing that the integral on the RHS can have its own constant of integration *C′* , and bringing the abstract integral to the other side, gives:

{\displaystyle 2\int e^{x}\cos x\,dx=e^{x}\sin x+e^{x}\cos x+C',}

And finally:

{\displaystyle \int e^{x}\cos x\,dx={\frac {1}{2}}\left(e^{x}(\sin x+\cos x)\right)+C,}

where *C* = *C′* /2.

**Higher dimensions**

Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function *u* and a vector-valued function (vector field) **V** . ^{[7]}

The product rule for divergence states:

{\displaystyle \nabla \cdot (u\mathbf {V} )\ =\ u\,\nabla \cdot \mathbf {V} \ +\ \nabla u\cdot \mathbf {V} .}

Suppose {\displaystyle \Omega } is an open bounded subset of {\displaystyle \mathbb {R} ^{n}} with a piecewise smooth boundary {\displaystyle \Gamma =\partial \Omega } . Integrating over {\displaystyle \Omega } with respect to the standard volume form {\displaystyle d\Omega } , and applying the divergence theorem , gives:

{\displaystyle \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma \ =\ \int _{\Omega }\nabla \cdot (u\mathbf {V} )\,d\Omega \ =\ \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ +\ \int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,}

where {\displaystyle {\hat {\mathbf {n} }}} is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form {\displaystyle d\Gamma } . Rearranging gives:

{\displaystyle \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,}

or in other words

{\displaystyle \int _{\Omega }u\,\operatorname {div} (\mathbf {V} )\,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\operatorname {grad} (u)\cdot \mathbf {V} \,d\Omega .}

The regularity requirements of the theorem can be relaxed. For instance, the boundary {\displaystyle \Gamma =\partial \Omega } need only be Lipschitz continuous , and the functions *u* , *v* need only lie in the Sobolev space *H *^{1}(Ω).

**Green’s first identity**

Consider the continuously differentiable vector fields {\displaystyle \mathbf {U} =u_{1}\mathbf {e} _{1}+\cdots +u_{n}\mathbf {e} _{n}} and {\displaystyle v\mathbf {e} _{1},\ldots ,v\mathbf {e} _{n}} , where {\displaystyle \mathbf {e} _{i}} is the *i* th standard basis vector for {\displaystyle i=1,\ldots ,n} . Now apply the above integration by parts to each {\displaystyle u_{i}} times the vector field {\displaystyle v\mathbf {e} _{i}} :

{\displaystyle \int _{\Omega }u_{i}{\frac {\partial v}{\partial x_{i}}}\,d\Omega \ =\ \int _{\Gamma }u_{i}v\,\mathbf {e} _{i}\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }{\frac {\partial u_{i}}{\partial x_{i}}}v\,d\Omega .}

Summing over *i* gives a new integration by parts formula:

{\displaystyle \int _{\Omega }\mathbf {U} \cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\mathbf {U} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla \cdot \mathbf {U} \,d\Omega .}

The case {\displaystyle \mathbf {U} =\nabla u} , where {\displaystyle u\in C^{2}({\bar {\Omega }})} , is known as Green’s first identity:

{\displaystyle \int _{\Omega }\nabla u\cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\,\nabla u\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla ^{2}u\,d\Omega .}