$$ \def\ab{\boldsymbol{a}} \def\bb{\boldsymbol{b}} \def\cb{\boldsymbol{c}} \def\db{\boldsymbol{d}} \def\eb{\boldsymbol{e}} \def\fb{\boldsymbol{f}} \def\gb{\boldsymbol{g}} \def\hb{\boldsymbol{h}} \def\kb{\boldsymbol{k}} \def\nb{\boldsymbol{n}} \def\tb{\boldsymbol{t}} \def\ub{\boldsymbol{u}} \def\vb{\boldsymbol{v}} \def\xb{\boldsymbol{x}} \def\yb{\boldsymbol{y}} \def\Ab{\boldsymbol{A}} \def\Bb{\boldsymbol{B}} \def\Cb{\boldsymbol{C}} \def\Eb{\boldsymbol{E}} \def\Fb{\boldsymbol{F}} \def\Jb{\boldsymbol{J}} \def\Lb{\boldsymbol{L}} \def\Rb{\boldsymbol{R}} \def\Ub{\boldsymbol{U}} \def\xib{\boldsymbol{\xi}} \def\evx{\boldsymbol{e}_x} \def\evy{\boldsymbol{e}_y} \def\evz{\boldsymbol{e}_z} \def\evr{\boldsymbol{e}_r} \def\evt{\boldsymbol{e}_\theta} \def\evp{\boldsymbol{e}_r} \def\evf{\boldsymbol{e}_\phi} \def\evb{\boldsymbol{e}_\parallel} \def\omb{\boldsymbol{\omega}} \def\dA{\;d\Ab} \def\dS{\;d\boldsymbol{S}} \def\dV{\;dV} \def\dl{\mathrm{d}\boldsymbol{l}} \def\rmd{\mathrm{d}} \def\bfzero{\boldsymbol{0}} \def\Rey{\mathrm{Re}} \def\Real{\mathbb{R}} \def\grad{\boldsymbol\nabla} \newcommand{\dds}[2]{\frac{d{#1}}{d{#2}}} \newcommand{\ddy}[2]{\frac{\partial{#1}}{\partial{#2}}} \newcommand{\pder}[3]{\frac{\partial^{#3}{#1}}{\partial{#2}^{#3}}} \newcommand{\deriv}[3]{\frac{d^{#3}{#1}}{d{#2}^{#3}}} \newcommand{\ddt}[1]{\frac{d{#1}}{dt}} \newcommand{\DDt}[1]{\frac{\mathrm{D}{#1}}{\mathrm{D}t}} \newcommand{\been}{\begin{enumerate}} \newcommand{\enen}{\end{enumerate}}\newcommand{\beit}{\begin{itemize}} \newcommand{\enit}{\end{itemize}} \newcommand{\nibf}[1]{\noindent{\bf#1}} \def\bra{\langle} \def\ket{\rangle} \renewcommand{\S}{{\cal S}} \newcommand{\wo}{w_0} \newcommand{\wid}{\hat{w}} \newcommand{\taus}{\tau_*} \newcommand{\woc}{\wo^{(c)}} \newcommand{\dl}{\mbox{$\Delta L$}} \newcommand{\upd}{\mathrm{d}} \newcommand{\dL}{\mbox{$\Delta L$}} \newcommand{\rs}{\rho_s} $$
4.1 Distributions
We saw in the previous chapter that the Green’s function is a valuable tool for solving BVPs. However, constructing the GF required the delta “function”, which is not really a function at all (at best a limit of functions), and we also saw that the GF has a discontinuity in its \((n-1)\)th derivative. We now take a short detour to consider these issues in more detail by introducing the theory of distributions.
Perhaps the most important feature of the \(\delta\)-“function” is that, when integrated against a continuous function, it sifts out the value at \(x=0\): \[ \int_{-\infty}^{\infty}\delta(x)f(x)\mathrm{d}x=f(0). \]
It is the operation of \(\delta\) on another function that defines the property. This is the key idea in the theory of distributions, in which a generalized function is only thought of in relation to how it affects other functions when integrated against them.
Put simply, we characterise a function by how it integrates against all other functions. This is enough not only to recover the usual notion of a function, but also to extend it.
For example, we have defined the delta distribution \(\delta\) such that when it operates on a function \(\phi\), it “sifts out” the value \(\phi(0)\in\mathbb{R}\).
The definition of the delta function can be generalised as \[ \langle \delta,\phi\rangle\equiv \phi(0), \] where \(\delta\) is the \(\delta\)-distribution and \(\phi\) is the test function. \(\langle\delta,\phi\rangle\) reads as “\(\delta\) applied to \(\phi\)”.
Since this definition requires only some notion of an inner product, it is far more general than the \(\delta\) encountered in the previous chapter.
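Although not part of the development here, the sifting property is easy to check numerically by replacing \(\delta\) with a narrow Gaussian (one of the “limits of functions” mentioned above). A minimal Python sketch:

```python
import numpy as np

def delta_approx(x, eps):
    """Gaussian of width eps; tends to the delta 'function' as eps -> 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
eps = 1e-3

# ∫ δ_eps(x) cos(x) dx should be close to cos(0) = 1.
sifted = np.sum(delta_approx(x, eps) * np.cos(x)) * dx
print(sifted)  # close to 1
```

Shrinking `eps` (with a correspondingly finer grid) drives the result closer to \(f(0)\).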
We will generalise this idea momentarily to allow for a vastly expanded notion of what a distribution can do. First, we need some tools and terminology.
4.2 Test functions
Test functions are the class of functions we use to define distributions; they are chosen for their convenient properties.
Definition 4.2 A test function \(\phi\) is a function \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) with \(\phi\in C_0^{\infty} (\mathbb{R})\).
This rather dry statement implies the following:
- Test functions \(\phi\in C^{\infty}(\mathbb{R})\) are differentiable any number of times
- Test functions \(\phi\) have “compact support”, i.e. supp \(\phi\subseteq[-X,X]\) for some \(X>0\), i.e. \(\phi(x)=0\quad\forall x\notin [-X,X]\).
So a test function is infinitely smooth, has no kinks or corners, and vanishes outside a finite region.
The first property means we can define derivatives of distributions to any order; the second allows us to exploit integration by parts to define those distributional derivatives.
4.3 An example: the bump function
Let \(a\in\mathbb{R},\;C>0,\;\epsilon >0\) \[ \phi_{C;\epsilon}(x)=\left\{\begin{array}{l} \exp\left (\frac{-C}{\epsilon^2-(x-a)^2}\right )\;\mbox{ for }a-\epsilon<x<a+\epsilon\\ 0\qquad \mbox{ otherwise }\end{array}\right. \tag{4.1}\]
See figure above.
One can show (for all integers \(n\geq 0\)): \[ \lim_{x\uparrow a+\epsilon}\frac{d^n}{dx^n}\phi_{C;\epsilon}(x)=0, \quad \lim_{x\downarrow a-\epsilon}\frac{d^n}{dx^n}\phi_{C;\epsilon}(x)=0 \] That is to say, the function is infinitely differentiable even at the “joins” in the definition.
If we multiply this by any other infinitely differentiable function \(g(x)\) (not necessarily of compact support), the result is again a test function. That gives a lot of variety.
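As a quick numerical illustration (a sketch, not part of the notes), the bump function of Equation 4.1 with \(a=0\), \(C=\epsilon=1\) can be evaluated directly; it is positive inside \((-1,1)\), exactly zero outside, and extremely flat near the joins:

```python
import numpy as np

def bump(x, a=0.0, C=1.0, eps=1.0):
    """Bump function of Eq. (4.1): smooth, supported on (a-eps, a+eps)."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x - a) < eps
    out[inside] = np.exp(-C / (eps**2 - (x[inside] - a)**2))
    return out

x = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
print(bump(x))  # zero outside (-1, 1); maximum value e^{-1} at x = 0
# Near the joins the function is vanishingly small: bump(0.999) ~ e^{-500}.
print(bump(np.array([0.999])))
```

The rapid decay near \(x=\pm 1\) is the numerical signature of all derivatives vanishing at the joins.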
4.4 Weak derivatives
Having defined test functions, we can generalise the notion of a derivative. Start with the classical definition: let \(u(x)\) be a continuously differentiable function with derivative \(f(x)\), so \(u'(x)=f(x)\). Now, multiply each side of the equation by a test function \(\phi\) and integrate over \(\mathbb{R}\):
\[ \int_\mathbb{R}u'\phi\;dx=\int_\mathbb{R}f\phi\;dx. \tag{4.2}\] Integrating the LHS by parts and using the compact support of \(\phi\), we obtain \[ -\int_\mathbb{R}u\phi'\;dx=\int_\mathbb{R}f\phi\;dx. \tag{4.3}\] The idea of the weak derivative is to take Equation 4.3 as the definition of the derivative. That is, we say \(f\) is the weak derivative of \(u\) if Equation 4.3 holds for all test functions \(\phi\in C_0^{\infty}(\mathbb{R})\).
We can also restrict to smaller intervals: for instance, \(\phi\in C_0^{\infty}(a,b)\) means the test functions have compact support contained in \((a,b)\).
The value of this definition is that it does not require \(u\) to be differentiable, just integrable.
Of course, if \(u\) is continuously differentiable, the weak derivative agrees with the ordinary one; but a function that is not continuously differentiable can still have a weak derivative, because the integration essentially smooths out discontinuities.
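As a sanity check (an illustrative sketch, not from the notes), Equation 4.3 can be verified numerically for \(u(x)=|x|\), whose weak derivative is \(\mathrm{sign}(x)\); the test function is a bump centred away from the origin so that both integrals are nonzero:

```python
import numpy as np

# Weak-derivative check: -∫ u φ' dx = ∫ f φ dx for u(x) = |x|, f(x) = sign(x).
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

# Bump test function centred at 0.3 with half-width 0.6 (support in (-1, 1)).
a, eps = 0.3, 0.6
phi = np.where(np.abs(x - a) < eps,
               np.exp(-eps**2 / np.maximum(eps**2 - (x - a)**2, 1e-300)), 0.0)
dphi = np.gradient(phi, dx)          # numerical φ'

lhs = -np.sum(np.abs(x) * dphi) * dx
rhs = np.sum(np.sign(x) * phi) * dx
print(lhs, rhs)  # the two sides agree
```

The `np.maximum(..., 1e-300)` guard only avoids division by zero outside the support, where the exponential underflows to zero anyway.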
4.5 Distributions defined
This leads us to the notion of a distribution, or a generalised function. A distribution is not defined at points, but rather it is a global object defined in terms of its action on test functions. To be more precise:
Definition 4.1 A distribution \(u\) is a functional mapping test functions \(\phi\in C_0^{\infty}(\mathbb{R})\) to real numbers, \[ u:\phi\in C_0^{\infty}(\mathbb{R})\mapsto\langle u,\phi\rangle\in\mathbb{R}\qquad (\mbox{$\langle u,\phi\rangle$ instead of }u(\phi)) \tag{4.4}\] where the mapping is linear and continuous.
While we have motivated the action \(\langle u,\phi\rangle\) as meaning integration, this is not a requirement.
4.5.1 Linearity and continuity
Linearity is straightforward, and means \[ \langle u,\alpha\phi+\beta\psi\rangle=\alpha\langle u,\phi\rangle+\beta\langle u,\psi\rangle\qquad \forall\alpha,\beta\in\mathbb{R}\quad\forall\phi,\psi\in C_0^{\infty}(\mathbb{R}) \]
Continuity is slightly more technical: it means that if \(\phi_n\) is a sequence of test functions that converges to zero, \[ \phi_n(x)\to0\quad\text{ as }\quad n\to\infty \] then \[ \bra u,\phi_n\ket\to0 \tag{4.5}\] as a sequence of real numbers.
To show continuity, what we really need is to be able to switch the order of “the action of the distribution” (integration) and the limit, that is Equation 4.5 will hold if \[ \lim_{n\to\infty}\bra u,\phi_n\ket\;=\;\bra u,\lim_{n\to\infty}\phi_n\ket. \] It turns out that we can do this if the following holds:
\(\forall X>0\) there exists \(C>0\), and integer \(N\geq 0\), such that \[\vert\langle u,\phi\rangle\vert\leq C\sum_{m\leq N}\max_{-\infty\leq x\leq \infty}\left\vert\frac{\mathrm{d}^m\phi}{\mathrm{d}x^m}\right\vert \tag{4.6}\] \(\forall\phi\) with support in \([-X,X]\).
For our purposes, establishing Equation 4.6 is how we will show continuity; in fact, you can take Equation 4.6 as the definition of continuity.
4.5.2 Examples
\[ \langle \delta,\phi\rangle =\phi(0) \]
Linearity follows immediately: \(\langle\delta,\alpha\phi+\beta\psi\rangle=\alpha\phi(0)+\beta\psi(0)=\alpha\langle\delta,\phi\rangle+\beta\langle\delta,\psi\rangle\).
Continuity: check Equation 4.6: \[\vert\langle\delta,\phi\rangle\vert=\vert\phi(0)\vert\leq\max_{-X<x<X}\vert\phi(x)\vert\quad \forall\phi \mbox{ with support of }\phi \mbox{ in } [-X,X]. \]
i.e. condition Equation 4.6 is satisfied with \(C=1,\) \(N=0\).
Let \(a\in\mathbb{R},\;n\geq 0\). Define \(\langle D_n,\phi\rangle=\phi^{(n)}(a)\) (\(n\)th derivative). This is a distribution (to be proved in a problem sheet).
For any locally integrable function \(f(x)\), a natural distribution is defined by \[\langle f,\phi\rangle=\int_{-\infty}^{\infty}f(x)\phi(x)\mathrm{d}x\]
Check: the map is well defined (\(\langle f,\phi\rangle\in\mathbb{R}\;\forall\phi\in C_0^{\infty}(\mathbb{R})\)) and linear.
Continuity? Check Equation 4.6: let \(X>0\) be given. We claim Equation 4.6 holds with \[ C=C(X)=\int_{-X}^X\vert f(x)\vert\mathrm{d}x\quad\mbox{ and }\quad N=0: \] \[ \vert\langle f,\phi\rangle\vert=\left\vert\int_{-\infty}^{\infty}f(x)\phi(x)\mathrm{d}x\right\vert=\left\vert\int_{-X}^Xf(x)\phi(x)\mathrm{d}x\right\vert \leq\int_{-X}^X\vert f(x)\vert\mathrm{d}x\,\max_{-X<x<X}\vert\phi(x)\vert = C \max_{-\infty<x<\infty} \vert\phi(x)\vert \] by the estimation lemma.
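A numerical spot-check of this bound (a sketch with an arbitrarily chosen \(f\), not from the notes), taking \(X=1\):

```python
import numpy as np

# Spot-check the continuity estimate |⟨f, φ⟩| ≤ C(X) max|φ|,
# with C(X) = ∫_{-X}^{X} |f| dx, for one sample f and test function φ, X = 1.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

f = np.sin(3 * x) + x**2                       # an arbitrary integrable sample f
phi = np.where(np.abs(x) < 1,
               np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)

action = np.sum(f * phi) * dx                  # ⟨f, φ⟩ = ∫ f φ dx
C = np.sum(np.abs(f)) * dx                     # C(X) = ∫ |f| dx
bound = C * np.max(np.abs(phi))
print(abs(action), bound)                      # |⟨f, φ⟩| ≤ C max|φ|
```

The inequality holds termwise in the Riemann sums, so the discretised version is exact, not merely approximate.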
Different continuous functions induce different distributions.
For the Heaviside function \(H\), which is locally integrable but not continuous: \[ \langle H,\phi\rangle=\int_{-\infty}^{\infty}H(x)\phi(x)\mathrm{d}x=\int_0^{\infty}\phi(x)\mathrm{d}x \] One can check linearity and continuity as an exercise.
Different functions can, however, lead to the same distribution: two functions that differ only on a set of measure zero (e.g. at a single point) induce the same distribution.
Distributions induced by locally integrable functions are called regular distributions; those not of this form are called singular distributions.
The \(\delta\)-distribution is an example of a singular distribution.
4.5.3 Operations on distributions
Now we consider some operations that can be performed on distributions. Let \(u_1,u_2,u\) be distributions, and \(f_1,f_2,f\) be integrable functions (or the regular distributions induced by them). The notion of integration is not required for distributions, but the rules for distributions are consistent with those for locally integrable functions.
Linear combinations of distributions. Let \(\alpha_1,\alpha_2\in\mathbb{R}\).
\[\begin{align} &\langle\alpha_1f_1+\alpha_2f_2,\phi\rangle=\int_{-\infty}^{\infty}(\alpha_1f_1(x)+\alpha_2f_2(x))\phi(x)\mathrm{d}x,\\ &=\alpha_1\int_{-\infty}^{\infty}f_1(x)\phi(x)\mathrm{d}x+\alpha_2\int_{-\infty}^{\infty}f_2(x)\phi(x)\mathrm{d}x\\ &=\alpha_1\langle f_1,\phi\rangle+\alpha_2\langle f_2,\phi\rangle \end{align}\]
Thus, define \(\alpha_1u_1+\alpha_2u_2\) for general distributions \(u_1,u_2\) via \[ \langle\alpha_1 u_1+\alpha_2u_2,\phi\rangle\equiv\alpha_1\langle u_1,\phi\rangle+\alpha_2\langle u_2,\phi\rangle\quad \forall\phi \in C_0^{\infty}(\mathbb{R}) \]
If \(u_1,u_2\) are distributions, is \(\alpha_1u_1+\alpha_2 u_2\) a distribution? One needs to check linearity and continuity; we leave this as an exercise.
Differentiation of distributions. Differentiation follows the weak derivative formulated earlier. That is, for a general distribution \(u\), define \[ \langle u',\phi\rangle\equiv-\langle u,\phi '\rangle\qquad\forall \phi\in C_0^{\infty}(\mathbb{R}) \] If \(u\) is distribution, can we be sure that \(u':\phi\mapsto -\langle u,\phi '\rangle\) is also a distribution? (It is! – try it as an exercise.)
Let \(H\) be the Heaviside function, or the distribution it induces, i.e. \[\langle\underbrace{H}_{H\mbox{-distribution}},\phi\rangle\equiv\int_{-\infty}^{\infty}\underbrace{H(x)}_{H\mbox{-function}}\phi(x)\mathrm{d}x=\int_0^{\infty}\phi(x)\mathrm{d}x \] Show that \(H'=\delta\). \[\begin{array}{ll} \langle H',\phi\rangle&=-\langle H,\phi '\rangle \qquad\mbox{ Def. of derivative of a distribution}\\ &=-\int_{0}^{\infty}\phi '(x)\mathrm{d}x\qquad\mbox{ see earlier example}\\ &=-\phi\vert_{x=0}^{x={\infty}}\\ &=\phi(0)\qquad \phi\mbox{ has compact support}\\ &=\langle\delta,\phi\rangle\qquad\mbox{ Def. of $\delta$-distribution}\end{array} \]
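The same computation can be checked numerically (an illustrative sketch): for a bump test function, \(-\langle H,\phi'\rangle=-\int_0^\infty\phi'(x)\,\mathrm{d}x\) indeed returns \(\phi(0)\):

```python
import numpy as np

# Numerical check that ⟨H', φ⟩ = -⟨H, φ'⟩ = -∫_0^∞ φ'(x) dx equals φ(0).
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

phi = np.where(np.abs(x) < 1,
               np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)
dphi = np.gradient(phi, dx)        # numerical φ'

H = (x >= 0).astype(float)
action = -np.sum(H * dphi) * dx    # -⟨H, φ'⟩
print(action, np.exp(-1.0))        # φ(0) = e^{-1} for this bump
```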
Translation: by similar considerations as before, the upshot is (\(a\in\mathbb{R}\), \(u\) a distribution): \[\langle u(x-a),\phi(x)\rangle\overset{\text{chg of var}}{=}\langle u(y),\phi(y+a)\rangle=\langle u(x),\phi(x+a)\rangle \]
\[ \langle \delta(x-a),\phi(x)\rangle=\langle\delta(x),\phi(x+a)\rangle=\phi(a) \]
Multiplication: let \(a(x)\) be an infinitely differentiable function. We define \[\bra au,\phi\ket=\bra u,a\phi\ket.\]
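As a quick numerical illustration of the multiplication rule (a sketch, with \(\delta\) again replaced by a narrow Gaussian): the definition gives \(\bra a\delta,\phi\ket=\bra\delta,a\phi\ket=a(0)\phi(0)\), i.e. \(a(x)\delta(x)\) acts like \(a(0)\delta(x)\):

```python
import numpy as np

# Multiplication by smooth a(x): ⟨aδ, φ⟩ = ⟨δ, aφ⟩ = a(0) φ(0).
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

eps = 1e-3
delta_eps = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

a = np.cos(x) + 2.0                # smooth multiplier with a(0) = 3
phi = np.where(np.abs(x) < 1,
               np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)

value = np.sum(a * delta_eps * phi) * dx
print(value, 3 * np.exp(-1.0))     # close to a(0) φ(0) = 3 e^{-1}
```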
Convergence of a sequence of distributions. Let \(u,u_1,u_2,\dots\) be distributions. Convergence \(u_j\rightarrow u\) as \(j\rightarrow \infty\) means: \[ \lim_{j\rightarrow\infty}\langle u_j,\phi\rangle=\langle u,\phi\rangle\qquad \forall\phi\in C_0^{\infty}(\mathbb{R}) \]
Similarly, if \(u(\alpha)\) is a family of distributions with a continuous parameter \(\alpha\), then convergence \(u(\alpha)\rightarrow u(\alpha_0)\) as \(\alpha\rightarrow \alpha_0\) means: \[\lim_{\alpha\rightarrow\alpha_0}\langle u(\alpha),\phi\rangle=\langle u(\alpha_0),\phi\rangle\qquad \forall\phi\in C_0^{\infty}(\mathbb{R}) \]
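This notion of convergence can be illustrated numerically (a sketch, not from the notes) with normalised Gaussians \(u_j\) of shrinking width, which converge to \(\delta\) in the distributional sense:

```python
import numpy as np

# ⟨u_j, φ⟩ for Gaussians u_j of width 1/j: the sequence approaches ⟨δ, φ⟩ = φ(0).
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]

phi = np.where(np.abs(x) < 1,
               np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)

def action(j):
    u_j = j / np.sqrt(np.pi) * np.exp(-(j * x) ** 2)   # ∫ u_j dx = 1
    return np.sum(u_j * phi) * dx

values = [action(j) for j in (2, 8, 32, 128)]
print(values)  # tends to φ(0) = e^{-1} ≈ 0.3679
```

Note that pointwise the Gaussians do not converge at \(x=0\); the convergence is in the weak (distributional) sense only.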
4.6 Distributed solutions
Consider the equation \[ Lu\equiv a_2u''+a_1u'+a_0u=f. \] We have so far considered the classical solution: a twice continuously differentiable function \(u(x)\) that satisfies the differential equation identically, i.e. we can take derivatives of \(u\), substitute in, and the equation holds at every point. With distribution theory and the notion of a generalised function, we can now define a distributed solution. That is, if \(u\) and \(f\) are distributions, then \(Lu\) is a distribution, defined by the action \[ \begin{split} \langle Lu,\phi\rangle&=\bra a_2u'',\phi\ket+\bra a_1u',\phi\ket+\bra a_0u,\phi\ket\\ & =\bra u,(a_2\phi)''\ket-\bra u,(a_1\phi)'\ket+\bra u,a_0\phi\ket\overset{\text{ define}}{=}\bra u,L^*\phi\ket. \end{split} \] Here \(L^*\) is the formal adjoint operator. We say that \(u\) is a distributed solution to \(Lu=f\) if \[ \bra u,L^*\phi\ket=\bra f,\phi\ket \] holds for all test functions \(\phi\). We highlight again that a function need not be differentiable in the ordinary sense to satisfy this definition; hence, distributions provide well-defined solutions that may be problematic in the classical sense.
In particular, this construction of a distributed solution gives us a new way to interpret the Green’s function. Since \(\delta\) is really a distribution or a generalised function, the equation \(Lg=\delta(x-\xi)\) should be interpreted in the distributional sense, \[\bra Lg, \phi\ket =\bra \delta(x-\xi),\phi\ket\] or \[\bra g(x,\xi),L^*\phi\ket=\phi(\xi).\]
Moreover, since the Green’s function that we construct is not twice continuously differentiable, it is really a distributed solution. Alternatively, if we interpret \(Lg=\delta(x-\xi)\) as meaning that \(Lg=0\) everywhere that \(x\neq\xi\), then using the properties of \(\delta\) we can work purely in the “classical” sense. In fact, the final solution of \(Ly=f\), obtained by integration with \(g\), is continuous and a classical solution.
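To make the Green’s-function interpretation concrete, here is a numerical sketch for the simplest case \(L=\mathrm{d}^2/\mathrm{d}x^2\) on \((0,1)\) with \(g(0,\xi)=g(1,\xi)=0\); the explicit formula for \(g\) below is an assumption matching that model problem, not a construction from this section. Since \(L\) is self-adjoint here, \(L^*\phi=\phi''\), and for test functions supported in \((0,1)\) we expect \(\bra g,\phi''\ket=\phi(\xi)\) even though \(g\) is not twice differentiable at \(x=\xi\):

```python
import numpy as np

# Distributed-solution check for L = d^2/dx^2 on (0,1), g(0)=g(1)=0:
# g(x,ξ) = x(ξ-1) for x ≤ ξ and ξ(x-1) for x > ξ, so ⟨g, φ''⟩ should be φ(ξ).
x = np.linspace(0.0, 1.0, 400001)
dx = x[1] - x[0]
xi = 0.4

g = np.where(x <= xi, x * (xi - 1.0), xi * (x - 1.0))

# Bump test function supported in (0.1, 0.9).
c, w = 0.5, 0.4
phi = np.where(np.abs(x - c) < w,
               np.exp(-w**2 / np.maximum(w**2 - (x - c)**2, 1e-300)), 0.0)
d2phi = np.gradient(np.gradient(phi, dx), dx)   # numerical φ''

lhs = np.sum(g * d2phi) * dx                    # ⟨g, L*φ⟩ with L* = d²/dx²
phi_xi = np.exp(-w**2 / (w**2 - (xi - c)**2))   # φ(ξ) in closed form
print(lhs, phi_xi)  # the two agree
```

The kink in \(g\) at \(x=\xi\) causes no trouble: the derivatives land on the smooth test function, exactly as the distributional framework intends.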
4.7 Final thoughts
If you are interested in distribution theory, it lies at the core of functional analysis, and the idea of weak formulations is central to finite element methods. For this course, however, distribution theory is somewhat of a detour. One could write things in a distributional sense whenever we encounter a ‘delta function’, but we can equally regard \(\delta\) as a limit of continuous functions satisfying certain properties, in effect translating back to a classical system. Unless we are specifically interested in a distributional aspect, the latter will be our approach.