4 Solutions for Chapter 4
Warm-up
Exercise 4.1 First, as always, \(\mathcal{F}_0 = \{ \emptyset, \Omega\}\). Here \(\Omega = \{0,1,2\}^3\), the set of all sequences of three rolls; to keep the notation light, we describe events in \(\mathcal{F}_t\) by their first \(t\) coordinates only.
Next, \(\mathcal{F}_1 = \{ \emptyset, \{0\}, \{1\}, \{2\}, \{0,1\}, \{0,2\}, \{1,2\}, \Omega \}\).
To find \(\mathcal{F}_2\), we need \(\{00\}, \{01\}, \{02\}, \{10\}, \{11\}, \{12\}, \{20\}, \{21\}, \{22\},\) as well as all unions of them, such as \(\{00, 01,02\}\) or \(\{ 00, 01, 02, 10, 11, 12, 20\}\). In total there are \(2^9 = 512\) elements. Note that, if we use the shorthand \(\{0\} = \{0 \cdot\} = \{ 00, 01, 02\}\), then every element of \(\mathcal{F}_1\) will also be an element of \(\mathcal{F}_2\).
I do not recommend trying to construct \(\mathcal{F}_3\) by hand. There are \(3^3 = 27\) possible outcomes, generating \(2^{27} = 134,217,728\) elements of the \(\sigma\)-algebra.
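As a quick sanity check (not part of the solution), here is a short Python sketch that builds \(\mathcal{F}_2\) by brute force as the power set of the nine length-two strings, confirming both the count \(2^9 = 512\) and that, under the cylinder shorthand, \(\mathcal{F}_1 \subseteq \mathcal{F}_2\):

\begin{verbatim}
from itertools import combinations, product

# The atoms of F_2: the 3^2 = 9 possible strings of the first two rolls.
atoms = ["".join(p) for p in product("012", repeat=2)]

# Every event in F_2 is a union of atoms, i.e. a subset of `atoms`.
events = {frozenset(c) for r in range(len(atoms) + 1)
          for c in combinations(atoms, r)}
print(len(events))  # 2**9 = 512

# The F_1 event "first roll is 0" is the cylinder set {0.} = {00, 01, 02}.
cyl0 = frozenset(a for a in atoms if a[0] == "0")
print(cyl0 in events)  # True: under the shorthand, F_1 sits inside F_2
\end{verbatim}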
Exercise 4.2 First, it is clear that \(S_n^2 -n = (X_1 + \cdots +X_n)^2 -n\) is a function of \(X_1, \ldots, X_n\). Therefore, it is measurable with respect to the \(\sigma\)-algebra \(\mathcal{F}_n = \sigma(X_1, \ldots, X_n)\).
In order to check the martingale condition we should try to write \(M_n = S_n^2 -n\) recursively in terms of \(M_{n-1} = S_{n-1}^2- (n-1)\). We have that \(S_n = S_{n-1} + X_n\), so \(S_n^2 = S_{n-1}^2 + 2 S_{n-1}X_n + X_n^2\). Consequently, \[ \begin{aligned} M_n & = S_n^2 - n \\ &= S_{n-1}^2 - (n-1) + 2S_{n-1}X_n + X_n^2 - 1 \\ & = M_{n-1} + 2S_{n-1}X_n + X_n^2 - 1. \end{aligned}\] Now observe that \(X_n\) is independent of \(\mathcal{F}_{n-1}\), whereas \(S_{n-1}\) and \(M_{n-1}\) are measurable w.r.t.~\(\mathcal{F}_{n-1}\), which implies \[\begin{align*} \E[M_{n-1} \mid \mathcal{F}_{n-1}] &= M_{n-1} && (\text{taking out what is known}) \\ \E[ S_{n-1} X_n \mid \mathcal{F}_{n-1}] &= S_{n-1} \E[X_n] && (\text{taking out what is known and independence})\\ \E[X_n^2 \mid \mathcal{F}_{n-1}] &= \E[X_n^2] && (\text{independence}). \end{align*}\] Consequently, by linearity, \[\begin{align*} \E[ M_n \mid \mathcal{F}_{n-1}] &= M_{n-1} + 2S_{n-1}\E[X_n] + \E[X_n^2] - 1 \\ & = M_{n-1} + 0 + \Var[X_n] - 1 = M_{n-1} \end{align*}\] as required.
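As a numerical sanity check (a sketch, assuming for concreteness that the \(X_i\) are fair \(\pm 1\) steps, which indeed have mean 0 and variance 1), one can verify the one-step identity \(\mathbb{E}[M_n \mid \mathcal{F}_{n-1}] = M_{n-1}\) directly by averaging over the two equally likely values of \(X_n\):

\begin{verbatim}
import random

random.seed(0)
n = 10
# Fix an arbitrary history X_1, ..., X_{n-1} of fair +/-1 steps.
X = [random.choice([-1, 1]) for _ in range(n - 1)]
S_prev = sum(X)
M_prev = S_prev**2 - (n - 1)

# E[M_n | F_{n-1}]: average M_n = S_n^2 - n over X_n in {-1, +1}.
M_cond = 0.5 * ((S_prev + 1)**2 - n) + 0.5 * ((S_prev - 1)**2 - n)
print(M_cond == M_prev)  # True, for every choice of history
\end{verbatim}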
Main problems
Exercise 4.3 Write \(M_{n+1} = M_n R\) where \(R = \frac{\e^{\theta X_{n+1}}}{m(\theta)}\).
The random variable \(R\) is independent of \(\mathcal{F}_n\), and \(M_n\) is \(\mathcal{F}_n\) measurable, so \[\begin{align*} \mathbb{E}[M_{n+1} \mid \mathcal{F}_n] & = M_n \mathbb{E}[ R \mid \mathcal{F}_n] && (\text{taking out what is known and independence}) \\ & = M_n \mathbb{E}[R] && (\text{independence}). \end{align*}\]
Also, \[\mathbb{E}[R] = \frac{1}{m(\theta)} \mathbb{E}[ \e^{\theta X_{n+1}}] = \frac{m(\theta)}{m(\theta)} = 1 \] as the law of \(X_{n+1}\) is the same as \(X_1\). So, \[ \mathbb{E}[M_{n+1} \mid \mathcal{F}_n] = M_n \mathbb{E}[R] = M_n. \]
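As a quick check of the key step \(\mathbb{E}[R] = 1\) (a sketch, assuming for concreteness that the \(X_i\) are fair \(\pm 1\) steps, so that \(m(\theta) = \cosh\theta\); any other step distribution works the same way with its own \(m(\theta)\)):

\begin{verbatim}
import math

theta = 0.7
m = math.cosh(theta)  # m(theta) = E[exp(theta X)] for fair +/-1 steps

# E[R] = E[exp(theta X)] / m(theta), averaged over X in {-1, +1}:
ER = 0.5 * math.exp(theta) / m + 0.5 * math.exp(-theta) / m
print(abs(ER - 1) < 1e-12)  # True: E[R] = 1, so (M_n) is a martingale
\end{verbatim}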
Exercise 4.4
It should be clear that \(X_t\) only depends on the first \(t\) coin tosses, so the sequence is adapted, and that \(\mathbb{E}[\vert X_t \vert] < \infty\) always holds, since \(X_t\) takes only finitely many values.
Let \(\tau\) be the first time that a coin is a head, i.e., \(\tau := \min \{ k \in \N : \omega_k = \h \}\).
Now, either \(\tau \leq t\), which means that by time \(t\) we have stopped betting and \[X_t = X_\tau = -\sum_{i=1}^{\tau-1} 2^{i-1} + 2^{\tau-1} = -(2^{\tau -1}-1) + 2^{\tau-1} = 1\] (because \(\omega_1=\dots= \omega_{\tau-1} = \T\) and \(\omega_\tau = \h\)); or \(\tau > t\), which means that \(\omega_1 = \dots = \omega_t = \T\), and \[X_t = -\sum_{i=1}^t 2^{i-1} = -(2^t -1) = 1-2^t.\]
We need to show that \(\mathbb{E}[X_t \mid \cF_{t-1}] = X_{t-1}\) for all \(t \geq 1\), assuming the coin tosses are all fair (heads and tails equally likely).
For \(t=1\), it is easy to see that \[\mathbb{E}[X_1\mid \cF_0 ] = \mathbb{E}[X_1] = \frac12 X_1(\h) + \frac12 X_1(\T) = 0 = X_0,\] and for \(t>1\) the conditional expectation \(\mathbb{E}[X_t \mid \cF_{t-1}]\) is a random variable that depends on the first \(t-1\) coins, given by \[\begin{split} \mathbb{E}[X_t \mid \cF_{t-1}](\omega_1\dots\omega_{t-1}) &= \frac12 X_t(\omega_1\dots \omega_{t-1}\h) + \frac12 X_t(\omega_1\dots\omega_{t-1}\T)\\ &= \begin{cases} \frac12 + \frac12 (1-2^t) = 1-2^{t-1} & \text{if $\omega_1 = \dots = \omega_{t-1} = \T$},\\ \frac12 + \frac12 = 1 & \text{otherwise}, \end{cases} \end{split} \] which in both cases equals \(X_{t-1}(\omega_1\dots\omega_{t-1})\).
(You might have seen this martingale before if you’ve taken Stochastic Processes, as an example of a martingale for which the Optional Stopping Theorem doesn’t apply: since \(X_\tau = 1\) with probability 1 and \(X_0 = 0\), we have \(\mathbb{E}[X_\tau] = 1 \neq 0 = \mathbb{E}[X_0]\).
Although this strategy looks like it achieves a guaranteed profit of \(X_\tau = 1\), its success relies both on there being no upper limit on the size of the bets that can be made and on the gambler having an unlimited amount of money with which to continue betting until the first head occurs; typically neither of these holds in practice!
Indeed, it’s possible to show that the expected exposure when employing this strategy (i.e., the expected maximum amount owed by the gambler before the first head occurs) is infinite: \(\mathbb{E}[ \displaystyle\min_{ t < \tau }X_t] = \mathbb{E}[X_{\tau-1}] = -\infty\).)
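To make the remark concrete, here is a small simulation sketch of the doubling strategy (assuming a fair coin and no betting limits; the run count is an arbitrary choice). Every run ends with \(X_\tau = 1\), while the sample average of the worst exposure \(\min_{t<\tau} X_t = X_{\tau-1}\) is negative and fails to stabilise as the number of runs grows, reflecting \(\mathbb{E}[X_{\tau-1}] = -\infty\):

\begin{verbatim}
import random

random.seed(1)

def run():
    """Bet until the first head; return (profit at tau, worst exposure)."""
    wealth, stake, worst = 0, 1, 0
    while True:
        worst = min(worst, wealth)   # wealth X_{t-1} before this toss
        if random.random() < 0.5:    # head: win the current stake, stop
            return wealth + stake, worst
        wealth -= stake              # tail: lose the stake...
        stake *= 2                   # ...and double the next bet

results = [run() for _ in range(100_000)]
print(all(profit == 1 for profit, _ in results))  # True: X_tau = 1 always
print(sum(w for _, w in results) / len(results))  # negative, non-convergent
\end{verbatim}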
Exercise 4.5
From the representation of \(V_t-V_0\), we find that \[V_t - V_{t-1} = (V_t - V_0) - (V_{t-1} - V_0) = x_t(B_t - B_{t-1}) + y_t(S_t - S_{t-1}). \tag{4.1} \] Since \(V_{t-1} = x_t B_{t-1} + y_t S_{t-1}\) by definition, we see that the previous equation becomes \(V_t - V_{t-1} = x_t B_t + y_t S_t - V_{t-1}\). So \(V_t = x_t B_t + y_t S_t\), as required.
The integrability and adaptedness follow from the checks we made in the notes (you should check them yourself…)
The condition that \(V_t/(1+r)^t\) is a martingale under \(\mathbb{Q}\) is that \[\mathbb{E}_{\mathbb{Q}} [ V_t \mid \mathcal{F}_{t-1}] = (1+r)V_{t-1}.\] This is the same as \(\mathbb{E}_{\mathbb{Q}} [ V_t -V_{t-1} \mid \mathcal{F}_{t-1}] = rV_{t-1}\), because \(V_{t-1}\) is measurable w.r.t. \(\mathcal{F}_{t-1}\) (taking out what is known).
Recall that \(x_t\) and \(y_t\) are measurable with respect to \(\mathcal{F}_{t-1}\).
Also, \(\mathbb{E}_{\mathbb{Q}} [ S_t \mid \mathcal{F}_{t-1}] = (1+r) S_{t-1}\) by Theorem 4.1 in the lecture notes and \(B_t = (1+r)B_{t-1}\).
From equation (4.1) we find that \[ \begin{aligned} V_t - V_{t-1} & = x_t (B_t-B_{t-1}) + y_t (S_t-S_{t-1}) \\ & = x_t r B_{t-1} + y_t (S_t - S_{t-1}). \end{aligned} \] So, \[ \begin{aligned} \mathbb{E}_{\mathbb{Q}} [ V_t -V_{t-1} \mid \mathcal{F}_{t-1}] &= \mathbb{E}_{\mathbb{Q}} [ x_t (B_t-B_{t-1}) + y_t (S_t-S_{t-1}) \mid \mathcal{F}_{t-1}] \\ &= r \mathbb{E}_{\mathbb{Q}}[x_t \mid \mathcal{F}_{t-1}] B_{t-1} + \mathbb{E}_{\mathbb{Q}} [ y_t (S_t - S_{t-1}) \mid \mathcal{F}_{t-1}] \\ (\text{taking out what is known}) &= r x_t B_{t-1} + y_t \left ( \mathbb{E}_{\mathbb{Q}}[S_{t} \mid \mathcal{F}_{t-1}] - S_{t-1}\right )\\ (\text{by Theorem 4.1}) & = r (x_t B_{t-1} + y_t S_{t-1}) \\ & = r V_{t-1}. \end{aligned} \]
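A one-period numerical check may help here (a sketch; the model parameters \(u\), \(d\), \(r\) and the holdings \(x\), \(y\) below are arbitrary placeholders, and \(q = (1+r-d)/(u-d)\) is the usual binomial-model risk-neutral probability behind Theorem 4.1):

\begin{verbatim}
u, d, r = 1.2, 0.9, 0.05
q = (1 + r - d) / (u - d)   # risk-neutral up-probability

S_prev, B_prev = 100.0, 1.0
x, y = 3.0, 2.0             # any F_{t-1}-measurable holdings
V_prev = x * B_prev + y * S_prev

# E_Q[V_t | F_{t-1}] computed directly over the two branches:
V_up   = x * (1 + r) * B_prev + y * u * S_prev
V_down = x * (1 + r) * B_prev + y * d * S_prev
EV = q * V_up + (1 - q) * V_down
print(abs(EV - (1 + r) * V_prev) < 1e-9)  # True: E_Q[V_t|F_{t-1}] = (1+r)V_{t-1}
\end{verbatim}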
Exercise 4.6 Recall that \(M_t = V_t/(1+r)^t\) is a martingale under \(\mathbb{Q}\), and therefore \[ \mathbb{E}_{\mathbb{Q}}[M_n \mid \mathcal{F}_m] = M_m\] for all \(m < n\), where \(\mathcal{F}_n\) is the \(\sigma\)-algebra generated by \(S_0, \ldots, S_n\). (This can be seen by applying the iterated conditioning property to the martingale condition.) Taking \(n = T\) and \(m = t\), and clearing denominators, the identity in (a) follows. For the identity in (b), take \(n = t\) and \(m = 0\).
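To illustrate identity (b) numerically (a sketch in an assumed two-period binomial model; the parameters and the call payoff below are placeholders, not taken from the exercise), one can compute \(V_0 = \mathbb{E}_{\mathbb{Q}}[V_T]/(1+r)^T\) by summing over paths:

\begin{verbatim}
from itertools import product

u, d, r, S0, K, T = 1.2, 0.9, 0.05, 100.0, 95.0, 2
q = (1 + r - d) / (u - d)   # risk-neutral up-probability

price = 0.0
for path in product([u, d], repeat=T):
    ST, prob = S0, 1.0
    for move in path:
        ST *= move
        prob *= q if move == u else 1 - q
    price += prob * max(ST - K, 0.0)  # payoff V_T of a call with strike K
price /= (1 + r) ** T                 # discount: V_0 = E_Q[V_T] / (1+r)^T
print(price)
\end{verbatim}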