
Contents
Preface
Chapter 1. Probability, measure and integration
1.1. Probability spaces and σ-fields
1.2. Random variables and their expectation
1.3. Convergence of random variables
1.4. Independence, weak convergence and uniform integrability
Chapter 2. Conditional expectation and Hilbert spaces
2.1. Conditional expectation: existence and uniqueness
2.2. Hilbert spaces
2.3. Properties of the conditional expectation
2.4. Regular conditional probability
Chapter 3. Stochastic Processes: general theory
3.1. Definition, distribution and versions
3.2. Characteristic functions, Gaussian variables and processes
3.3. Sample path continuity
Chapter 4. Martingales and stopping times
4.1. Discrete time martingales and filtrations
4.2. Continuous time martingales and right continuous filtrations
4.3. Stopping times and the optional stopping theorem
4.4. Martingale representations and inequalities
4.5. Martingale convergence theorems
4.6. Branching processes: extinction probabilities
Chapter 5. The Brownian motion
5.1. Brownian motion: definition and construction
5.2. The reflection principle and Brownian hitting times
5.3. Smoothness and variation of the Brownian sample path
Chapter 6. Markov, Poisson and Jump processes
6.1. Markov chains and processes
6.2. Poisson process, Exponential inter-arrivals and order statistics
6.3. Markov jump processes, compound Poisson processes
Bibliography
Index

CHAPTER 1
Probability, measure and integration
This chapter is devoted to the mathematical foundations of probability theory.
Section 1.1 introduces the basic measure theory framework, namely, the probability space and the σ-fields of events in it. The next building blocks are random
variables, introduced in Section 1.2 as measurable functions ω ↦ X(ω). This allows
us to define the important concept of expectation as the corresponding Lebesgue
integral, extending the horizon of our discussion beyond the special functions and
variables with density, to which elementary probability theory is limited. As much
of probability theory is about asymptotics, Section 1.3 deals with various notions
of convergence of random variables and the relations between them. Section 1.4
concludes the chapter by considering independence and distribution, the two fundamental aspects that differentiate probability from (general) measure theory, as well
as the related and highly useful technical tools of weak convergence and uniform
integrability.
1.1. Probability spaces and σ-fields
We shall define here the probability space (Ω, F, P) using the terminology of measure theory. The sample space Ω is the set of all possible outcomes ω ∈ Ω of some
random experiment or phenomenon. Probabilities are assigned by a set function
A ↦ P(A) to A in a subset F of all possible sets of outcomes. The event space F
represents both the amount of information available as a result of the experiment
conducted and the collection of all events of possible interest to us. A pleasant
mathematical framework results by imposing on F the structural conditions of a
σ-field, as done in Subsection 1.1.1. The most common and useful choices for this
σ-field are then explored in Subsection 1.1.2.
1.1.1. The probability space (Ω, F, P). We use 2^Ω to denote the set of all
possible subsets of Ω. The event space is thus a subset F of 2^Ω, consisting of all
allowed events, that is, those events to which we shall assign probabilities. We next
define the structural conditions imposed on F .
Definition 1.1.1. We say that F ⊆ 2^Ω is a σ-field (or a σ-algebra), if
(a) Ω ∈ F,
(b) if A ∈ F then A^c ∈ F as well (where A^c = Ω \ A),
(c) if Ai ∈ F for i = 1, 2, . . . then also ∪_{i=1}^∞ Ai ∈ F.
Remark. Using DeMorgan's law you can easily check that if Ai ∈ F for i = 1, 2, . . .
and F is a σ-field, then also ∩_i Ai ∈ F. Similarly, you can show that a σ-field is
closed under countably many elementary set operations.
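For a finite Ω the three conditions of Definition 1.1.1 can be verified by brute force. The sketch below is our own illustration (the function name is not from the text); it relies on the fact that, for a finite collection, closure under countable unions reduces to closure under pairwise unions, with intersections then covered by DeMorgan's law.

```python
def is_sigma_field(omega, F):
    """Check Definition 1.1.1 for a finite collection F of subsets of omega."""
    omega = frozenset(omega)
    F = {frozenset(A) for A in F}
    if omega not in F:                      # (a) Omega belongs to F
        return False
    for A in F:
        if omega - A not in F:              # (b) closed under complement
            return False
        for B in F:
            if A | B not in F:              # (c) closed under (pairwise) union
                return False
    return True

omega = {1, 2, 3}
trivial = [set(), omega]                    # the trivial sigma-field
F = [set(), {1}, {2, 3}, {1, 2, 3}]         # a genuine sigma-field on omega
broken = [set(), {1}, {1, 2, 3}]            # missing the complement {2, 3}

print(is_sigma_field(omega, trivial))       # True
print(is_sigma_field(omega, F))             # True
print(is_sigma_field(omega, broken))        # False
```

The last collection fails precisely because the complement of {1} is absent, illustrating how condition (b) constrains which collections of events are admissible.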
Definition 1.1.2. A pair (Ω, F) with F a σ-field of subsets of Ω is called a measurable space. Given a measurable space, a probability measure P is a function
P : F → [0, 1], having the following properties:
(a) 0 ≤ P(A) ≤ 1 for all A ∈ F.
(b) P(Ω) = 1.
(c) (Countable additivity) P(A) = Σ_{n=1}^∞ P(An) whenever A = ∪_{n=1}^∞ An is a
countable union of disjoint sets An ∈ F (that is, An ∩ Am = ∅ for all n ≠ m).
A probability space is a triplet (Ω, F, P), with P a probability measure on the
measurable space (Ω, F).
The next exercise collects some of the fundamental properties shared by all probability measures.
Exercise 1.1.3. Let (Ω, F , P) be a probability space and A, B, Ai events in F .
Prove the following properties of every probability measure.
(a) Monotonicity. If A ⊆ B then P(A) ≤ P(B).
(b) Sub-additivity. If A ⊆ ∪i Ai then P(A) ≤ Σi P(Ai).
(c) Continuity from below: If Ai ↑ A, that is, A1 ⊆ A2 ⊆ . . . and ∪i Ai = A,
then P(Ai) ↑ P(A).
(d) Continuity from above: If Ai ↓ A, that is, A1 ⊇ A2 ⊇ . . . and ∩i Ai = A,
then P(Ai) ↓ P(A).
(e) Inclusion-exclusion rule:
P(∪_{i=1}^n Ai) = Σ_i P(Ai) − Σ_{i<j} P(Ai ∩ Aj) + Σ_{i<j<k} P(Ai ∩ Aj ∩ Ak)
− · · · + (−1)^{n+1} P(A1 ∩ · · · ∩ An)
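The inclusion-exclusion rule of part (e) can be checked numerically on a small finite space. The following sketch (our own illustration, not part of the exercise) uses the uniform measure on Ω = {0, . . . , 9} and compares P of the union with the alternating sum over all non-empty subcollections of events.

```python
from itertools import combinations

# Uniform measure on Omega = {0, ..., 9}: P(A) = |A| / 10.
omega = set(range(10))
P = lambda A: len(A) / len(omega)

events = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]   # n = 3 events

# Left side: P of the union of the events.
lhs = P(set().union(*events))

# Right side: alternating sum, one term per non-empty subcollection.
n = len(events)
rhs = 0.0
for k in range(1, n + 1):
    for sub in combinations(events, k):
        inter = set(omega)
        for A in sub:
            inter &= A                      # intersection over the subcollection
        rhs += (-1) ** (k + 1) * P(inter)

print(lhs, rhs)   # both equal 0.8
```

Here the union is {0, 1, . . . , 7}, so P = 0.8, while the alternating sum is 1.1 − 0.3 + 0 = 0.8, as the rule predicts.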
The σ-field F always contains at least the set Ω and its complement, the empty
set ∅. Necessarily, P(Ω) = 1 and P(∅) = 0. So, if we take F0 = {∅, Ω} as our
σ-field, then we are left with no degrees of freedom in the choice of P. For this
reason we call F0 the trivial σ-field.
Fixing Ω, we may expect that the larger the σ-field we consider, the more freedom
we have in choosing the probability measure. This indeed holds to some extent,
that is, as long as we have no problem satisfying the requirements (a)-(c) in the
definition of a probability measure. For example, a natural question is when should
we expect the maximal possible σ-field F = 2^Ω to be useful?
Example 1.1.4. When the sample space Ω is finite we can and typically shall take
F = 2^Ω. Indeed, in such situations we assign a probability pω > 0 to each ω ∈ Ω,
making sure that Σ_{ω∈Ω} pω = 1. Then, it is easy to see that taking P(A) = Σ_{ω∈A} pω
for any A ⊆ Ω results in a probability measure on (Ω, 2^Ω). For instance, when we
consider a single coin toss we have Ω1 = {H, T} (ω = H if the coin lands on its head
and ω = T if it lands on its tail), and F1 = {∅, Ω1, {H}, {T}}. Similarly, when we
consider any finite number of coin tosses, say n, we have Ωn = {(ω1, . . . , ωn) : ωi ∈
{H, T}, i = 1, . . . , n}, that is, Ωn is the set of all possible n-tuples of coin tosses,
while Fn = 2^{Ωn} is the collection of all possible sets of n-tuples of coin tosses. The
same construction applies even when Ω is infinite, provided it is countable. For
instance, when Ω = {0, 1, 2, . . .} is the set of all non-negative integers and F = 2^Ω,
we get the Poisson probability measure of parameter λ > 0 when starting from
p_k = (λ^k / k!) e^{−λ} for k = 0, 1, 2, . . ..
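As a quick sanity check of the Poisson construction (a sketch of ours, truncating the infinite series far in the tail where the terms are negligible), the weights p_k sum to one, and countable additivity then assigns a probability to any subset of outcomes:

```python
import math

lam = 3.0   # the parameter lambda > 0

# p_k = (lambda^k / k!) e^{-lambda} on Omega = {0, 1, 2, ...}
p = lambda k: lam ** k / math.factorial(k) * math.exp(-lam)

# Total mass: truncating at k = 50 leaves a tail far below 1e-12 for lam = 3.
total = sum(p(k) for k in range(50))
print(abs(total - 1.0) < 1e-12)            # True

# P(A) for A = {even outcomes}, by countable additivity over disjoint {k}.
P_even = sum(p(k) for k in range(0, 50, 2))
print(P_even)                              # (1 + e^{-2*lam}) / 2, about 0.5012
```

The closed form for P_even follows from splitting e^λ + e^{−λ} into even-order terms, a standard manipulation of the exponential series.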

When Ω is uncountable such a strategy as in Example 1.1.4 will no longer work.
The problem is that if we take pω = P({ω}) > 0 for uncountably many values of
ω, we shall end up with P(Ω) = ∞. Of course we may define everything as before
on a countable subset Ω̂ of Ω and demand that P(A) = P(A ∩ Ω̂) for each A ⊆ Ω.
Excluding such trivial cases, to genuinely use an uncountable sample space Ω we
need to restrict our σ-field F to a strict subset of 2^Ω.
1.1.2. Generated and Borel σ-fields. Enumerating the sets in the σ-field
F is not a realistic option for uncountable Ω. Instead, as we see next, the most
common construction of σ-fields is then by implicit means. That is, we demand
that certain sets (called the generators) be in our σ-field, and take the smallest
possible collection for which this holds.
Definition 1.1.5. Given a collection of subsets Aα ⊆ Ω, where α ∈ Γ is a not
necessarily countable index set, we denote the smallest σ-field F such that Aα ∈ F
for all α ∈ Γ by σ({Aα}) (or sometimes by σ(Aα, α ∈ Γ)), and call σ({Aα}) the
σ-field generated by the collection {Aα}. That is,
σ({Aα}) = ∩ {G : G ⊆ 2^Ω is a σ-field, Aα ∈ G ∀α ∈ Γ}.
Definition 1.1.5 works because the intersection of (possibly uncountably many)
σ-fields is also a σ-field, which you will verify in the following exercise.
Exercise 1.1.6. Let Aα be a σ-field for each α ∈ Γ, an arbitrary index set. Show
that ∩_{α∈Γ} Aα is a σ-field. Provide an example of two σ-fields F and G such that
F ∪ G is not a σ-field.
Different sets of generators may result in the same σ-field. For example, taking
Ω = {1, 2, 3} it is not hard to check that σ({1}) = σ({2, 3}) = {∅, {1}, {2, 3}, {1, 2, 3}}.
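On a finite Ω the generated σ-field can be computed directly, by closing the generators under complements and unions until nothing new appears. The helper below is our own illustration (not part of the text); it recovers the equality σ({1}) = σ({2, 3}) just stated.

```python
def generated_sigma_field(omega, generators):
    """Smallest sigma-field on a finite omega containing the generators,
    obtained by iterating closure under complements and pairwise unions."""
    omega = frozenset(omega)
    F = {frozenset(), omega} | {frozenset(A) for A in generators}
    while True:
        new = {omega - A for A in F} | {A | B for A in F for B in F}
        if new <= F:          # nothing new: F is closed, hence a sigma-field
            return F
        F |= new

omega = {1, 2, 3}
print(sorted(map(sorted, generated_sigma_field(omega, [{1}]))))
# [[], [1], [1, 2, 3], [2, 3]]
print(generated_sigma_field(omega, [{1}]) ==
      generated_sigma_field(omega, [{2, 3}]))   # True
```

Since Ω is finite the iteration must terminate (the powerset is finite), and closure under complements and pairwise unions already gives closure under countable unions.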
Example 1.1.7. An example of a generated σ-field is the Borel σ-field on R. It
may be defined as B = σ({(a, b) : a, b ∈ R}).
The following lemma lays out the strategy one employs to show that the σ-fields
generated by two different collections of sets are actually identical.
Lemma 1.1.8. If two different collections of generators {Aα } and {Bβ } are such
that Aα ∈ σ({Bβ }) for each α and Bβ ∈ σ({Aα }) for each β, then σ({Aα }) =
σ({Bβ }).
Proof. Recall that if a collection of sets A is a subset of a σ-field G, then by
Definition 1.1.5 also σ(A) ⊆ G. Applying this for A = {Aα} and G = σ({Bβ}), our
assumption that Aα ∈ σ({Bβ}) for all α results in σ({Aα}) ⊆ σ({Bβ}). Similarly,
our assumption that Bβ ∈ σ({Aα}) for all β results in σ({Bβ}) ⊆ σ({Aα}).
Taken together, we see that σ({Aα }) = σ({Bβ }).
For instance, considering BQ = σ({(a, b) : a, b ∈ Q}), we have by the preceding
lemma that BQ = B as soon as we show that any interval (a, b) is in BQ . To verify
this fact, note that for any real a < b there are rational numbers qn < rn such that
qn ↓ a and rn ↑ b, hence (a, b) = ∪n (qn , rn ) ∈ BQ . Following the same approach,
you are to establish next a few alternative definitions for the Borel σ-field B.
Exercise 1.1.9. Verify the alternative definitions of the Borel σ-field B:
σ({(a, b) : a < b ∈ R}) = σ({[a, b] : a < b ∈ R}) = σ({(−∞, b] : b ∈ R})
= σ({(−∞, b] : b ∈ Q}) = σ({O ⊆ R open })
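As a hint for the equality between the σ-fields generated by open and by closed intervals (a sketch of ours, in the spirit of the rational-endpoints argument above), Lemma 1.1.8 applies once each generator of one collection is expressed by countable operations on the other:

```latex
[a,b] = \bigcap_{n=1}^{\infty} \Big( a - \tfrac{1}{n},\; b + \tfrac{1}{n} \Big),
\qquad
(a,b) = \bigcup_{n \ge n_0} \Big[ a + \tfrac{1}{n},\; b - \tfrac{1}{n} \Big],
```

where n_0 is chosen large enough that a + 1/n_0 < b − 1/n_0; the remaining equalities follow by similar countable manipulations.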
