Probability Theory and Stochastic Processes - PTSP

- Probability, Measure and Integration (2–23)
- Uniform Integrability, Limits and Expectation (24–29)
- Conditional Expectation and Hilbert Spaces (30–43)
- Stochastic Processes (44–62)
- Martingales and Stopping Times (63–90)
- The Brownian Motion (91–106)
- Markov, Poisson and Jump Processes (107–122)

1. PROBABILITY, MEASURE AND INTEGRATION

Definition 1.1.2. A pair (Ω, F) with F a σ-field of subsets of Ω is called a measurable space. Given a measurable space, a probability measure P is a function P : F → [0, 1], having the following properties:
(a) 0 ≤ P(A) ≤ 1 for all A ∈ F.
(b) P(Ω) = 1.
(c) (Countable additivity) P(A) = Σ_{n=1}^∞ P(A_n) whenever A = ∪_{n=1}^∞ A_n is a countable union of disjoint sets A_n ∈ F (that is, A_n ∩ A_m = ∅ for all n ≠ m).

A probability space is a triplet (Ω, F, P), with P a probability measure on the measurable space (Ω, F).

The next exercise collects some of the fundamental properties shared by all probability measures.

Exercise 1.1.3. Let (Ω, F, P) be a probability space and A, B, A_i events in F. Prove the following properties of every probability measure.
(a) Monotonicity: if A ⊆ B then P(A) ≤ P(B).
(b) Sub-additivity: if A ⊆ ∪_i A_i then P(A) ≤ Σ_i P(A_i).
(c) Continuity from below: if A_i ↑ A, that is, A_1 ⊆ A_2 ⊆ … and ∪_i A_i = A, then P(A_i) ↑ P(A).
(d) Continuity from above: if A_i ↓ A, that is, A_1 ⊇ A_2 ⊇ … and ∩_i A_i = A, then P(A_i) ↓ P(A).
(e) Inclusion-exclusion rule:

P(∪_{i=1}^n A_i) = Σ_i P(A_i) − Σ_{i<j} P(A_i ∩ A_j) + Σ_{i<j<k} P(A_i ∩ A_j ∩ A_k) − … + (−1)^{n+1} P(A_1 ∩ … ∩ A_n).

The σ-field F always contains at least the set Ω and its complement, the empty set ∅. Necessarily, P(Ω) = 1 and P(∅) = 0. So, if we take F_0 = {∅, Ω} as our σ-field, then we are left with no degrees of freedom in the choice of P. For this reason we call F_0 the trivial σ-field. Fixing Ω, we may expect that the larger the σ-field we consider, the more freedom we have in choosing the probability measure. This indeed holds to some extent, that is, as long as we have no problem satisfying the requirements (a)–(c) in the definition of a probability measure. For example, a natural question is: when should we expect the maximal possible σ-field F = 2^Ω to be useful?

Example 1.1.4. When the sample space Ω is finite we can and typically shall take F = 2^Ω.
Indeed, in such situations we assign a probability p_ω > 0 to each ω ∈ Ω, making sure that Σ_{ω∈Ω} p_ω = 1. Then, it is easy to see that taking P(A) = Σ_{ω∈A} p_ω for any A ⊆ Ω results with a probability measure on (Ω, 2^Ω). For instance, when we consider a single coin toss we have Ω_1 = {H, T} (ω = H if the coin lands on its head and ω = T if it lands on its tail), and F_1 = {∅, Ω_1, {H}, {T}}. Similarly, when we consider any finite number of coin tosses, say n, we have Ω_n = {(ω_1, …, ω_n) : ω_i ∈ {H, T}, i = 1, …, n}, that is, Ω_n is the set of all possible n-tuples of coin tosses, while F_n = 2^{Ω_n} is the collection of all possible sets of n-tuples of coin tosses. The same construction applies even when Ω is infinite, provided it is countable. For instance, when Ω = {0, 1, 2, …} is the set of all non-negative integers and F = 2^Ω, we get the Poisson probability measure of parameter λ > 0 when starting from p_k = (λ^k/k!) e^{−λ} for k = 0, 1, 2, ….
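The finite and countable constructions of Example 1.1.4 are easy to make concrete. The following Python sketch is our own illustration (the helper `make_measure` and all variable names are ours, not from the text): it builds P(A) = Σ_{ω∈A} p_ω for two fair coin tosses, checks a special case of the inclusion-exclusion rule of Exercise 1.1.3(e), and numerically verifies that the Poisson masses p_k sum to 1.

```python
import math

# A probability measure on a finite Omega, built from point masses p_omega
# that sum to 1, with P(A) = sum of p_omega over omega in A (Example 1.1.4).
def make_measure(p):
    assert abs(sum(p.values()) - 1.0) < 1e-12  # requirement (b): P(Omega) = 1
    return lambda A: sum(p[w] for w in A)

# Two fair coin tosses: Omega_2 = {H,T}^2, each outcome with mass 1/4.
omega2 = [(a, b) for a in "HT" for b in "HT"]
P = make_measure({w: 0.25 for w in omega2})

A = [w for w in omega2 if "H" in w]      # at least one head
B = [w for w in omega2 if w[0] == "H"]   # first toss is a head
AB = [w for w in A if w in B]            # A intersect B
AuB = [w for w in omega2 if w in A or w in B]
print(P(A))  # 0.75
# Inclusion-exclusion for two events: P(A u B) = P(A) + P(B) - P(A n B).
assert abs(P(AuB) - (P(A) + P(B) - P(AB))) < 1e-12

# Countable Omega: the Poisson(lam) masses p_k = lam^k/k! * e^(-lam) sum to 1
# (checked here by truncating the series, which converges very fast).
lam = 3.0
p_k = [lam**k / math.factorial(k) * math.exp(-lam) for k in range(60)]
print(abs(sum(p_k) - 1.0) < 1e-9)  # True
```

The dict-of-point-masses representation works precisely because Ω is countable; the text explains next why this strategy fails for uncountable Ω.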

1.1. PROBABILITY SPACES AND σ-FIELDS

When Ω is uncountable such a strategy as in Example 1.1.4 will no longer work. The problem is that if we take p_ω = P({ω}) > 0 for uncountably many values of ω, we shall end up with P(Ω) = ∞. Of course we may define everything as before on a countable subset Ω̂ of Ω and demand that P(A) = P(A ∩ Ω̂) for each A ⊆ Ω. Excluding such trivial cases, to genuinely use an uncountable sample space Ω we need to restrict our σ-field F to a strict subset of 2^Ω.

1.1.2. Generated and Borel σ-fields. Enumerating the sets in the σ-field F is not a realistic option for uncountable Ω. Instead, as we see next, the most common construction of σ-fields is by implicit means. That is, we demand that certain sets (called the generators) be in our σ-field, and take the smallest possible collection for which this holds.

Definition 1.1.5. Given a collection of subsets A_α ⊆ Ω, where α ∈ Γ, a not necessarily countable index set, we denote the smallest σ-field F such that A_α ∈ F for all α ∈ Γ by σ({A_α}) (or sometimes by σ(A_α, α ∈ Γ)), and call σ({A_α}) the σ-field generated by the collection {A_α}. That is,

σ({A_α}) = ∩ {G : G ⊆ 2^Ω is a σ-field, A_α ∈ G ∀α ∈ Γ}.

Definition 1.1.5 works because the intersection of (possibly uncountably many) σ-fields is also a σ-field, which you will verify in the following exercise.

Exercise 1.1.6. Let A_α be a σ-field for each α ∈ Γ, an arbitrary index set. Show that ∩_{α∈Γ} A_α is a σ-field. Provide an example of two σ-fields F and G such that F ∪ G is not a σ-field.

Different sets of generators may result with the same σ-field. For example, taking Ω = {1, 2, 3} it is not hard to check that σ({1}) = σ({2, 3}) = {∅, {1}, {2, 3}, {1, 2, 3}}.

Example 1.1.7. An example of a generated σ-field is the Borel σ-field on ℝ. It may be defined as B = σ({(a, b) : a, b ∈ ℝ}).

The following lemma lays out the strategy one employs to show that the σ-fields generated by two different collections of sets are actually identical.
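On a small finite Ω the generated σ-field of Definition 1.1.5 can be computed by brute force rather than via the abstract intersection. A minimal sketch (the function `generated_sigma_field` is our own illustration, not from the text): close the generators under complement and union until nothing new appears; on a finite space, closure under complements and finite unions already yields a σ-field.

```python
# Brute-force computation of sigma({A_alpha}) on a finite Omega: repeatedly
# add complements and pairwise unions until the collection stabilizes.
def generated_sigma_field(omega, generators):
    omega = frozenset(omega)
    sets = {frozenset(), omega} | {frozenset(g) for g in generators}
    while True:
        new = set(sets)
        new |= {omega - s for s in sets}            # close under complements
        new |= {s | t for s in sets for t in sets}  # close under pairwise unions
        if new == sets:                             # fixed point reached
            return sets
        sets = new

# Recovers sigma({1}) = {emptyset, {1}, {2,3}, {1,2,3}} on Omega = {1, 2, 3},
# matching the two-generator example above.
F = generated_sigma_field({1, 2, 3}, [{1}])
print(sorted(sorted(s) for s in F))  # [[], [1], [1, 2, 3], [2, 3]]
```

Running it with generators [{2, 3}] instead gives the same four sets, illustrating that different generators may produce the same σ-field.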
Lemma 1.1.8. If two different collections of generators {A_α} and {B_β} are such that A_α ∈ σ({B_β}) for each α and B_β ∈ σ({A_α}) for each β, then σ({A_α}) = σ({B_β}).

Proof. Recall that if a collection of sets A is a subset of a σ-field G, then by Definition 1.1.5 also σ(A) ⊆ G. Applying this for A = {A_α} and G = σ({B_β}), our assumption that A_α ∈ σ({B_β}) for all α results with σ({A_α}) ⊆ σ({B_β}). Similarly, our assumption that B_β ∈ σ({A_α}) for all β results with σ({B_β}) ⊆ σ({A_α}). Taken together, we see that σ({A_α}) = σ({B_β}).

For instance, considering B_ℚ = σ({(a, b) : a, b ∈ ℚ}), we have by the preceding lemma that B_ℚ = B as soon as we show that any interval (a, b) is in B_ℚ. To verify this fact, note that for any real a < b there are rational numbers q_n < r_n such that q_n ↓ a and r_n ↑ b, hence (a, b) = ∪_n (q_n, r_n) ∈ B_ℚ. Following the same approach, you are to establish next a few alternative definitions for the Borel σ-field B.

Exercise 1.1.9. Verify the alternative definitions of the Borel σ-field B:

σ({(a, b) : a < b ∈ ℝ}) = σ({[a, b] : a < b ∈ ℝ}) = σ({(−∞, b] : b ∈ ℝ}) = σ({(−∞, b] : b ∈ ℚ}) = σ({O ⊆ ℝ open}).
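The rational-interval argument used to show B_ℚ = B can be illustrated numerically: truncated decimal expansions supply rationals q_n ↓ a and r_n ↑ b, so every point of (a, b) eventually lies in some rational interval (q_n, r_n). A sketch (the helper `rational_cover` and the particular choice of q_n, r_n are our own, not from the text):

```python
import math
from fractions import Fraction

# For real a < b, produce rationals q_n > a and r_n < b with q_n decreasing
# to a and r_n increasing to b, via truncated decimal expansions; then
# (a, b) = union over n of (q_n, r_n), each a generator of B_Q.
def rational_cover(a, b, n):
    q = Fraction(math.ceil(a * 10**n), 10**n)   # rational, q_n decreasing to a
    r = Fraction(math.floor(b * 10**n), 10**n)  # rational, r_n increasing to b
    return q, r

a, b = math.sqrt(2), math.pi   # an interval with irrational endpoints
x = 2.0                        # a point of (a, b)
q, r = rational_cover(a, b, 6)
# For n large enough, x already lies inside the rational interval (q_n, r_n).
print(a < q < x < r < b)  # True
```

Note that Python's `Fraction` compares exactly against floats, so the strict inequalities q > a and r < b are checked, not merely approximated.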

Hint: any open O ⊆ ℝ is a countable union of intervals (a, b) with a, b ∈ ℚ (rational).

If A ⊆ ℝ is in B of Example 1.1.7, we say that A is a Borel set. In particular, all open or closed subsets of ℝ are Borel sets, as are many other sets. However,

Proposition 1.1.10. There exists a subset of ℝ that is not in B. That is, not all sets are Borel sets.

Despite the above proposition, all sets encountered in practice are Borel sets. Often there is no explicit enumerative description of the σ-field generated by an infinite collection of subsets. A notable exception is G = σ({[a, b] : a, b ∈ ℤ}), where one may check that the sets in G are all possible unions of elements from the countable collection {{b}, (b, b + 1) : b ∈ ℤ}. In particular, B ≠ G since, for example, (0, 1/2) ∉ G.

Example 1.1.11. One example of a probability measure defined on (ℝ, B) is the uniform probability measure on (0, 1), denoted U and defined as follows. For each interval (a, b) ⊆ (0, 1), a < b, we set U((a, b)) = b − a (the length of the interval), and for any other open interval I we set U(I) = U(I ∩ (0, 1)). Note that we did not specify U(A) for each Borel set A, but rather only for the generators of the Borel σ-field B. This is a common strategy, as under mild conditions on the collection {A_α} of generators, each probability measure Q specified only for the sets A_α can be uniquely extended to a probability measure P on σ({A_α}) that coincides with Q on all the sets A_α (and these conditions hold, for example, when the generators are all open intervals in ℝ).

Exercise 1.1.12. Check that the following are Borel sets and find the probability assigned to each by the uniform measure of the preceding example: (0, 1/2) ∪ (1/2, 3/2), {1/2}, a countable subset A of ℝ, the set T of all irrational numbers within (0, 1), the interval [0, 1], and the set ℝ of all real numbers.

Example 1.1.13.
Another classical example of an uncountable Ω is relevant for studying the experiment with an infinite number of coin tosses, that is, Ω_∞ = Ω_1^ℕ for Ω_1 = {H, T} (recall that setting H = 1 and T = 0, each infinite sequence ω ∈ Ω_∞ is in correspondence with a unique real number x ∈ [0, 1], with ω being the binary expansion of x). The σ-field should at least allow us to consider any possible outcome of a finite number of coin tosses. The natural σ-field in this case is the minimal σ-field having this property, that is, F_c = σ(A_{n,θ}, θ ∈ {H, T}^n, n < ∞), for the subsets A_{n,θ} = {ω : ω_i = θ_i, i = 1, …, n} of Ω_∞ (e.g. A_{1,H} is the set of all sequences starting with H and A_{2,TT} is the set of all sequences starting with a pair of T symbols). This is also our first example of a stochastic process, to which we return in the next section.

Note that any countable union of sets of probability zero has probability zero, but this is not the case for an uncountable union. For example, U({x}) = 0 for every x ∈ ℝ, but U(ℝ) = 1. When we later deal with continuous-time stochastic processes we should pay attention to such difficulties!

1.2. Random variables and their expectation

Random variables are numerical functions ω ↦ X(ω) of the outcome of our random experiment. However, in order to have a successful mathematical theory, we limit our interest to the subset of measurable functions, as defined in Subsection

1.2.1 and study some of their properties in Subsection 1.2.2. Taking advantage of these we define the mathematical expectation in Subsection 1.2.3 as the corresponding Lebesgue integral and relate it to the more elementary definitions that apply for simple functions and for random variables having a probability density function.

1.2.1. Indicators, simple functions and random variables. We start with the definition of a random variable and two important examples of such objects.

Definition 1.2.1. A Random Variable (R.V.) is a function X : Ω → ℝ such that for all α ∈ ℝ the set {ω : X(ω) ≤ α} is in F (such a function is also called an F-measurable or, simply, measurable function).

Example 1.2.2. For any A ∈ F the function I_A(ω) = 1 if ω ∈ A, and I_A(ω) = 0 if ω ∉ A, is a R.V., since

{ω : I_A(ω) ≤ α} = Ω for α ≥ 1, A^c for 0 ≤ α < 1, and ∅ for α < 0,

all of which are in F. We call such a R.V. an indicator function.

Example 1.2.3. By the same reasoning, check that X(ω) = Σ_{n=1}^N c_n I_{A_n}(ω) is a R.V. for any finite N, non-random c_n ∈ ℝ and sets A_n ∈ F. We call any such X a simple function, denoted by X ∈ SF.

Exercise 1.2.4. Verify the following properties of indicator R.V.-s.
(a) I_∅(ω) = 0 and I_Ω(ω) = 1
(b) I_{A^c}(ω) = 1 − I_A(ω)
(c) I_A(ω) ≤ I_B(ω) if and only if A ⊆ B
(d) I_{∩_i A_i}(ω) = Π_i I_{A_i}(ω)
(e) If A_i are disjoint then I_{∪_i A_i}(ω) = Σ_i I_{A_i}(ω)

Though in our definition of a R.V. the σ-field F is implicit, the choice of F is very important (and we sometimes denote by mF the collection of all R.V. for a given σ-field F). For example, there are non-trivial σ-fields G and F on Ω = ℝ such that X(ω) = ω is measurable for (Ω, F), but not measurable for (Ω, G). Indeed, one such example is when F is the Borel σ-field B and G = σ({[a, b] : a, b ∈ ℤ}) (for example, the set {ω : ω ≤ α} is not in G whenever α ∉ ℤ). To practice your understanding, solve the following exercise at this point.

Exercise 1.2.5. Let Ω = {1, 2, 3}.
Find a σ-field F such that (Ω, F) is a measurable space, and a mapping X from Ω to ℝ, such that X is not a random variable on (Ω, F).

Our next proposition explains why simple functions are quite useful in probability theory.

Proposition 1.2.6. For every R.V. X(ω) there exists a sequence of simple functions X_n(ω) such that X_n(ω) → X(ω) as n → ∞, for each fixed ω ∈ Ω.

Proof. Let

f_n(x) = n 1_{x > n} + Σ_{k=0}^{n2^n − 1} k 2^{−n} 1_{(k2^{−n}, (k+1)2^{−n}]}(x),
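The function f_n in the proof is the usual dyadic discretization: on (0, n] it rounds x down to the nearest multiple of 2^{−n}, and beyond n it truncates to n. The following sketch is our own direct, unoptimized transcription of the indicator sum, checking numerically that f_n(x) increases to x for a fixed x ≥ 0.

```python
import math

# f_n(x) = n*1_{x > n} + sum_{k=0}^{n*2^n - 1} k*2^-n * 1_{(k*2^-n, (k+1)*2^-n]}(x),
# transcribed term by term from the proof of Proposition 1.2.6.
def f_n(x, n):
    if x > n:                       # the n*1_{x > n} term
        return float(n)
    for k in range(n * 2**n):       # scan the dyadic intervals (k*2^-n, (k+1)*2^-n]
        if k * 2**-n < x <= (k + 1) * 2**-n:
            return k * 2**-n        # exactly one indicator fires
    return 0.0                      # x <= 0 lies in none of the intervals

# For fixed x >= 0, f_n(x) increases to x: once n >= x, 0 <= x - f_n(x) <= 2^-n.
x = math.pi
print([round(f_n(x, n), 4) for n in (1, 4, 8)])  # [1.0, 3.125, 3.1406]
assert all(0 <= x - f_n(x, n) <= 2**-n for n in range(4, 12))
```

Each f_n takes at most n·2^n + 1 distinct values, so f_n(X(ω)) is a simple function whenever X is a R.V., which is the point of the construction.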
