
Note for Probability Theory and Stochastic Processes - PTSP By JNTU Heroes

  • Probability Theory and Stochastic Processes - PTSP
  • Note
  • Jawaharlal Nehru Technological University Anantapur (JNTU) College of Engineering (CEP), Pulivendula, Andhra Pradesh, India - JNTUACEP
Text from page-2

Preface

These are the lecture notes for a one-quarter graduate course in Stochastic Processes that I taught at Stanford University in 2002 and 2003. This course is intended for incoming master's students in Stanford's Financial Mathematics program, for advanced undergraduates majoring in mathematics, and for graduate students from Engineering, Economics, Statistics or the Business school.

One purpose of this text is to prepare students for a rigorous study of Stochastic Differential Equations. More broadly, its goal is to help the reader understand the basic concepts of measure theory that are relevant to the mathematical theory of probability, and how they apply to the rigorous construction of the most fundamental classes of stochastic processes.

Towards this goal, we introduce in Chapter 1 the relevant elements from measure and integration theory, namely, the probability space and the σ-fields of events in it, random variables viewed as measurable functions, their expectation as the corresponding Lebesgue integral, independence, distribution and various notions of convergence. This is supplemented in Chapter 2 by the study of the conditional expectation, viewed as a random variable defined via the theory of orthogonal projections in Hilbert spaces.

After this exploration of the foundations of Probability Theory, we turn in Chapter 3 to the general theory of Stochastic Processes, with an eye towards processes indexed by a continuous time parameter, such as the Brownian motion of Chapter 5 and the Markov jump processes of Chapter 6. Having this in mind, Chapter 3 is about the finite dimensional distributions and their relation to sample path continuity. Along the way we also introduce the concepts of stationary and Gaussian stochastic processes.

Chapter 4 deals with filtrations, the mathematical notion of information progression in time, and with the associated collection of stochastic processes called martingales. We treat both discrete and continuous time settings, emphasizing the importance of right-continuity of the sample path and filtration in the latter case. Martingale representations are explored, as well as maximal inequalities, convergence theorems and applications to the study of stopping times and to extinction of branching processes.

Chapter 5 provides an introduction to the beautiful theory of the Brownian motion. It is rigorously constructed here via Hilbert space theory and shown to be a Gaussian martingale process of stationary independent increments, with continuous sample path and possessing the strong Markov property. A few of the many explicit computations known for this process are also demonstrated, mostly in the context of hitting times, running maxima, and sample path smoothness and regularity.

Text from page-3

Chapter 6 provides a brief introduction to the theory of Markov chains and processes, a vast subject at the core of probability theory, to which many textbooks are devoted. We illustrate some of the interesting mathematical properties of such processes by examining the special case of the Poisson process, and more generally, that of Markov jump processes.

As is clear from the preceding, it normally takes more than a year to cover the scope of this text, even more so given that the intended audience for this course has only minimal prior exposure to stochastic processes (beyond the usual elementary probability class covering only discrete settings and variables with a probability density function). While students are assumed to have taken a real analysis class dealing with Riemann integration, no prior knowledge of measure theory is assumed here. The unusual solution to this set of constraints is to provide rigorous definitions, examples and theorem statements, while forgoing the proofs of all but the easiest derivations. At this somewhat superficial level, one can cover everything in a one-semester course of forty lecture hours (and with highly motivated students, such as I had at Stanford, even a one-quarter course of thirty lecture hours might work).

In preparing this text I was much influenced by Zakai's unpublished lecture notes [Zak]. Revised and expanded by Shwartz and Zeitouni, they are used to this day for teaching Electrical Engineering PhD students at the Technion, Israel. A second source for this text is Breiman's [Bre92], which was the intended textbook for my class in 2002, until I realized it would not do given the preceding constraints. The resulting text is thus a mixture of these influencing factors, with some digressions and additions of my own.

I thank my students, out of whose work this text materialized. Most notably I thank Nageeb Ali, Ajar Ashyrkulova, Alessia Falsarone and Che-Lin Su, who wrote the first draft out of notes taken in class; Barney Hartman-Glaser, Michael He, Chin-Lum Kwa and Chee-Hau Tan, who used their own class notes a year later in a major revision, reorganization and expansion of this draft; and Gary Huang and Mary Tian, who helped me with the intricacies of LaTeX. I am much indebted to my colleague Kevin Ross for providing many of the exercises and all the figures in this text. Kevin's detailed feedback on an earlier draft of these notes has also been extremely helpful in improving the presentation of many key concepts.

Amir Dembo
Stanford, California
January 2008

Text from page-4

CHAPTER 1

Probability, measure and integration

This chapter is devoted to the mathematical foundations of probability theory. Section 1.1 introduces the basic measure theory framework, namely, the probability space and the σ-fields of events in it. The next building blocks are random variables, introduced in Section 1.2 as measurable functions ω ↦ X(ω). This allows us to define the important concept of expectation as the corresponding Lebesgue integral, extending the horizon of our discussion beyond the special functions and variables with density, to which elementary probability theory is limited. As much of probability theory is about asymptotics, Section 1.3 deals with various notions of convergence of random variables and the relations between them. Section 1.4 concludes the chapter by considering independence and distribution, the two fundamental aspects that differentiate probability from (general) measure theory, as well as the related and highly useful technical tools of weak convergence and uniform integrability.

1.1. Probability spaces and σ-fields

We shall define here the probability space (Ω, F, P) using the terminology of measure theory. The sample space Ω is the set of all possible outcomes ω ∈ Ω of some random experiment or phenomenon. Probabilities are assigned by a set function A ↦ P(A) to A in a subset F of all possible sets of outcomes. The event space F represents both the amount of information available as a result of the experiment conducted and the collection of all events of possible interest to us. A pleasant mathematical framework results by imposing on F the structural conditions of a σ-field, as done in Subsection 1.1.1. The most common and useful choices for this σ-field are then explored in Subsection 1.1.2.

1.1.1. The probability space (Ω, F, P). We use 2^Ω to denote the set of all possible subsets of Ω. The event space is thus a subset F of 2^Ω, consisting of all allowed events, that is, those events to which we shall assign probabilities. We next define the structural conditions imposed on F.

Definition 1.1.1. We say that F ⊆ 2^Ω is a σ-field (or a σ-algebra) if
(a) Ω ∈ F,
(b) if A ∈ F then A^c ∈ F as well (where A^c = Ω \ A),
(c) if A_i ∈ F for i = 1, 2, ... then also ⋃_{i=1}^∞ A_i ∈ F.

Remark. Using DeMorgan's law you can easily check that if A_i ∈ F for i = 1, 2, ... and F is a σ-field, then also ⋂_i A_i ∈ F. Similarly, you can show that a σ-field is closed under countably many elementary set operations.
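The short Python sketch below is not part of the original notes; it makes Definition 1.1.1 concrete for a finite sample space, where countable unions reduce to finite ones, by checking that a given collection of subsets of Ω contains Ω and is closed under complements and unions. The helper name is_sigma_field and the coin-toss test cases are purely illustrative.

from itertools import chain, combinations

def is_sigma_field(collection, omega):
    # Definition 1.1.1 over a finite Omega: (a) Omega is in F,
    # (b) closure under complements, (c) closure under unions
    # (countable unions reduce to finite ones when Omega is finite).
    F = {frozenset(A) for A in collection}
    omega = frozenset(omega)
    if omega not in F:                                    # (a)
        return False
    if any(omega - A not in F for A in F):                # (b)
        return False
    for r in range(2, len(F) + 1):                        # (c)
        for sub in combinations(F, r):
            if frozenset(chain.from_iterable(sub)) not in F:
                return False
    return True

# The single coin toss: Omega_1 = {H, T} with F_1 = {emptyset, Omega_1, {H}, {T}}.
Omega1 = {"H", "T"}
print(is_sigma_field([set(), {"H"}, {"T"}, {"H", "T"}], Omega1))   # True
print(is_sigma_field([set(), {"H"}, {"H", "T"}], Omega1))          # False: {T} missing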

Text from page-5

Definition 1.1.2. A pair (Ω, F) with F a σ-field of subsets of Ω is called a measurable space. Given a measurable space, a probability measure P is a function P : F → [0, 1], having the following properties:
(a) 0 ≤ P(A) ≤ 1 for all A ∈ F.
(b) P(Ω) = 1.
(c) (Countable additivity) P(A) = ∑_{n=1}^∞ P(A_n) whenever A = ⋃_{n=1}^∞ A_n is a countable union of disjoint sets A_n ∈ F (that is, A_n ∩ A_m = ∅ for all n ≠ m).

A probability space is a triplet (Ω, F, P), with P a probability measure on the measurable space (Ω, F).

The next exercise collects some of the fundamental properties shared by all probability measures.

Exercise 1.1.3. Let (Ω, F, P) be a probability space and A, B, A_i events in F. Prove the following properties of every probability measure.
(a) Monotonicity: if A ⊆ B then P(A) ≤ P(B).
(b) Sub-additivity: if A ⊆ ⋃_i A_i then P(A) ≤ ∑_i P(A_i).
(c) Continuity from below: if A_i ↑ A, that is, A_1 ⊆ A_2 ⊆ ... and ⋃_i A_i = A, then P(A_i) ↑ P(A).
(d) Continuity from above: if A_i ↓ A, that is, A_1 ⊇ A_2 ⊇ ... and ⋂_i A_i = A, then P(A_i) ↓ P(A).
(e) Inclusion-exclusion rule:
P(⋃_{i=1}^n A_i) = ∑_i P(A_i) − ∑_{i<j} P(A_i ∩ A_j) + ∑_{i<j<k} P(A_i ∩ A_j ∩ A_k) − ⋯ + (−1)^{n+1} P(A_1 ∩ ⋯ ∩ A_n).

The σ-field F always contains at least the set Ω and its complement, the empty set ∅. Necessarily, P(Ω) = 1 and P(∅) = 0. So, if we take F_0 = {∅, Ω} as our σ-field, then we are left with no degrees of freedom in the choice of P. For this reason we call F_0 the trivial σ-field. Fixing Ω, we may expect that the larger the σ-field we consider, the more freedom we have in choosing the probability measure. This indeed holds to some extent, that is, as long as we have no problem satisfying the requirements (a)-(c) in the definition of a probability measure. For example, a natural question is: when should we expect the maximal possible σ-field F = 2^Ω to be useful?

Example 1.1.4. When the sample space Ω is finite we can and typically shall take F = 2^Ω. Indeed, in such situations we assign a probability p_ω > 0 to each ω ∈ Ω, making sure that ∑_{ω∈Ω} p_ω = 1. Then, it is easy to see that taking P(A) = ∑_{ω∈A} p_ω for any A ⊆ Ω results in a probability measure on (Ω, 2^Ω). For instance, when we consider a single coin toss we have Ω_1 = {H, T} (ω = H if the coin lands on its head and ω = T if it lands on its tail), and F_1 = {∅, Ω_1, {H}, {T}}. Similarly, when we consider any finite number of coin tosses, say n, we have Ω_n = {(ω_1, ..., ω_n) : ω_i ∈ {H, T}, i = 1, ..., n}, that is, Ω_n is the set of all possible n-tuples of coin tosses, while F_n = 2^{Ω_n} is the collection of all possible sets of n-tuples of coin tosses. The same construction applies even when Ω is infinite, provided it is countable. For instance, when Ω = {0, 1, 2, ...} is the set of all non-negative integers and F = 2^Ω, we get the Poisson probability measure of parameter λ > 0 when starting from p_k = (λ^k / k!) e^{−λ} for k = 0, 1, 2, ....
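As an illustration (not from the original notes), the sketch below builds the probability measure of Example 1.1.4 on the finite sample space of n = 3 fair coin tosses, P(A) = ∑_{ω∈A} p_ω, and numerically verifies the inclusion-exclusion rule of Exercise 1.1.3(e) for three events; the uniform weights and the particular events chosen are purely illustrative.

from itertools import product, combinations

n = 3
Omega = list(product("HT", repeat=n))              # all n-tuples of coin tosses
p = {w: 1.0 / len(Omega) for w in Omega}           # uniform weights, summing to 1

def P(A):
    # P(A) = sum of p_omega over omega in A, as in Example 1.1.4.
    return sum(p[w] for w in A)

A1 = {w for w in Omega if w[0] == "H"}             # first toss is H
A2 = {w for w in Omega if w[1] == "H"}             # second toss is H
A3 = {w for w in Omega if len(set(w)) == 1}        # all tosses equal

events = [A1, A2, A3]
incl_excl = 0.0
for r in range(1, len(events) + 1):                # alternating-sign sums over
    for sub in combinations(events, r):            # all non-empty subcollections
        incl_excl += (-1) ** (r + 1) * P(set.intersection(*sub))

print(P(A1 | A2 | A3), incl_excl)                  # both are 0.875 here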

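For the countable case at the end of Example 1.1.4, a similarly minimal sketch (again illustrative, not part of the notes) assumes the Poisson weights p_k = (λ^k / k!) e^{−λ} and assigns probabilities to events by summation; the parameter value and the example event are arbitrary.

from math import exp, factorial

lam = 3.0                                          # the parameter lambda > 0

def p(k):
    # Poisson weight p_k = (lambda^k / k!) e^{-lambda}.
    return lam ** k * exp(-lam) / factorial(k)

def P(A):
    # Probability of a finite event A, a subset of Omega = {0, 1, 2, ...}.
    return sum(p(k) for k in A)

print(sum(p(k) for k in range(200)))               # ~1.0: the weights sum to one
print(P({0, 1, 2}))                                # probability of at most two occurrences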