Institute: Jawaharlal Nehru Technological University Anantapur College of Engineering

Topic: Random Variable
Motivation example. In an opinion poll, we might decide to ask 50 people whether they agree
or disagree with a certain issue. If we record a "1" for agree and a "0" for disagree, the sample
space for this experiment has 2^50 elements. If we define a variable X = number of 1s recorded
out of 50, we have captured the essence of the problem. Note that the sample space of X
is the set of integers {0, 1, 2, . . . , 50} and is much easier to deal with than the original sample
space.
In defining the quantity X, we have defined a mapping (a function) from the original sample
space to a new sample space, usually a set of real numbers. In general, we have the following
definition.
Definition of Random Variable A random variable is a function from a sample space S into
the real numbers.
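To make the definition concrete, here is a small sketch in Python of the opinion-poll example above; the names are our own, and we use n = 5 voters rather than 50 so the full sample space can be enumerated:

```python
from itertools import product

# Sketch of the opinion-poll example with n = 5 voters instead of 50,
# so the original sample space is small enough to list explicitly.
n = 5
S = list(product([0, 1], repeat=n))   # original sample space: 2^n outcomes
X = lambda s: sum(s)                  # the random variable X = number of 1s

print(len(S))                     # 2^5 = 32 outcomes in S
print(sorted({X(s) for s in S}))  # the sample space of X: [0, 1, 2, 3, 4, 5]
```

The mapping X collapses the 32-point sample space to just 6 values, exactly the simplification described above.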
Example 1.4.2 (Random variables)
In some experiments random variables are implicitly used; some examples are these.

Experiment                                              Random variable
Toss two dice                                           X = sum of the numbers
Toss a coin 25 times                                    X = number of heads in 25 tosses
Apply different amounts of fertilizer to corn plants    X = yield/acre
Suppose we have a sample space
S = {s1 , . . . , sn }
with a probability function P and we define a random variable X with range X = {x1 , . . . , xm }.
We can define a probability function PX on X in the following way. We will observe X = xi
if and only if the outcome of the random experiment is an sj ∈ S such that X(sj ) = xi .
Thus,
PX(X = xi) = P({sj ∈ S : X(sj) = xi}).    (1)

Note PX is an induced probability function on X , defined in terms of the original function
P . Later, we will simply write PX (X = xi ) = P (X = xi ).
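The construction in (1) can be sketched directly in Python, assuming the two-dice example from the table above (S, P, and PX are names of our own choosing):

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))   # two fair dice: 36 ordered pairs
P = {s: Fraction(1, 36) for s in S}        # original probability function on S

def X(s):
    """Random variable X = sum of the numbers."""
    return s[0] + s[1]

# PX(X = x) = P({s in S : X(s) = x}), built by grouping outcomes by X(s)
PX = {}
for s in S:
    PX[X(s)] = PX.get(X(s), Fraction(0)) + P[s]

print(PX[7])              # 1/6: six of the 36 outcomes sum to 7
print(sum(PX.values()))   # 1, as required of a probability function
```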
Fact The induced probability function defined in (1) defines a legitimate probability function
in that it satisfies the Kolmogorov Axioms.
Proof: Since X is finite, B can be taken to be the set of all subsets of X . We must verify each of the
three properties of the axioms.
(1) If A ∈ B then PX(A) = P(∪_{xi∈A} {sj ∈ S : X(sj) = xi}) ≥ 0, since P is a probability
function.
(2) PX(X ) = P(∪_{i=1}^{m} {sj ∈ S : X(sj) = xi}) = P(S) = 1.
(3) If A1, A2, . . . ∈ B are pairwise disjoint, then
PX(∪_{k=1}^{∞} Ak) = P(∪_{k=1}^{∞} {∪_{xi∈Ak} {sj ∈ S : X(sj) = xi}})
                   = Σ_{k=1}^{∞} P(∪_{xi∈Ak} {sj ∈ S : X(sj) = xi})
                   = Σ_{k=1}^{∞} PX(Ak),
where the second equality follows from the fact that P is a probability function. □
A note on notation: Random variables will always be denoted with uppercase letters and
the realized values of the variable will be denoted by the corresponding lowercase letters.
Thus, the random variable X can take the value x.
Example 1.4.3 (Three coin tosses-II) Consider again the experiment of tossing a fair coin
three times independently. Define the random variable X to be the number of heads obtained
in the three tosses. A complete enumeration of the value of X for each point in the sample
space is

s       HHH  HHT  HTH  THH  TTH  THT  HTT  TTT
X(s)     3    2    2    2    1    1    1    0

The range for the random variable X is X = {0, 1, 2, 3}. Assuming that all eight points
in S have probability 1/8, by simply counting in the above display we see that the induced
probability function on X is given by

x            0     1     2     3
PX(X = x)   1/8   3/8   3/8   1/8
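This table can be re-derived by brute-force enumeration; a sketch in Python (the names are illustrative):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product("HT", repeat=3))   # the 8 equally likely sample points
X = lambda s: s.count("H")                 # number of heads in the three tosses

pmf = {x: Fraction(0) for x in range(4)}
for s in outcomes:
    pmf[X(s)] += Fraction(1, 8)

# pmf now maps 0 -> 1/8, 1 -> 3/8, 2 -> 3/8, 3 -> 1/8, matching the table
print(pmf)
```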
The previous illustrations had both a finite S and finite X , and the definition of PX was
straightforward. Such is also the case if X is countable. If X is uncountable, we define the
induced probability function, PX, in a manner similar to (1). For any set A ⊂ X ,
PX(X ∈ A) = P({s ∈ S : X(s) ∈ A}).    (2)
This does define a legitimate probability function for which the Kolmogorov Axioms can be
verified.
Distribution Functions
Definition of Distribution The cumulative distribution function (cdf) of a random variable
X, denoted by FX (x), is defined by
FX (x) = PX (X ≤ x),
for all x.
Example 1.5.2 (Tossing three coins) Consider the experiment of tossing three fair coins,
and let X = number of heads observed. The cdf of X is

FX(x) =  0     if −∞ < x < 0
         1/8   if 0 ≤ x < 1
         1/2   if 1 ≤ x < 2
         7/8   if 2 ≤ x < 3
         1     if 3 ≤ x < ∞.

Remark:
1. FX is defined for all values of x, not just those in X = {0, 1, 2, 3}. Thus, for example,
FX(2.5) = P(X ≤ 2.5) = P(X = 0, 1, or 2) = 7/8.
2. FX has jumps at the values of xi ∈ X and the size of the jump at xi is equal to
P(X = xi).
3. FX(x) = 0 for x < 0 since X cannot be negative, and FX(x) = 1 for x ≥ 3 since X is
certain to be less than or equal to such a value.
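These remarks can be checked numerically; a sketch in Python of the three-coin cdf, built from the pmf derived earlier (F is our own name for FX):

```python
from fractions import Fraction

pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

def F(x):
    """F_X(x) = P(X <= x): a step function with a jump of pmf[k] at each k."""
    return sum(p for k, p in pmf.items() if k <= x)

print(F(2.5))   # 7/8, as in Remark 1
print(F(-1))    # 0 for x < 0 (Remark 3)
print(F(3))     # 1 for x >= 3 (Remark 3)
```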
FX is right-continuous, namely, the function is continuous when a point is approached
from the right. The property of right-continuity is a consequence of the definition of the cdf.
In contrast, if we had defined FX (x) = PX (X < x), FX would then be left-continuous.
Theorem 1.5.3
The function FX(x) is a cdf if and only if the following three conditions hold:
a. limx→−∞ F (x) = 0 and limx→∞ F (x) = 1.
b. F (x) is a nondecreasing function of x.
c. F (x) is right-continuous; that is, for every number x0 , limx↓x0 F (x) = F (x0 ).
Example 1.5.4 (Tossing for a head) Suppose we do an experiment that consists of tossing
a coin until a head appears. Let p =probability of a head on any given toss, and define
X =number of tosses required to get a head. Then, for any x = 1, 2, . . .,
P (X = x) = (1 − p)x−1 p.
The cdf is
FX(x) = P(X ≤ x) = Σ_{i=1}^{x} P(X = i) = Σ_{i=1}^{x} (1 − p)^{i−1} p = 1 − (1 − p)^x.
It is easy to show that if 0 < p < 1, then FX(x) satisfies the conditions of Theorem 1.5.3.
First,
lim_{x→−∞} FX(x) = 0
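The closed form can be double-checked against the partial sums with exact arithmetic; a sketch in Python, with p = 1/3 as an arbitrary illustrative choice:

```python
from fractions import Fraction

p = Fraction(1, 3)   # arbitrary choice with 0 < p < 1

for x in range(1, 10):
    partial_sum = sum((1 - p) ** (i - 1) * p for i in range(1, x + 1))
    assert partial_sum == 1 - (1 - p) ** x   # matches the closed form exactly

print("closed form 1 - (1-p)^x verified for x = 1, ..., 9")
```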
