Every problem might not have a solution right now, but don’t forget that every solution was once a problem.

# Note for Industrial Engineering and Operations Research (IEOR) by Jitendra Pal

• Industrial Engineering and Operations Research (IEOR)
• Note
• Biju Patnaik University of Technology (BPUT)
• 5 Topics

#### Text from page-1

BIJU PATNAIK UNIVERSITY OF TECHNOLOGY, ODISHA. Lecture Notes on INTRODUCTION TO OPERATION RESEARCH. Prepared by Dr. Subhendu Kumar Rath, BPUT, Odisha.

#### Text from page-2

OPERATION RESEARCH: 16 MARKOV CHAINS

The Markov chains to be considered in this chapter have the following properties:

1. A finite number of states.
2. Stationary transition probabilities.

We also will assume that we know the initial probabilities P{X_0 = i} for all i.

**Formulating the Inventory Example as a Markov Chain**

Returning to the inventory example developed in the preceding section, recall that X_t is the number of cameras in stock at the end of week t (before ordering any more), where X_t represents the state of the system at time t. Given that the current state is X_t = i, the expression at the end of Sec. 16.1 indicates that X_{t+1} depends only on D_{t+1} (the demand in week t + 1) and X_t. Since X_{t+1} is independent of any past history of the inventory system, the stochastic process {X_t} (t = 0, 1, ...) has the Markovian property and so is a Markov chain.

Now consider how to obtain the (one-step) transition probabilities, i.e., the elements of the (one-step) transition matrix

$$
P = \begin{pmatrix}
p_{00} & p_{01} & p_{02} & p_{03} \\
p_{10} & p_{11} & p_{12} & p_{13} \\
p_{20} & p_{21} & p_{22} & p_{23} \\
p_{30} & p_{31} & p_{32} & p_{33}
\end{pmatrix},
$$

where the rows and columns are indexed by the states 0, 1, 2, 3, given that D_{t+1} has a Poisson distribution with a mean of 1. Thus,

$$
P\{D_{t+1} = n\} = \frac{(1)^n e^{-1}}{n!}, \qquad \text{for } n = 0, 1, \ldots,
$$

so

$$
\begin{aligned}
P\{D_{t+1} = 0\} &= e^{-1} = 0.368, \\
P\{D_{t+1} = 1\} &= e^{-1} = 0.368, \\
P\{D_{t+1} = 2\} &= \tfrac{1}{2} e^{-1} = 0.184, \\
P\{D_{t+1} \ge 3\} &= 1 - P\{D_{t+1} \le 2\} = 1 - (0.368 + 0.368 + 0.184) = 0.080.
\end{aligned}
$$

For the first row of P, we are dealing with a transition from state X_t = 0 to some state X_{t+1}. As indicated at the end of Sec. 16.1,

$$
X_{t+1} = \max\{3 - D_{t+1},\, 0\} \qquad \text{if } X_t = 0.
$$

Therefore, for the transition to X_{t+1} = 3, X_{t+1} = 2, or X_{t+1} = 1,

$$
\begin{aligned}
p_{03} &= P\{D_{t+1} = 0\} = 0.368, \\
p_{02} &= P\{D_{t+1} = 1\} = 0.368, \\
p_{01} &= P\{D_{t+1} = 2\} = 0.184.
\end{aligned}
$$
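As a quick numerical check of the demand probabilities and the first row of the transition matrix, here is a short Python sketch (the helper `poisson_pmf` is our own illustrative name, not something from the notes):

```python
import math

# Demand D_{t+1} in the camera example is Poisson with mean 1:
# P{D = n} = 1^n * e^(-1) / n!
def poisson_pmf(n, mean=1.0):
    return mean**n * math.exp(-mean) / math.factorial(n)

p0 = poisson_pmf(0)          # P{D = 0} ~ 0.368
p1 = poisson_pmf(1)          # P{D = 1} ~ 0.368
p2 = poisson_pmf(2)          # P{D = 2} ~ 0.184
p_ge3 = 1 - (p0 + p1 + p2)   # P{D >= 3} ~ 0.080

# First row of the transition matrix (transition out of state 0, where
# the store orders up to 3 cameras): X_{t+1} = max{3 - D_{t+1}, 0}
row0 = [p_ge3, p2, p1, p0]   # p00, p01, p02, p03
print([round(p, 3) for p in row0])
```

The row sums to 1, as any row of a transition matrix must.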

#### Text from page-3

A transition from X_t = 0 to X_{t+1} = 0 implies that the demand for cameras in week t + 1 is 3 or more after 3 cameras are added to the depleted inventory at the beginning of the week, so

$$
p_{00} = P\{D_{t+1} \ge 3\} = 0.080.
$$

For the other rows of P, the formula at the end of Sec. 16.1 for the next state is

$$
X_{t+1} = \max\{X_t - D_{t+1},\, 0\} \qquad \text{if } X_t \ge 1.
$$

This implies that X_{t+1} ≤ X_t, so p_{12} = 0, p_{13} = 0, and p_{23} = 0. For the other transitions,

$$
\begin{aligned}
p_{11} &= P\{D_{t+1} = 0\} = 0.368, \\
p_{10} &= P\{D_{t+1} \ge 1\} = 1 - P\{D_{t+1} = 0\} = 0.632, \\
p_{22} &= P\{D_{t+1} = 0\} = 0.368, \\
p_{21} &= P\{D_{t+1} = 1\} = 0.368, \\
p_{20} &= P\{D_{t+1} \ge 2\} = 1 - (0.368 + 0.368) = 0.264.
\end{aligned}
$$

For the last row of P, week t + 1 begins with 3 cameras in inventory, so the calculations for the transition probabilities are exactly the same as for the first row. Consequently, the complete transition matrix is

$$
P = \begin{pmatrix}
0.080 & 0.184 & 0.368 & 0.368 \\
0.632 & 0.368 & 0 & 0 \\
0.264 & 0.368 & 0.368 & 0 \\
0.080 & 0.184 & 0.368 & 0.368
\end{pmatrix},
$$

where the rows and columns are indexed by the states 0, 1, 2, 3. The information given by this transition matrix can also be depicted graphically with the state transition diagram in Fig. 16.1. The four possible states for the number of cameras on hand at the end of a week are represented by the four nodes (circles) in the diagram.

[FIGURE 16.1: State transition diagram for the inventory example for a camera store. The edge labels are the transition probabilities in the matrix above.]
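The row-by-row derivation above can be reproduced mechanically. A small Python sketch that rebuilds the complete 4x4 matrix from the Poisson(1) demand distribution and checks that every row is a probability distribution:

```python
import math

# P{D = 0}, P{D = 1}, P{D = 2} for demand D ~ Poisson(1)
pmf = [math.exp(-1) / math.factorial(n) for n in range(3)]
p0, p1, p2 = pmf
p_ge1 = 1 - p0
p_ge2 = 1 - (p0 + p1)
p_ge3 = 1 - (p0 + p1 + p2)

P = [
    [p_ge3, p2, p1, p0],    # state 0: order up to 3, X' = max{3 - D, 0}
    [p_ge1, p0, 0.0, 0.0],  # state 1: X' = max{1 - D, 0}
    [p_ge2, p1, p0, 0.0],   # state 2: X' = max{2 - D, 0}
    [p_ge3, p2, p1, p0],    # state 3: same calculation as state 0
]

for row in P:
    assert abs(sum(row) - 1.0) < 1e-12  # each row sums to 1

print([[round(x, 3) for x in row] for row in P])
```

Rounded to three decimals, this reproduces the matrix in the text exactly.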

#### Text from page-4

The arrows show the possible transitions from one state to another, or sometimes from a state back to itself, when the camera store goes from the end of one week to the end of the next week. The number next to each arrow gives the probability of that particular transition occurring next when the camera store is in the state at the base of the arrow.

**Additional Examples of Markov Chains**

**A Stock Example.** Consider the following model for the value of a stock. At the end of a given day, the price is recorded. If the stock has gone up, the probability that it will go up tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is only 0.5. This is a Markov chain, where state 0 represents the stock’s going up and state 1 represents the stock’s going down. The transition matrix is given by

$$
P = \begin{pmatrix}
0.7 & 0.3 \\
0.5 & 0.5
\end{pmatrix}.
$$

**A Second Stock Example.** Suppose now that the stock market model is changed so that the stock’s going up tomorrow depends upon whether it increased today and yesterday. In particular, if the stock has increased for the past two days, it will increase tomorrow with probability 0.9. If the stock increased today but decreased yesterday, then it will increase tomorrow with probability 0.6. If the stock decreased today but increased yesterday, then it will increase tomorrow with probability 0.5. Finally, if the stock decreased for the past two days, then it will increase tomorrow with probability 0.3. If we define the state as representing whether the stock goes up or down today, the system is no longer a Markov chain. However, we can transform the system to a Markov chain by defining the states as follows:¹

- State 0: The stock increased both today and yesterday.
- State 1: The stock increased today and decreased yesterday.
- State 2: The stock decreased today and increased yesterday.
- State 3: The stock decreased both today and yesterday.
This leads to a four-state Markov chain with the following transition matrix:

$$
P = \begin{pmatrix}
0.9 & 0 & 0.1 & 0 \\
0.6 & 0 & 0.4 & 0 \\
0 & 0.5 & 0 & 0.5 \\
0 & 0.3 & 0 & 0.7
\end{pmatrix},
$$

where the rows and columns are indexed by the states 0, 1, 2, 3.

**A Gambling Example.** Another example involves gambling. Suppose that a player has \$1 and with each play of the game wins \$1 with probability p > 0 or loses \$1 with probability 1 − p. The game ends when the player either accumulates \$3 or goes broke.

¹ This example demonstrates that Markov chains are able to incorporate arbitrary amounts of history, but at the cost of significantly increasing the number of states.
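The two-state stock chain above also illustrates long-run behavior: repeatedly applying the transition matrix to an initial distribution converges to the stationary distribution. This power-iteration check is our own illustrative addition, not a method from the notes:

```python
# Two-state stock chain: state 0 = "up today", state 1 = "down today".
P = [[0.7, 0.3],
     [0.5, 0.5]]

# Iterate pi_{k+1} = pi_k P starting from "the stock went up today".
pi = [1.0, 0.0]
for _ in range(100):
    pi = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]

print([round(x, 4) for x in pi])  # long-run fractions of up/down days
```

Solving pi = pi P directly gives pi = (0.625, 0.375), so in the long run the stock goes up on 62.5 percent of days; the iteration converges to the same values.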
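The gambling example is a gambler's-ruin chain with absorbing states at \$0 and \$3. A small sketch, assuming a fair game (p = 0.5, a value we choose for illustration since the text leaves p general), solves for the probability of reaching \$3 from each starting fortune:

```python
# States are the player's fortune 0..3; 0 and 3 are absorbing.
# h[i] = P(reach $3 before going broke | current fortune is $i),
# satisfying h(i) = p*h(i+1) + (1-p)*h(i-1) with h(0) = 0, h(3) = 1.
p = 0.5  # assumed win probability for this illustration

h = [0.0, 0.0, 0.0, 1.0]
for _ in range(10000):  # simple fixed-point iteration on the two unknowns
    h[1] = p * h[2] + (1 - p) * h[0]
    h[2] = p * h[3] + (1 - p) * h[1]

print(round(h[1], 4))  # chance of reaching $3 from the starting stake of $1
```

For the fair game this converges to h(1) = 1/3 and h(2) = 2/3, matching the classical gambler's-ruin formula.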