Summary of Chapter 9 in
Finite Mathematics /
Finite Mathematics & Applied Calculus
Topic: Markov Systems


Utilities: Matrix Algebra Tool | Markov Process Simulation | Markov Process Utility

Markov Systems, State Transition Diagram, Transition Matrix | Distribution Vectors and Powers of the Transition Matrix | Long Term Behavior of Markov Systems | Absorbing Markov Systems | Calculating the Expected Number of Steps to Absorption

 
Markov Systems, State Transition Diagram, Transition Matrix

A Markov system (or Markov process or Markov chain) is a system that can be in one of several (numbered) states, and can pass from one state to another each time step according to fixed probabilities.

If a Markov system is in state i, there is a fixed probability, p_ij, of it going into state j the next time step, and p_ij is called a transition probability.

A Markov system can be illustrated by means of a state transition diagram, which is a diagram showing all the states and transition probabilities. (See the example below.)

The matrix P whose ij-th entry is p_ij is called the transition matrix associated with the system. The entries in each row add up to 1. Thus, for instance, a 2×2 transition matrix P would be set up as follows:

    P = [ p_11   p_12 ]
        [ p_21   p_22 ]
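If you would like to experiment on a computer as well as with our on-line utilities, here is a minimal sketch in Python using the numpy library (both the library and the particular entries are our choices for illustration, not part of the text):

    import numpy as np

    # A hypothetical 2x2 transition matrix: P[i, j] holds the probability
    # p_(i+1)(j+1) of passing from state i+1 to state j+1 in one time step.
    P = np.array([[0.5, 0.5],
                  [0.3, 0.7]])

    # The entries in each row of a transition matrix must add up to 1.
    assert np.allclose(P.sum(axis=1), 1)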

Example

Transition Diagram: (not shown here; missing arrows indicate zero probability.)

Matrix: (not shown here.)


Distribution Vectors and Powers of the Transition Matrix

A distribution vector is a row vector with one nonnegative entry for each state in the system. The entries can represent the number of individuals in each state of the system.

A probability vector is a row vector in which the entries are nonnegative and add up to 1. The entries in a probability vector can represent the probabilities of finding a system in each of the states.

If v is an initial distribution vector and P is the transition matrix for a Markov system, then the distribution vector after 1 step is the matrix product vP.

    Distribution After 1 Step:   vP
The distribution one step later, obtained by again multiplying by P, is given by (vP)P = vP^2.
    Distribution After 2 Steps:   vP^2
Similarly, the distribution after n steps can be obtained by multiplying v on the right by P n times, that is, by multiplying v by P^n.
    Distribution After n Steps:   vP^n

P^2 is the two-step transition matrix for the system. Similarly, P^3 is the three-step transition matrix, and P^n is the n-step transition matrix. This means that the ij-th entry in P^n is the probability that the system will pass from state i to state j in n steps.
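In Python with numpy (our choice of tools, not the text's), the distribution after n steps can be computed directly from this formula; the function name distribution_after below is ours:

    import numpy as np
    from numpy.linalg import matrix_power

    def distribution_after(v, P, n):
        # The distribution after n steps is the row vector v times P^n.
        return np.asarray(v) @ matrix_power(np.asarray(P), n)

For example, distribution_after([100, 200, 300], P, 2) reproduces the two-step distribution computed in the example below.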

Try our on-line utility, which computes powers of the transition matrix and its action on the distribution vector. Even better (and far more flexible) is our Matrix Algebra Tool (updated in November 2003), where you can compute several things simultaneously, have the answers shown in fraction or decimal form, and also compute inverses and even complicated expressions involving several matrices.

Examples

Let

    P = [ 0.2   0.8   0   ]
        [ 0.4   0     0.6 ]
        [ 0.5   0.5   0   ]

and let v = [ 100   200   300 ] be an initial distribution. Then the distribution after one step is given by
    vP = [ 100   200   300 ] [ 0.2   0.8   0   ]
                             [ 0.4   0     0.6 ]
                             [ 0.5   0.5   0   ]

       = [ 250   230   120 ]

The distribution one step later is given by
    vP^2 = (vP)P = [ 250   230   120 ] [ 0.2   0.8   0   ]
                                       [ 0.4   0     0.6 ]
                                       [ 0.5   0.5   0   ]

         = [ 202   260   138 ].

To obtain the two-step transition matrix, we calculate
    P^2 = [ 0.2   0.8   0   ] [ 0.2   0.8   0   ]
          [ 0.4   0     0.6 ] [ 0.4   0     0.6 ]
          [ 0.5   0.5   0   ] [ 0.5   0.5   0   ]

        = [ 0.36   0.16   0.48 ]
          [ 0.38   0.62   0    ]
          [ 0.3    0.4    0.3  ]

Thus, for example, the probability of going from State 3 to State 1 in two steps is given by the 3,1-entry in P^2, namely 0.3.
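All of the arithmetic in this example can be checked in a few lines of Python with numpy (again our tooling, not the text's):

    import numpy as np
    from numpy.linalg import matrix_power

    P = np.array([[0.2, 0.8, 0],
                  [0.4, 0,   0.6],
                  [0.5, 0.5, 0]])
    v = np.array([100, 200, 300])

    print(v @ P)                     # distribution after 1 step: [250. 230. 120.]
    print(v @ matrix_power(P, 2))    # distribution after 2 steps: [202. 260. 138.]
    print(matrix_power(P, 2)[2, 0])  # the 3,1-entry of P^2 (indices are 0-based): 0.3, up to round-off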

Long Term Behavior of Markov Systems

If P is a transition matrix for a Markov system, and if v is a distribution vector with the property that vP = v, then we refer to v as a steady state (distribution) vector.

To find a steady state distribution for a Markov System with transition matrix P, we solve the system of equations given by

    x + y + z + ... = 1
    [x   y   z ...] P = [x   y   z ...]
where we use as many unknowns as there are states in the Markov system. A steady state probability vector is then given by
    v = [x   y   z ...]
A regular Markov system is one for which some power of its transition matrix has no zero entries. A regular Markov system has only one steady state probability vector.
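If you prefer to let a computer solve the system of equations above, here is a sketch in Python with numpy (our choice, not the text's), using the three-state transition matrix from the examples on this page:

    import numpy as np

    P = np.array([[0.2, 0.8, 0],
                  [0.4, 0,   0.6],
                  [0.5, 0.5, 0]])
    n = P.shape[0]

    # vP = v is the same as v(P - I) = 0; transpose it so the unknown is a
    # column, and append the extra equation x + y + z = 1.
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1

    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(v)  # approximately [0.3535, 0.4040, 0.2424] = [35/99, 40/99, 24/99]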

Long Term Behavior: If the higher and higher powers of P approach a fixed matrix P^∞, we refer to P^∞ as the steady state or long-term transition matrix. If a Markov system is regular, then its long-term transition matrix is the square matrix whose rows are all the same and equal to the steady state probability vector

[x   y   z ...].

Computing the Steady State Matrix Numerically

Using technology, it is often possible to approximate P^∞ with great accuracy by simply computing a very large power of P. How large? You know it is large enough when the rows are all the same to within the accuracy you desire. For small matrices like the ones in the textbook, P^256 is usually more than good enough. (256 is a power of 2, so the computation of P^256 is especially fast on our Matrix Algebra Tool. Try it!) Warning: If the matrix does not stabilize fairly quickly, then there is a danger of significant computational error; the larger the number of computations required, the more serious this problem becomes.
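In Python with numpy, this numerical check takes only a few lines (a sketch under the same assumptions as above):

    import numpy as np
    from numpy.linalg import matrix_power

    P = np.array([[0.2, 0.8, 0],
                  [0.4, 0,   0.6],
                  [0.5, 0.5, 0]])

    P_inf = matrix_power(P, 256)         # a very large power of P
    print(P_inf)                         # each row is approximately [0.3535, 0.4040, 0.2424]
    print(np.allclose(P_inf, P_inf[0]))  # True once the rows all agree (the powers have stabilized)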

Example

The transition matrix

    P = [ 0.2   0.8   0   ]
        [ 0.4   0     0.6 ]
        [ 0.5   0.5   0   ]

that we looked at above is regular, since P^3 contains only non-zero entries. (Check this on your graphing calculator or on our on-line utility.)

The steady state probability vector is given by

v = [35/99   40/99   24/99],
so that
    vP = [ 35/99   40/99   24/99 ] [ 0.2   0.8   0   ]
                                   [ 0.4   0     0.6 ]
                                   [ 0.5   0.5   0   ]

       = [ 35/99   40/99   24/99 ]
       = v.

The long-term transition matrix is therefore

    P^∞ = [ 35/99   40/99   24/99 ]
          [ 35/99   40/99   24/99 ]
          [ 35/99   40/99   24/99 ]

(To check this, calculate P^256 on the on-line utility.)

Using the Matrix Algebra Tool

Open the Matrix Algebra Tool and enter the matrix P using the following format, with commas between the entries in each row. (If you like, you can just copy the text below and paste it into the window.)

    P = [0.2, 0.8, 0
    0.4, 0, 0.6
    0.5, 0.5, 0]
Then, in the formula field, enter the formula P^256 and press "Compute". If you want to see the answer as fractions rather than decimals, make sure "Fraction Mode" is checked (it is located just under the buttons).

Absorbing Markov Systems

An absorbing state is a state from which there is a zero probability of exiting. An absorbing Markov system is a Markov system that contains at least one absorbing state, and is such that it is possible to get from each non-absorbing state to some absorbing state in one or more time-steps.

In analyzing an absorbing system, we number the states so that the absorbing states come last. The transition matrix P of an absorbing system then looks like this:

    P = [ S   T ]
        [ 0   I ]

Here I is an m×m identity matrix (m = the number of absorbing states), S is a square (n-m)×(n-m) matrix (n = the total number of states, so n-m = the number of non-absorbing states), 0 is a zero matrix, and T is an (n-m)×m matrix.

The matrix S gives the transition probabilities for movement among the non-absorbing states. The fundamental matrix for the absorbing system is the matrix

    Q = (I - S)^-1.
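As a computational aside, the fundamental matrix can be found in Python with numpy (our tooling, not the text's; the matrix S is the one from the example further below):

    import numpy as np

    # S: transition probabilities among the non-absorbing states.
    S = np.array([[0.25, 0],
                  [0.2,  0.2]])

    I = np.eye(S.shape[0])
    Q = np.linalg.inv(I - S)  # the fundamental matrix Q = (I - S)^-1
    print(Q)                  # approximately [[4/3, 0], [1/3, 5/4]]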
Example

Transition Diagram: (not shown here. States 1 and 2 are absorbing; missing arrows indicate zero probability.)

Matrix: (not shown here.)

Calculating the Expected Number of Steps to Absorption

To obtain information about the time to absorption in an absorbing Markov system, we first calculate the fundamental matrix Q.

The number of times, starting in state i, you expect to visit state j before absorption is the ij-th entry of Q.

The total number of steps you expect to take before absorption, starting in state i, equals the total number of visits you expect to make to all the non-absorbing states; this is the sum of all the entries in the i-th row of Q.

The product QT gives the probabilities of winding up in the different absorbing states. For instance, if the i-th row of QT is [x   y   z   t], then, starting in state i, there is a probability x of winding up in the first absorbing state, a probability y of winding up in the second absorbing state, and so on.
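Here is a Python/numpy sketch of both calculations, using the matrices S and T from the example below:

    import numpy as np

    S = np.array([[0.25, 0],
                  [0.2,  0.2]])
    T = np.array([[0.5, 0.25],
                  [0,   0.6]])

    Q = np.linalg.inv(np.eye(2) - S)  # the fundamental matrix

    # Expected number of steps before absorption, starting in each
    # non-absorbing state (the row sums of Q):
    print(Q.sum(axis=1))  # approximately [4/3, 19/12]
    # Probabilities of winding up in each absorbing state:
    print(Q @ T)          # approximately [[2/3, 1/3], [1/6, 5/6]]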

Example

In the above example,

    Q = (I - S)^-1
      = [ 0.75   0   ]^-1
        [ -0.2   0.8 ]
      = [ 4/3   0   ]
        [ 1/3   5/4 ]

Thus, for example, the number of times, starting in State 2, you expect to visit State 1 before absorption is the (2,1) entry of Q. That entry is 1/3. In other words, if you start in State 2, you can expect to visit State 1 an average of 1/3 times before absorption.

The product QT is

    QT = [ 2/3   1/3 ]
         [ 1/6   5/6 ]

Since the second row is [1/6   5/6], this means that, starting in State 2, there is a 1 in 6 chance of winding up in the first absorbing state (State 5), and a 5 in 6 chance of winding up in the second absorbing state (State 6).

Using the Matrix Algebra Tool

Computing Q: Open the Matrix Algebra Tool and enter the matrix S using the following format, with commas between the entries in each row. (If you like, you can just copy the text below and paste it into the window.)

    S = [0.25, 0
    0.2, 0.2]
Then, in the formula field, enter the formula (I-S)^-1 and press "Compute". (The utility is clever enough to know which identity matrix you mean by "I", so you need not specify it.) If you want to see the answer as fractions rather than decimals, make sure "Fraction Mode" is checked.

Computing QT: Open the Matrix Algebra Tool and enter the matrices S and T using the following format. (If you like, you can just copy the text below and paste it into the window.)

    S = [0.25, 0
    0.2, 0.2]

    T = [0.5, 0.25
    0, 0.6]

Then, in the formula field, enter the formula (I-S)^-1*T and press "Compute".


Last Updated: January, 2004
Copyright © 1995-2004 Stefan Waner and Steven R. Costenoble
