
Regular and absorbing Markov processes

Go to Part A: Markov processes: Basics
This tutorial: Part B: Regular and absorbing Markov processes

(This topic is also in Section 8.7 in Finite Mathematics and Applied Calculus)

%%Note To follow this tutorial, you should know what is meant by a Markov system and its associated matrix as described in %%partAtut, and how to set up and multiply matrices as described in the %%mtut.

Resources

Matrix algebra tool
Markov system simulation
Markov system computation utility

Long term behavior of Markov systems

If $P$ is the transition matrix for a Markov system, and if $v ={}$[x, y, z, ...] (as many unknowns as states in the Markov system) is a distribution vector with the property that
$vP = v$,
then we refer to $v$ as a steady state (distribution) vector.

#[Example of a steady state distribution vector][Ejemplo de un vector de distribución de estado estable]#

#[Weather][Clima]#: #[Let's go back to the two-state weather model from %%partAtut, where State 1 is "Sunny" and State 2 is "Cloudy":][Volvamos al modelo meteorológico de dos estados de la %%partAtut, donde el estado 1 es "Soleado" y el estado 2 es "Nublado":]#
with transition matrix $P={}$ [.4,.6;.2,.8]. If we happen to choose $v ={}$[.25,.75], #[then we find][entonces encontramos]#
$vP$ \t ${}={}$ [.25,.75][.4,.6;.2,.8] \\ \t ${}={}$ [.25,.75]${}=v$
#[So $v ={}$[.25,.75] happens to be a steady state vector for our "weather" Markov system (see the calculation in the example below).][Entonces $v ={}$[.25,.75] resulta ser un vector de estado estable para nuestro sistema Markov "climático" (ver el cálculo en el ejemplo que sigue).]#
#[Your turn][Tu turno]#

#[Q][P]#: Why is such a vector called a "steady state" vector?
#[A][R]#: If we look at our simple weather model above, the equation $vP = v$ when $v ={}$[.25,.75] tells us that, if today there is a 25% chance of it being sunny and a 75% chance of it being cloudy, then the probabilities for tomorrow are the same: once again a 25% chance of it being sunny and a 75% chance of it being cloudy.
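As a quick check outside the tools listed above, here is a minimal plain-Python sketch of this verification (the variable names are ours, and we use exact fractions from the standard library rather than decimals):

```python
from fractions import Fraction as F

P = [[F(4, 10), F(6, 10)],    # transition matrix of the weather system
     [F(2, 10), F(8, 10)]]
v = [F(25, 100), F(75, 100)]  # candidate steady-state vector [.25, .75]

# Row vector times matrix: (vP)_j = sum over i of v_i * P_ij
vP = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]
print(vP == v)   # True: vP = v, so v is a steady-state vector
```

Because the arithmetic is exact, the comparison `vP == v` tests the steady-state equation with no rounding error.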

#[Q][P]#: So how do we find such a steady state vector?
#[A][R]#: To find a steady state distribution vector for a Markov System with transition matrix $P$, we need to find a probability distribution vector $v = {}$[x, y, z, ...] (so that its entries add up to 1) such that $vP = v$. So, we must solve the following system:
$x + y + z + ...$ \t ${}= 1$ \\
\\ [x, y, z, ...]$P$ \t ${}=$ [x, y, z, ...]
#[(This system of equations is overdetermined because the matrix equation gives a redundant system.)][(Este sistema de ecuaciones está sobredeterminado porque la ecuación matricial da un sistema redundante.)]#

#[Example: Calculating a steady state distribution vector][Ejemplo: Cálculo de un vector de distribución de estado estable]#

#[Weather][Clima]#: #[Let's go back to the two-state weather model from %%partAtut, where State 1 is "Sunny" and State 2 is "Cloudy" with][Volvamos al modelo meteorológico de dos estados de la %%partAtut, donde el estado 1 es "Soleado" y el estado 2 es "Nublado" con]#
$P={}$ [.4,.6;.2,.8].

#[Our system of equations to solve is therefore][Nuestro sistema de ecuaciones para resolver es entonces]#
$x + y$ \t ${}= 1$ \\
\\ [x, y][.4,.6;.2,.8] \t ${}=$ [x, y].
#[The second equation above gives][La segunda ecuación anterior da]#
[.4x + .2y, .6x + .8y] \t ${}=$ [x, y] \\
#[implying that][lo que implica que]#
\\ $.4x + .2y = x$ #[or][o]# $-.6x + .2y = 0$ \\ #[and][y]# $.6x + .8y = y$ #[or][o]# $.6x - .2y = 0$
#[So we get the following system of equations to solve:][Así obtenemos el siguiente sistema de ecuaciones para resolver:]#
$x + y$ \t ${}= 1$ \\ $-.6x + .2y$ \t ${}= 0$ \\ $.6x - .2y$ \t ${}= 0$
#[Instead of solving this system by hand, let's use the ][En lugar de resolver este sistema a mano, utilicemos el ]# Matrix algebra tool #[to row-reduce the coefficient matrix][para reducir por filas la matriz de coeficientes]#
[1,1,1;-.6,.2,0;.6,-.2,0] \t #[which reduces to][que se reduce a]# [1,0,0.25;0,1,0.75;0,0,0].

#[So, the solution is][Así, la solución es]#
$x = .25, y = .75$,
#[and so $v ={}$[x,y] ${}={}$[.25,.75] is the unique steady state vector for our "weather" Markov system.][y por lo tanto, $v ={}$[x,y] ${}={}$[.25,.75] es el único vector de estado estable para nuestro sistema Markov "climático".]#
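Since the two remaining equations here are just $x + y = 1$ and $-.6x + .2y = 0$, you can also solve the system directly, for instance by Cramer's rule. Here is a short plain-Python sketch doing exactly that (the variable names are ours; exact fractions avoid decimal round-off):

```python
from fractions import Fraction as F

# Coefficients of   x + y = 1   and   -.6x + .2y = 0
a, b, e = F(1), F(1), F(1)
c, d, f = F(-6, 10), F(2, 10), F(0)

det = a * d - b * c          # = 1/5 + 3/5 = 4/5
x = (e * d - b * f) / det    # Cramer's rule for a 2x2 system
y = (a * f - e * c) / det
print(x, y)   # 1/4 3/4, i.e. v = [.25, .75]
```

This reproduces the row-reduced solution $x = .25$, $y = .75$.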
#[Your turn][Tu turno]#

%%Q: It looks like we obtained a unique solution every time we solved for steady state vectors above. Does this mean that there always is a unique steady state vector for every Markov system?
%%A: #[Yes and no... It can be proved that every Markov system does have at least one steady state distribution vector, although some Markov systems may have more than one: As a simple example, if our Markov system happens to have the identity matrix as its transition matrix, $P=I$, then every distribution vector $v$ is a steady state distribution vector, because $vI = v$ for every vector $v$.][Sí y no... Se puede demostrar que cada sistema Markov tiene al menos un vector de distribución de estado estable, aunque algunos sistemas Markov pueden tener más de uno: Como ejemplo simple, si nuestro sistema Markov tiene como matriz de transición la matriz identidad, $P=I$, entonces cada vector de distribución $v$ es un vector de distribución de estado estable, porque $vI = v$ para cada vector $v$.]#

#[However, there is a certain class of Markov systems, called regular Markov systems, that are guaranteed to have unique steady state distribution vectors:][Sin embargo, existe una cierta clase de sistemas Markov, llamados sistemas Markov regulares, que tienen garantizados vectores de distribución de estado estable únicos:]#

Regular Markov systems
#[A Markov system is regular if some power $P^n = P \cdot P \cdot ... \cdot P$ of its transition matrix $P$ has no zero entries. A little thought will convince you that, as there can be no negative entries in any transition matrix, all larger powers of $P$ will have no zero entries either.][Un sistema de Markov es regular si alguna potencia $P^n = P \cdot P \cdot ... \cdot P$ de su matriz de transición $P$ no tiene entradas cero. Si lo piensas un poco, te convencerás de que, como no puede haber entradas negativas en ninguna matriz de transición, todas las potencias mayores de $P$ tampoco tendrán entradas cero.]#

#[Properties of regular Markov systems][Propiedades de sistemas Markov regulares]#
  1. #[Higher and higher powers of $P$ approach a unique matrix—Let's call it $P^{\infty}$—all of whose rows are the same.][Potencias cada vez más altas de $P$ se acercan a una matriz única (llamémosla $P^{\infty}$) cuyas filas son todas iguales.]#
  2. #[The rows of $P^{\infty}$ are all the same, and each row is a copy of the unique steady state vector $v$.][Las filas de $P^{\infty}$ son todas iguales, y cada fila es una copia del vector de estado estable único $v$.]#
  3. #[You can calculate $P^{\infty}$ quite accurately using the %%malgtool or the %%marktool by just using a high enough power, like $P^{256}$ (the tool is faster and more accurate with exponents like 256 that are large powers of 2).][Puedes aproximar $P^{\infty}$ con bastante precisión usando %%malgtool o %%marktool simplemente usando una potencia lo suficientemente alta, como $P^{256}$ (la herramienta es más rápida y precisa con exponentes como 256 que son grandes potencias de 2).]#
%%Example
#[For example, go to the %%malgtool, select fraction mode, and enter][Por ejemplo, ve al %%malgtool, selecciona modo fraccionario, e ingresa]#
    P = [.7,.3;.4,.6]
#[in the input area, and enter P,P^256 in the formula field below that to see both $P$ and its 256th power,][en el área de entrada, e ingresa P,P^256 en el campo de fórmula debajo de eso para ver tanto $P$ como su potencia 256,]#
$P^{\infty} \approx P^{256} \approx{}$ [4/7,3/7;4/7,3/7]
#[Each row of $P^{\infty}$ gives a copy of the unique associated steady state vector,][Cada fila de $P^{\infty}$ da una copia del único vector de estado estable asociado,]#
$v ={}$ [4/7,3/7]
#[If $P$ is not the transition matrix of a regular system, then (1) and (2) above need not be true; for instance, if][Si $P$ no es la matriz de transición de un sistema regular, entonces (1) y (2) anteriores no necesitan ser verdaderos; por ejemplo, si]#
$P ={}$ [0,1,0;0,0,1;1,0,0],
#[then (1) fails, as the powers of $P$ cycle through $P$, $P^2$, and $P^3 = I$. If][entonces (1) falla, ya que las potencias de $P$ ciclan entre $P$, $P^2$ y $P^3 = I$. Si]#
$P ={}$ [1/2,1/2,0;1,0,0;0,0,1]
#[we find][encontramos]#
$P^{\infty} ={}$[2/3,1/3,0;2/3,1/3,0;0,0,1]
#[giving us two steady-state vectors:][dándonos dos vectores de estado estable:]#
$v_1$ = [2/3,1/3,0]
%%and
$v_2$ = [0,0,1]
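You can confirm both failure modes numerically. The following sketch (our own helper `matmul` again, plain Python) checks that the cyclic matrix satisfies $P^3 = I$, and that the second matrix's high powers converge to a matrix whose rows are not all equal:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The cyclic example: powers cycle through P, P^2, P^3 = I forever
C = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
P3 = matmul(matmul(C, C), C)
print(P3 == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True: P^3 = I

# The second example: powers do converge, but the rows of P^infinity differ
P = [[0.5, 0.5, 0], [1, 0, 0], [0, 0, 1]]
M = P
for _ in range(8):   # M = P^256, a good stand-in for P^infinity
    M = matmul(M, M)
print(M)   # rows approach [2/3, 1/3, 0], [2/3, 1/3, 0], and [0, 0, 1]
```

So the first matrix has no $P^{\infty}$ at all, while the second has a $P^{\infty}$ but two different steady-state vectors, exactly as claimed.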
Here, for your convenience, is a small copy of the %%malgtool you can use to play with some examples like those above:

Absorbing Markov systems

An absorbing state is a state from which there is a zero probability of exiting. An absorbing Markov system is a Markov system that contains at least one absorbing state, and is such that it is possible to get from each non-absorbing state to some absorbing state in one or more time-steps.

#[Following is the transition diagram of an absorbing Markov system with two absorbing states (states 3 and 4):][A continuación se muestra el diagrama de transición de un sistema de Markov absorbente con dos estados absorbentes (estados 3 y 4):]#

In analyzing an absorbing system, we number the states so that the absorbing states come last, as we have in the diagram above. The transition matrix $P$ of an absorbing system can be divided into four blocks as shown:
$P = {}$[
$\ $S$\ $
,
$\ $T$\ $
;
$\ $0$\ $
,
$\ $I$\ $
]
${}={}$ [1/3,1/3,1/6,1/6;0,1/2,1/4,1/4;0,0,1,0;0,0,0,1]
Here, for an absorbing Markov system with $n$ absorbing states and $m$ nonabsorbing states, the four boxed parts of the matrix are the following:
  1. #[$S$ is the $m \times m$ matrix representing the movement among the nonabsorbing states. In our example:][$S$ es la matriz $m \times m$ que representa el movimiento entre los estados no absorbentes. En nuestro ejemplo:]#
    $S ={}$[1/3,1/3;0,1/2] \gap[20] #[Top left block][Bloque superior izquierdo]#
  2. $T$ is the $m \times n$ matrix representing movement from nonabsorbing states to absorbing states
    $T ={}$[1/6,1/6;1/4,1/4] \gap[20] #[Top right block][Bloque superior derecho]#
  3. $0$ is the zero $n \times m$ matrix representing the lack of movement from absorbing states to nonabsorbing states
    $0 ={}$[0,0;0,0] \gap[20] #[Bottom left block][Bloque inferior izquierdo]#
  4. #[$I$ is the $n \times n$ identity matrix representing the only possible movement in the absorbing states. In our example:][$I$ es la matriz identidad de $n \times n$ que representa el único movimiento posible en los estados absorbentes. En nuestro ejemplo:]#
    $I ={}$[1,0;0,1] \gap[20] #[Bottom right block][Bloque inferior derecho]#
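Extracting the four blocks is just a matter of slicing the matrix at row and column $m$. Here is a plain-Python sketch for the example above (the names `S`, `T`, `Z`, `I` and the helper `show` are ours; exact fractions keep the entries readable):

```python
from fractions import Fraction as F

P = [[F(1, 3), F(1, 3), F(1, 6), F(1, 6)],
     [F(0),    F(1, 2), F(1, 4), F(1, 4)],
     [F(0),    F(0),    F(1),    F(0)],
     [F(0),    F(0),    F(0),    F(1)]]
m, n = 2, 2   # 2 nonabsorbing states, 2 absorbing states

S = [row[:m] for row in P[:m]]   # top left: among nonabsorbing states
T = [row[m:] for row in P[:m]]   # top right: into absorbing states
Z = [row[:m] for row in P[m:]]   # bottom left: the zero matrix
I = [row[m:] for row in P[m:]]   # bottom right: the identity matrix

def show(M):   # render the fractions readably
    return [[str(x) for x in row] for row in M]

print(show(S))   # [['1/3', '1/3'], ['0', '1/2']]
print(show(T))   # [['1/6', '1/6'], ['1/4', '1/4']]
```

This slicing works only because the absorbing states were numbered last, which is why we reorder the states first.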
#[Your turn][Tu turno]#

%%Q: So what is the point of setting up the matrices $S$ and $T$?
%%A: They actually tell us a great deal about the long-term behavior of absorbing Markov systems:

Behavior of Absorbing Markov systems

#[Note][Nota]# #[The material that follows goes beyond the discussion in the textbook.][El material que sigue va más allá de lo discutido en el libro de texto.]#

#[Long term behavior of absorbing Markov systems][Comportamiento a largo plazo de los sistemas absorbentes de Markov]#

#[If][Si]#
$P = {}$[
$\ $S$\ $
,
$\ $T$\ $
;
$\ $0$\ $
,
$\ $I$\ $
]
#[is the transition matrix of an absorbing system, then define the fundamental matrix of the system to be][es la matriz de transición de un sistema absorbente, entonces definimos la matriz fundamental del sistema como]#
$Q = (I-S)^{-1}$,
#[where $I$ is the identity matrix of the same dimensions as $S$. Then][donde $I$ es la matriz identidad de las mismas dimensiones que $S$. Entonces]#
  1. The $ij$ entry of $Q$ is the expected number of times, starting in State $i$, that you will visit State $j$ (or, if $j = i$, return to State $i$) before absorption.
  2. The sum of all the entries in the $i$th row of $Q$ is the expected number of steps before absorption if you start in State $i$.
  3. The product $QT$ gives the probabilities of eventually winding up in the different absorbing states from the different non-absorbing states. For instance, if the $i$th row of $QT$ is $[p\ q\ r\ \cdots]$, then starting in State $i$, there is a probability $p$ of ending up in the first absorbing state, a probability $q$ of ending up in the second absorbing state, and so on.
Now try some of the exercises in Section 8.7 in Finite Mathematics and Applied Calculus, or move ahead by pressing "Next tutorial" on the sidebar.
Last Updated: July 2025
Copyright © 2022
Stefan Waner and Steven R. Costenoble
