11.3.3 The Generator Matrix
Here, we introduce the generator matrix, usually denoted by $G$, which gives us an alternative way of analyzing continuous-time Markov chains. Consider a continuous-time Markov chain $X(t)$ and assume $X(0)=i$. The chain jumps to the next state at time $T_1$, where $T_1 \sim Exponential(\lambda_i)$. In particular, for a very small $\delta>0$, we can write \begin{align*} P(T_1 < \delta)&=1-e^{-\lambda_i \delta}\\ &\approx 1-(1-\lambda_i \delta)\\ &=\lambda_i \delta. \end{align*} Thus, in a short interval of length $\delta$, the probability of leaving state $i$ is approximately $\lambda_i \delta$. For this reason, $\lambda_i$ is often called the transition rate out of state $i$. Formally, we can write \begin{align} \lambda_i =\lim_{\delta \rightarrow 0^{+}} \left[ \frac{P\big(X(\delta)\neq i | X(0)=i\big)}{\delta}\right]. \hspace{30pt} (11.7) \end{align}

Since we go from state $i$ to state $j$ with probability $p_{ij}$, we call the quantity $g_{ij}=\lambda_i p_{ij}$ the transition rate from state $i$ to state $j$. The generator matrix $G$ is the matrix whose $(i,j)$th element is $g_{ij}$, for $i \neq j$. We choose the diagonal elements $g_{ii}$ of $G$ such that the rows of $G$ sum to $0$. That is, we let \begin{align*} g_{ii}&=-\sum_{j \neq i} g_{ij}\\ &=-\sum_{j \neq i}\lambda_i p_{ij}\\ &=-\lambda_i \sum_{j \neq i}p_{ij}\\ &=-\lambda_i. \end{align*} The last equality holds for the following reason: if $\lambda_i=0$, then clearly $\lambda_i \sum_{j \neq i}p_{ij}=\lambda_i=0$; if $\lambda_i \neq 0$, then $p_{ii}=0$ (no self-transitions), so $\sum_{j \neq i}p_{ij}=1$. It turns out that the generator matrix is very useful in analyzing continuous-time Markov chains.
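To make the construction concrete, here is a minimal sketch in Python. The holding rates and jump-chain probabilities are made-up illustrative values (chosen so that the result reproduces the generator in Equation 11.8 below); any chain with $p_{ii}=0$ works the same way.

```python
import numpy as np

# Hypothetical holding rates lambda_i and jump-chain transition matrix
# (illustrative values; note p_ii = 0 and each row of P_jump sums to 1).
lam = np.array([5.0, 2.0, 4.0])
P_jump = np.array([[0.0,  1.0,  0.0],
                   [0.5,  0.0,  0.5],
                   [0.75, 0.25, 0.0]])

# Off-diagonal entries g_ij = lambda_i * p_ij; diagonal entries g_ii = -lambda_i.
G = lam[:, None] * P_jump
np.fill_diagonal(G, -lam)

print(G)               # the generator matrix
print(G.sum(axis=1))   # [0. 0. 0.] -- each row sums to zero
```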
Example
Explain why the following approximations hold:
- $p_{jj}(\delta) \approx 1+g_{jj} \delta$, for all $j \in S$.
- $p_{kj}(\delta) \approx \delta g_{kj}$, for $k \neq j$.
- Solution
- Let $\delta$ be small. Equation 11.7 can be written as \begin{align*} p_{jj}(\delta) &\approx 1-\lambda_j \delta \\ &=1+g_{jj} \delta. \end{align*} Also, we can approximate $p_{kj}(\delta)=P(X(\delta)=j|X(0)=k)$ as follows. This probability is approximately equal to the probability that we have a single transition from state $k$ to state $j$ in the interval $[0,\delta]$. Note that the probability of more than one transition is negligible if $\delta$ is small (refer to the Poisson process section). Thus, we can write \begin{align*} p_{kj}(\delta)&=P(X(\delta)=j|X(0)=k)\\ &\approx P(X(\delta) \neq k |X(0)=k) p_{kj}\\ &\approx \lambda_k \delta p_{kj}\\ &=\delta g_{kj}, \textrm{ for }k \neq j. \end{align*} We can state the above approximations more precisely as
- $g_{jj}=-\lim_{\delta \rightarrow 0^{+}} \left[ \frac{1-p_{jj}(\delta)}{\delta}\right], \textrm{ for all } j \in S$;
- $g_{kj}= \lim_{\delta \rightarrow 0^{+}} \left[ \frac{p_{kj}(\delta)}{\delta} \right]$, for $k \neq j$.
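These limits can be sanity-checked numerically. The sketch below uses the standard fact, consistent with the equations derived later in this section, that $P(\delta)=e^{\delta G}$ for a finite-state chain, and compares it with the first-order approximation $I+\delta G$, whose entries are exactly $1+g_{jj}\delta$ and $\delta g_{kj}$:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

# Illustrative generator from the sketch above (rows sum to zero).
G = np.array([[-5.0,  5.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 3.0,  1.0, -4.0]])

for delta in [0.1, 0.01, 0.001]:
    P_delta = expm(delta * G)        # transition matrix over [0, delta]
    approx = np.eye(3) + delta * G   # entries 1 + g_jj*delta and delta*g_kj
    err = np.abs(P_delta - approx).max()
    print(f"delta={delta}: max error = {err:.2e}")
# The error shrinks like delta**2, so it vanishes faster than delta,
# matching the limits above.
```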
For a continuous-time Markov chain, we define the generator matrix $G$. The $(i,j)$th entry of the generator matrix is given by \begin{align} \nonumber g_{ij} = \left\{ \begin{array}{l l} \lambda_i p_{ij} & \quad \textrm{if } i \neq j \\[5pt] -\lambda_i & \quad \textrm{if } i = j \end{array} \right. \end{align}
Example
Consider the continuous-time Markov chain of Example 11.17: a chain with two states $S=\{0, 1\}$ and $\lambda_0=\lambda_1=\lambda>0$. In that example, we found that the transition matrix for any $t \geq 0$ is given by \begin{equation} \nonumber P(t) = \begin{bmatrix} \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} \\[5pt] \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} \\[5pt] \end{bmatrix}. \end{equation}
- Find the generator matrix $G$.
- Show that for any $t \geq 0$, we have \begin{align*} P'(t)= P(t) G= G P(t), \end{align*} where $P'(t)$ is the derivative of $P(t)$.
- Solution
- First, we have \begin{align*} g_{00}&=-\lambda_0 \\ &=-\lambda,\\ g_{11}&=-\lambda_1 \\ &=-\lambda. \end{align*} The transition matrix for the corresponding jump chain is given by \begin{equation} \nonumber P = \begin{bmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. \end{equation} Therefore, we have \begin{align*} g_{01}&=\lambda_0 p_{01} \\ &=\lambda,\\ g_{10}&=\lambda_1 p_{10}\\ &=\lambda. \end{align*} Thus, the generator matrix is given by \begin{align*} G= \begin{bmatrix} -\lambda & \lambda \\[5pt] \lambda & -\lambda \\[5pt] \end{bmatrix}. \end{align*}
- We have \begin{align*} P'(t)= \begin{bmatrix} -\lambda e^{-2\lambda t} & \lambda e^{-2\lambda t} \\[5pt] \lambda e^{-2\lambda t} & -\lambda e^{-2\lambda t} \\[5pt] \end{bmatrix}, \end{align*} where $P'(t)$ is the derivative of $P(t)$. We also have \begin{equation} \nonumber P(t) G= \begin{bmatrix} \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} \\[5pt] \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} \\[5pt] \end{bmatrix} \begin{bmatrix} -\lambda & \lambda \\[5pt] \lambda & -\lambda \\[5pt] \end{bmatrix}= \begin{bmatrix} -\lambda e^{-2\lambda t} & \lambda e^{-2\lambda t} \\[5pt] \lambda e^{-2\lambda t} & -\lambda e^{-2\lambda t} \\[5pt] \end{bmatrix}, \end{equation} \begin{equation} \nonumber G P(t) = \begin{bmatrix} -\lambda & \lambda \\[5pt] \lambda & -\lambda \\[5pt] \end{bmatrix} \begin{bmatrix} \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} \\[5pt] \frac{1}{2}-\frac{1}{2}e^{-2\lambda t} & \frac{1}{2}+\frac{1}{2}e^{-2\lambda t} \\[5pt] \end{bmatrix} = \begin{bmatrix} -\lambda e^{-2\lambda t} & \lambda e^{-2\lambda t} \\[5pt] \lambda e^{-2\lambda t} & -\lambda e^{-2\lambda t} \\[5pt] \end{bmatrix}. \end{equation} We conclude \begin{align*} P'(t)= P(t) G= G P(t). \end{align*}
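As a quick numerical double-check of part 2, we can plug arbitrary sample values of $\lambda$ and $t$ into the closed forms above:

```python
import numpy as np

lam, t = 1.5, 0.7  # arbitrary sample values
e = np.exp(-2 * lam * t)

P_t = 0.5 * np.array([[1 + e, 1 - e],     # closed-form P(t)
                      [1 - e, 1 + e]])
dP_t = lam * e * np.array([[-1.0,  1.0],  # closed-form P'(t)
                           [ 1.0, -1.0]])
G = lam * np.array([[-1.0,  1.0],         # generator from part 1
                    [ 1.0, -1.0]])

print(np.allclose(dP_t, P_t @ G))  # True
print(np.allclose(dP_t, G @ P_t))  # True
```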
The equation $P'(t)= P(t) G= G P(t)$ (in the above example) is in fact true in general. To see the proof idea, we can argue as follows. Let $\delta$ be small. By the first example above, we have \begin{align*} p_{jj}(\delta) &\approx 1+g_{jj} \delta,\\ p_{kj}(\delta)&\approx \delta g_{kj}, \quad \textrm{for } k \neq j. \end{align*} Using the Chapman-Kolmogorov equation, we can write \begin{align*} p_{ij}(t+\delta) &=\sum_{k \in S} p_{ik}(t)p_{kj}(\delta) \\ &=p_{ij}(t)p_{jj}(\delta)+\sum_{k \neq j} p_{ik}(t)p_{kj}(\delta)\\ &\approx p_{ij}(t)(1+g_{jj} \delta)+\sum_{k \neq j} p_{ik}(t)\delta g_{kj}\\ &= p_{ij}(t)+ \delta p_{ij}(t) g_{jj} + \delta \sum_{k \neq j} p_{ik}(t) g_{kj}\\ &= p_{ij}(t)+ \delta \sum_{k \in S} p_{ik}(t) g_{kj}. \end{align*} Thus, \begin{align*} \frac{p_{ij}(t+\delta)-p_{ij}(t)}{\delta} \approx \sum_{k \in S} p_{ik}(t) g_{kj}, \end{align*} and the right-hand side is the $(i,j)$th element of $P(t)G$; letting $\delta \rightarrow 0^{+}$ turns the left-hand side into $p'_{ij}(t)$. The above argument can be made rigorous.
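The difference-quotient step can also be illustrated numerically. The sketch below again takes $P(t)=e^{tG}$ as given (an assumption here, justified by the equations that follow), uses the Chapman-Kolmogorov relation $P(t+\delta)=P(t)P(\delta)$, and checks that the quotient approaches $P(t)G$:

```python
import numpy as np
from scipy.linalg import expm

# Same illustrative generator as before.
G = np.array([[-5.0,  5.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 3.0,  1.0, -4.0]])
t = 0.4
P_t = expm(t * G)

for delta in [1e-3, 1e-5, 1e-7]:
    # Chapman-Kolmogorov: P(t + delta) = P(t) P(delta)
    quotient = (P_t @ expm(delta * G) - P_t) / delta
    print(np.abs(quotient - P_t @ G).max())  # -> 0 as delta -> 0
```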
The forward equations state that $$P'(t)=P(t)G,$$ which is equivalent to $$p'_{ij}(t)=\sum_{k \in S} p_{ik}(t) g_{kj}, \textrm{ for all }i,j \in S.$$
The backward equations state that $$P'(t)=GP(t),$$ which is equivalent to $$p'_{ij}(t)=\sum_{k \in S} g_{ik} p_{kj}(t), \textrm{ for all }i,j \in S.$$
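For a finite state space, both systems of differential equations, together with the initial condition $P(0)=I$, are solved by the matrix exponential $P(t)=e^{tG}$. We do not prove this standard fact here, but we can check it against the closed form of the two-state example:

```python
import numpy as np
from scipy.linalg import expm

lam, t = 2.0, 0.3  # arbitrary sample values
G = lam * np.array([[-1.0,  1.0],
                    [ 1.0, -1.0]])

e = np.exp(-2 * lam * t)
P_closed = 0.5 * np.array([[1 + e, 1 - e],
                           [1 - e, 1 + e]])

print(np.allclose(expm(t * G), P_closed))  # True
```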
One of the main uses of the generator matrix is finding the stationary distribution. So far, we have seen how to find the stationary distribution using the jump chain. The following result tells us how to find it using the generator matrix.
Consider a continuous-time Markov chain $X(t)$ with state space $S$ and generator matrix $G$. A probability distribution $\pi$ on $S$ is a stationary distribution for $X(t)$ if and only if it satisfies \begin{align*} \pi G=0. \end{align*}
Proof:
For simplicity, let's assume that $S$ is finite, i.e., $\pi=[\pi_0, \pi_1, \cdots, \pi_r]$, for some $r \in \mathbb{N}$. If $\pi$ is a stationary distribution, then $\pi=\pi P(t)$ for all $t \geq 0$. Differentiating both sides with respect to $t$, we obtain
\begin{align*} 0&= \frac{d}{dt} [\pi P(t)] \\ &= \pi P'(t)\\ &=\pi G P(t) \quad (\textrm{by the backward equations}) \end{align*} Now, let $t=0$ and recall that $P(0)=I$, the identity matrix. We obtain \begin{align*} 0=\pi G P(0)=\pi G. \end{align*} Conversely, let $\pi$ be a probability distribution on $S$ that satisfies $\pi G=0$. By the backward equations, \begin{align*} P'(t)&= G P(t). \end{align*} Multiplying both sides on the left by $\pi$, we obtain \begin{align*} \pi P'(t)=\pi G P(t)=0. \end{align*} Note that $\pi P'(t)$ is the derivative of $\pi P(t)$. Thus, we conclude that $\pi P(t)$ does not depend on $t$. In particular, for any $t \geq 0$, we have \begin{align*} \pi P(t)=\pi P(0)=\pi. \end{align*} Therefore, $\pi$ is a stationary distribution.
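In practice, this theorem reduces finding the stationary distribution to a linear-algebra problem: solve $\pi G=0$ together with the normalization $\sum_{j} \pi_j=1$. A minimal solver sketch (the function name is ours):

```python
import numpy as np

def stationary_distribution(G):
    """Solve pi G = 0 subject to sum(pi) = 1, via least squares."""
    n = G.shape[0]
    # pi G = 0 is G^T pi^T = 0; stack the normalization row onto G^T.
    A = np.vstack([G.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state chain with lambda = 2 (illustrative value):
G = np.array([[-2.0,  2.0],
              [ 2.0, -2.0]])
print(stationary_distribution(G))  # [0.5 0.5]
```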
Example
The generator matrix for the continuous-time Markov chain of Example 11.17 is given by \begin{align*} G= \begin{bmatrix} -\lambda & \lambda \\[5pt] \lambda & -\lambda \\[5pt] \end{bmatrix}. \end{align*} Find the stationary distribution for this chain by solving $\pi G=0$.
- Solution
- We obtain \begin{equation} \nonumber \pi G = [\pi_0, \pi_1] \begin{bmatrix} -\lambda & \lambda \\[5pt] \lambda & -\lambda \\[5pt] \end{bmatrix} = 0, \end{equation} which results in \begin{align*} \pi_0=\pi_1. \end{align*} We also need \begin{align*} \pi_0+\pi_1=1. \end{align*} Solving these equations, we obtain \begin{align*} \pi_0=\pi_1=\frac{1}{2}. \end{align*}
Transition Rate Diagram:
A continuous-time Markov chain can be represented by its transition rate diagram, in which the values $g_{ij}$ are shown on the edges. The diagonal values $g_{ii}$ are usually not shown, because they are implied by the off-diagonal values: $$g_{ii}=-\sum_{j \neq i} g_{ij}.$$ For example, Figure 11.24 shows the transition rate diagram for the following generator matrix: \begin{equation} G = \begin{bmatrix} -5 & 5 & 0 \\[5pt] 1 & -2 & 1\\[5pt] 3 & 1 & -4 \\[5pt] \end{bmatrix}. \hspace{30pt} (11.8)\end{equation}
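As another application, the solver idea above recovers the stationary distribution of the chain in Figure 11.24 directly from the generator matrix in Equation 11.8:

```python
import numpy as np

G = np.array([[-5.0,  5.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 3.0,  1.0, -4.0]])
# Solve pi G = 0 with sum(pi) = 1, as in the sketch above.
A = np.vstack([G.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # [0.21875 0.625 0.15625], i.e. [7/32, 5/8, 5/32]
```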
Source: https://www.probabilitycourse.com/chapter11/11_3_3_the_generator_matrix.php