Computing the Steady-State Vector of a Markov Chain
A Markov chain models a discrete-time system that, at each time period, is in one of a finite set of discrete states. The chain can be represented as a weighted digraph: the nodes represent the states, and the weight of the directed edge from state a to state b is the probability (called the transition probability from a to b) that the system will move to state b in the next time period, given that it is currently in state a. Because the system must be in some state at every step, the transition probabilities out of any node sum to 1. These probabilities are stored in a transition matrix P, where entry (i, j) is the transition probability from state i to state j; consequently, each row of P sums to 1.
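For instance, here is a minimal Maple sketch of storing and checking a transition matrix (the matrix values are hypothetical, not those of the case study below):

with(LinearAlgebra):

# A hypothetical 2-state transition matrix: entry (i, j) is the
# probability of moving from state i to state j in one time period.
P := Matrix([[0.9, 0.1],
             [0.5, 0.5]]):

# Each row must sum to 1, since the system always moves to some state.
seq(add(P[i, j], j = 1 .. 2), i = 1 .. 2);   # returns 1.0, 1.0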
A well-known theorem of Markov chains states that the probability that the system is in state j after k time periods, given that it begins in state i, is the (i, j) entry of P^k.
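Continuing with the hypothetical matrix above, these k-step probabilities can be read off directly in Maple using MatrixPower:

# Entry (i, j) of P^10 is the probability of being in state j
# after 10 time periods, given that the system started in state i.
P10 := MatrixPower(P, 10);
P10[1, 2];   # probability of moving from state 1 to state 2 in 10 steps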
A common question arising in Markov-chain models is: what is the long-term probability that the system will be in each state? The vector containing these long-term probabilities, denoted Pi, is called the steady-state vector of the Markov chain. It is characterized by two conditions: Pi . P = Pi (the distribution is unchanged by one more transition), and the entries of Pi sum to 1.
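For the hypothetical matrix above, these defining conditions can be solved directly in Maple; a minimal sketch (the vector is named pi_vec here because Pi is a protected constant in Maple):

# Unknown steady-state probabilities for the two states.
pi_vec := Vector[row]([p1, p2]):

# The steady-state distribution is unchanged by one more transition,
# and its entries sum to 1.
eqs := {seq((pi_vec . P)[j] = pi_vec[j], j = 1 .. 2), p1 + p2 = 1}:
solve(eqs, {p1, p2});   # approximately {p1 = 0.833, p2 = 0.167}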
This Maple application creates a procedure for answering this question. As a case study, we'll analyze a two-server computer network whose servers have known probabilities of going down or being fixed in any given hour. The goal is to compute the long-term probability that at least one server is working.
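A minimal sketch of what such a procedure might look like (an illustration assuming P has a unique steady-state vector; the name SteadyState and the use of solve are this sketch's choices, not necessarily the application's):

SteadyState := proc(P::Matrix)
    local n, p, vars, pi_vec, eqs, sol;
    n := LinearAlgebra:-RowDimension(P);
    vars := [seq(p[i], i = 1 .. n)];
    pi_vec := Vector[row](vars);
    # Solve pi . P = pi together with the probabilities summing to 1.
    eqs := {seq((pi_vec . P)[j] = pi_vec[j], j = 1 .. n),
            add(p[i], i = 1 .. n) = 1};
    sol := solve(eqs, convert(vars, set));
    # Return the steady-state probabilities as a list.
    return eval(vars, sol);
end proc:

Applied to the hypothetical matrix above, SteadyState(P) returns approximately [0.833, 0.167]; for the case study, P would be the transition matrix over the states of the two-server network.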