This is a sequel to my last post about the Umbrella Problem. As a Midwesterner, I know one of the problems with using an umbrella: it gets blown inside out, because it's really windy in the Midwest. As a result, umbrellas have a short lifespan. Here, I extend the Umbrella Problem to consider an umbrella being destroyed with probability q. (Only when the umbrella is used, of course. It still rains with probability p.)
In the Markov chain, let our state reflect the total number of umbrellas (this part is new) and the number of umbrellas available at the professor's current location (either home or her office). The Markov chain then has (U+1)(U+2)/2 states (T, A), where T is the total number of umbrellas, A is the number available at the current location, and 0 <= A <= T <= U. The transition probability matrix for U = 2 is:
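The state space is small enough to enumerate directly. A minimal sketch (the variable names are mine, not from the original):

```python
# Enumerate states (T, A): T = total umbrellas, A = umbrellas available
# at the professor's current location, with 0 <= A <= T <= U.
U = 2
states = [(t, a) for t in range(U + 1) for a in range(t + 1)]
print(states)       # [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
print(len(states))  # (U+1)(U+2)/2 = 6 states for U = 2
```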
            (0,0)   (1,0)   (1,1)   (2,0)   (2,1)   (2,2)
P = (0,0) [   1       0       0       0       0       0    ]  <-- no umbrellas available at all: this state is absorbing
    (1,0) [   0       0       1       0       0       0    ]  <-- in this state and in (2,0), we don't currently have any umbrellas to move
    (1,1) [  qp      1-p    (1-q)p    0       0       0    ]  <-- in this state, (2,1), and (2,2), we can move or lose an umbrella
    (2,0) [   0       0       0       0       0       1    ]
    (2,1) [   0       0      qp       0      1-p    (1-q)p ]  <-- a destroyed umbrella still leaves one at the destination, hence (1,1)
    (2,2) [   0      qp       0      1-p    (1-q)p    0    ]
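A minimal sketch of this matrix in code (numpy assumed; state order and entries as above, where from (2,1) a destroyed umbrella still leaves one umbrella at the destination):

```python
import numpy as np

# State order: (0,0), (1,0), (1,1), (2,0), (2,1), (2,2)
def umbrella_P(p, q):
    qp, keep, dry = q * p, (1 - q) * p, 1 - p   # break / survive use / no rain
    return np.array([
        [1,  0,   0,    0,   0,    0   ],  # (0,0) is absorbing
        [0,  0,   1,    0,   0,    0   ],  # (1,0): no umbrella on hand
        [qp, dry, keep, 0,   0,    0   ],  # (1,1)
        [0,  0,   0,    0,   0,    1   ],  # (2,0): no umbrella on hand
        [0,  0,   qp,   0,   dry,  keep],  # (2,1)
        [0,  qp,  0,    dry, keep, 0   ],  # (2,2)
    ])

P = umbrella_P(p=0.25, q=0.10)
assert np.allclose(P.sum(axis=1), 1.0)  # every row is a probability distribution
```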
This Markov chain has an absorbing state of (0,0), meaning that in the long-run, we will have zero umbrellas with certainty. Therefore, we cannot identify the steady state proportion of time the professor gets wet as we did before.
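You can see the absorption numerically by iterating the chain. A minimal sketch (numpy assumed; p and q values are just an example):

```python
import numpy as np

p, q = 0.25, 0.10
qp, keep, dry = q * p, (1 - q) * p, 1 - p
# State order: (0,0), (1,0), (1,1), (2,0), (2,1), (2,2)
P = np.array([
    [1,  0,   0,    0,   0,    0   ],
    [0,  0,   1,    0,   0,    0   ],
    [qp, dry, keep, 0,   0,    0   ],
    [0,  0,   0,    0,   0,    1   ],
    [0,  0,   qp,   0,   dry,  keep],
    [0,  qp,  0,    dry, keep, 0   ],
])

dist = np.zeros(6)
dist[5] = 1.0                 # start at (2,2): two umbrellas, both on hand
for _ in range(5000):
    dist = dist @ P
print(dist.round(6))          # essentially all mass ends up on (0,0)
```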
However, Midwesterners periodically replenish their umbrella supplies. This leads to a (u, U) inventory model: when the number of umbrellas dwindles down to a mere u, it is replenished back up to U. Let's consider a (0, 2) inventory model, which means that the professor immediately buys two umbrellas when her last umbrella is destroyed. The Markov chain is then ergodic (all states are in a single recurrent, aperiodic class). The only change is in the (0,0) row:
            (0,0)   (1,0)   (1,1)   (2,0)   (2,1)   (2,2)
P = (0,0) [   0       0       0       0       0       1    ]  <-- the professor buys two new umbrellas
    (1,0) [   0       0       1       0       0       0    ]
    (1,1) [  qp      1-p    (1-q)p    0       0       0    ]
    (2,0) [   0       0       0       0       0       1    ]
    (2,1) [   0       0      qp       0      1-p    (1-q)p ]
    (2,2) [   0      qp       0      1-p    (1-q)p    0    ]
Now we can analyze this Markov chain by solving for the limiting distribution π in the usual way. The probability that the professor gets wet is now p(π(0,0) + π(1,0) + π(2,0)). This assumes that when the umbrella breaks, the professor is able to stay dry. That may be a dubious assumption, but Midwesterners are crafty like that.
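The limiting distribution is a small linear solve: π P = π together with the probabilities summing to one. A minimal sketch (numpy assumed; state order as in the matrices above):

```python
import numpy as np

STATES = [(0, 0), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]

def inventory_P(p, q):
    """(0,2) inventory chain: (0,0) jumps straight to (2,2)."""
    qp, keep, dry = q * p, (1 - q) * p, 1 - p
    return np.array([
        [0,  0,   0,    0,   0,    1   ],  # replenish to two umbrellas
        [0,  0,   1,    0,   0,    0   ],
        [qp, dry, keep, 0,   0,    0   ],
        [0,  0,   0,    0,   0,    1   ],
        [0,  0,   qp,   0,   dry,  keep],
        [0,  qp,  0,    dry, keep, 0   ],
    ])

def stationary(P):
    """Solve pi = pi P subject to sum(pi) = 1 (least squares on the stacked system)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

p, q = 0.25, 0.10
pi = stationary(inventory_P(p, q))
# Wet when it rains and no umbrella is on hand: states (0,0), (1,0), (2,0).
wet = p * (pi[0] + pi[1] + pi[3])
print(dict(zip(map(str, STATES), pi.round(4))), "P(wet) =", round(wet, 4))
```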
A sensitivity analysis on p with q = 0.1 yields the following results:
A sensitivity analysis on q with p = 0.25 yields the following results:
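Both sweeps can be reproduced with the same linear solve as above. A minimal sketch (numpy assumed; the grid values are illustrative, not the ones used for the plots):

```python
import numpy as np

def inventory_P(p, q):
    qp, keep, dry = q * p, (1 - q) * p, 1 - p
    # State order: (0,0), (1,0), (1,1), (2,0), (2,1), (2,2)
    return np.array([
        [0,  0,   0,    0,   0,    1   ],  # (0,0): replenish to two umbrellas
        [0,  0,   1,    0,   0,    0   ],
        [qp, dry, keep, 0,   0,    0   ],
        [0,  0,   0,    0,   0,    1   ],
        [0,  0,   qp,   0,   dry,  keep],
        [0,  qp,  0,    dry, keep, 0   ],
    ])

def wet_prob(p, q):
    P = inventory_P(p, q)
    A = np.vstack([P.T - np.eye(6), np.ones(6)])
    pi, *_ = np.linalg.lstsq(A, np.r_[np.zeros(6), 1.0], rcond=None)
    return p * (pi[0] + pi[1] + pi[3])   # rain while no umbrella on hand

for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"p={p:.2f}, q=0.10: P(wet)={wet_prob(p, 0.10):.4f}")
for q in (0.05, 0.10, 0.25, 0.50):
    print(f"p=0.25, q={q:.2f}: P(wet)={wet_prob(0.25, q):.4f}")
```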