Back to the basics — gambler’s ruin

Recently, I needed to use a result that was, at its heart, a glorified gambler’s ruin problem. I had not looked at gambler’s ruin for a while, so I thought it would be fun to rederive the results. Gambler’s ruin refers to various scenarios, the most common of which is the following.

A gambler enters a casino with $n in cash and starts playing a game in which he wins with probability p and loses with probability q = 1 - p. The gambler plays the game repeatedly, betting $1 in each round. He leaves the game if his total fortune reaches $N or he runs out of money (he is ruined), whichever happens first. What is the probability that the gambler is ruined?

A gambler’s ruin can be modeled as a one-dimensional random walk in which we are interested in the hitting probability of the absorbing states. Calculating these probabilities is fairly straightforward. Let P_N(n) denote the probability that the gambler’s fortune reaches $N before he is ruined, given that his current fortune is $n. Then,

P_N(n) = p P_N(n+1) + q P_N(n-1)

which can be rewritten as

\displaystyle [P_N(n+1) - P_N(n)] = \left(\frac q p \right)[ P_N(n) - P_N(n-1)]

Since P_N(0) = 0, we have that

\displaystyle P_N(2) - P_N(1) = \left(\frac qp \right) P_N(1)

and similarly

\displaystyle P_N(3) - P_N(2) = \left(\frac qp \right) [P_N(2) - P_N(1)] = \left( \frac qp \right)^2 P_N(1)

Continuing this way, we get that

\displaystyle P_N(n) - P_N(n-1) = \left( \frac qp \right)^{n-1} P_N(1) .

and therefore, by adding the first n such terms, we get

\displaystyle P_N(n) = \sum_{k=0}^{n-1} \left( \frac qp \right)^k P_N(1).

Moreover, we know that

\displaystyle P_N(N) = \sum_{k=0}^{N-1} \left( \frac qp \right)^k P_N(1) = 1.

Therefore,

\displaystyle P_N(1) = \frac 1{\sum_{k=0}^{N-1} \left( \frac qp \right)^k} = \begin{cases} \frac { 1 - (q/p)}{\strut 1 - (q/p)^N}, & p \neq q \\ \frac 1N, & p = q \end{cases}.

Combining with the previous expression for P_N(n) we get,

\displaystyle P_N(n) = \begin{cases} \frac{ 1 - (q/p)^n} {\strut 1 - (q/p)^N}, & p \neq 1/2 \\ \frac{n}{N}, & p = 1/2 \end{cases}.

For ease of notation, let \lambda = q/p. Then, the probability of winning is

\displaystyle P_N(n) = \begin{cases} \frac{ 1 - \lambda^n} {\strut 1 - \lambda^N}, & \lambda \neq 1 \\ \frac{n}{N}, &\lambda = 1 \end{cases}.
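The telescoping argument above can be sanity-checked numerically: since the recurrence is linear and homogeneous once we impose P_N(0) = 0, the whole solution is proportional to the unknown P_N(1), so we can propagate a trial value and rescale at the end, exactly as the derivation does. Here is a minimal Python sketch (the function name and example parameters are my own):

```python
def hitting_probs(N, p):
    """Solve P(n) = p*P(n+1) + q*P(n-1) with P(0) = 0 and P(N) = 1.

    With P(0) = 0 the recurrence is linear and homogeneous, so the whole
    solution is proportional to the unknown P(1): propagate the trial
    value P(1) = 1 and rescale at the end so that P(N) = 1.
    """
    q = 1 - p
    P = [0.0, 1.0]  # P(0) = 0 and a trial value for P(1)
    for n in range(1, N):
        # P(n) = p*P(n+1) + q*P(n-1)  =>  P(n+1) = (P(n) - q*P(n-1)) / p
        P.append((P[n] - q * P[n - 1]) / p)
    return [x / P[N] for x in P]

# Compare against the closed form for an arbitrary example, p = 0.45.
N, p = 10, 0.45
lam = (1 - p) / p
numeric = hitting_probs(N, p)
closed = [(1 - lam**n) / (1 - lam**N) for n in range(N + 1)]
```

For p = 1/2 the same code returns n/N, since propagating from P(0) = 0, P(1) = 1 gives P(n) = n before the rescaling.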

This was simple, but I am also interested in the expected number of plays until the game stops.

Expected number of steps until stopping

Let C_N(n) denote the expected number of plays until the game stops, given that the player’s current fortune is $n. As before, we can write

\displaystyle C_N(n) = p C_N(n+1) + q C_N(n-1) + 1.

Rewriting this and observing that 1/p = 1 + \lambda, we get

\displaystyle C_N(n+1) - C_N(n) = \lambda [ C_N(n) - C_N(n-1)] - (1 + \lambda).

Rearranging terms, we have

\displaystyle C_N(n+1) - C_N(n) + \frac { 1 + \lambda} {1 - \lambda} = \lambda \left[ C_N(n) - C_N(n-1) + \frac {1+ \lambda}{1-\lambda} \right].

Since C_N(0) = 0, we have that

\displaystyle C_N(2) - C_N(1) + \alpha = \lambda[ C_N(1) + \alpha] .

where \alpha = (1+\lambda)/(1-\lambda). And similarly,

\displaystyle C_N(3) - C_N(2) + \alpha = \lambda[ C_N(2) - C_N(1) + \alpha] = \lambda^2 [ C_N(1) + \alpha] .

Continuing this way, we have that

\displaystyle C_N(n) - C_N(n-1) + \alpha = \lambda^{n-1}[ C_N(1) + \alpha] .

Adding the first n such terms, we get

\displaystyle C_N(n) = -n \alpha + \sum_{k=0}^{n-1} \lambda^k [ C_N(1) + \alpha] .

Moreover, we know that C_N(N) = 0. Thus,

\displaystyle [C_N(1) + \alpha ] = \frac{N \alpha}{\sum_{k=0}^{N-1} \lambda^k } = \frac {N (1+\lambda)}{1 - \lambda^N}.

Substituting this back into the expression for C_N(n), we get

\displaystyle C_N(n) = \frac{ (1+\lambda)}{(1 - \lambda)} \left[ N \frac{ (1 - \lambda^n)}{(1 - \lambda^N)} - n \right].

When \lambda = 1, the expression above is indeterminate; taking the limit as \lambda \to 1 (or solving the p = 1/2 recurrence directly) gives the simpler expression

\displaystyle C_N(n) = \left( \frac N2 \right)^2 - \left( n - \frac N2 \right)^2 = n(N-n).
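This derivation can also be cross-checked numerically. The recurrence for C_N is affine rather than linear in the unknown C_N(1), so a sketch can propagate two trial values of C_N(1) and take the affine combination that enforces C_N(N) = 0, a simple shooting method (code and names are my own):

```python
def expected_plays(N, p):
    """Solve C(n) = p*C(n+1) + q*C(n-1) + 1 with C(0) = C(N) = 0.

    The solution depends affinely on the unknown C(1), so propagate the
    recurrence for two trial values of C(1) and pick the affine
    combination that makes C(N) = 0 (a shooting method).
    """
    q = 1 - p

    def shoot(c1):
        # C(n+1) = (C(n) - q*C(n-1) - 1) / p, starting from C(0) = 0, C(1) = c1
        C = [0.0, c1]
        for n in range(1, N):
            C.append((C[n] - q * C[n - 1] - 1) / p)
        return C

    A, B = shoot(0.0), shoot(1.0)
    # C(N) as a function of the trial value t is A[N] + (B[N] - A[N]) * t
    t = -A[N] / (B[N] - A[N])
    return [a + (b - a) * t for a, b in zip(A, B)]

N = 10
fair = expected_plays(N, 0.5)      # closed form: n*(N - n)
biased = expected_plays(N, 0.45)   # closed form: (1+lam)/(1-lam) * (N*(1-lam^n)/(1-lam^N) - n)
```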

OK. Itch scratched. That feels better.
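For completeness, here is a Monte Carlo sketch (entirely my own; the parameters are arbitrary) that checks both closed forms end to end by actually playing the game:

```python
import random

def play(n, N, p, rng):
    """Play one game starting from fortune n; return (reached_N, number_of_plays)."""
    steps = 0
    while 0 < n < N:
        n += 1 if rng.random() < p else -1
        steps += 1
    return n == N, steps

def estimate(n, N, p, trials=100_000, seed=0):
    """Monte Carlo estimates of the winning probability and the expected duration."""
    rng = random.Random(seed)
    wins = total_steps = 0
    for _ in range(trials):
        won, steps = play(n, N, p, rng)
        wins += won
        total_steps += steps
    return wins / trials, total_steps / trials

n, N, p = 3, 10, 0.45
lam = (1 - p) / p
win_exact = (1 - lam**n) / (1 - lam**N)
dur_exact = (1 + lam) / (1 - lam) * (N * (1 - lam**n) / (1 - lam**N) - n)
win_mc, dur_mc = estimate(n, N, p)
```

With 100,000 trials the standard errors are a couple of orders of magnitude below the estimates, so agreement to two decimal places or so is expected.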


2 thoughts on “Back to the basics — gambler’s ruin”

  1. Pingback: A martingale solution to gambler’s ruin « Random Determinism

  2. The Martingale method in another article is great! However, I guess you made a mistake when calculating the expected number of steps in both the Markov Chain and the Martingale method.

    In the Markov Chain method, the second equation should be minus (1+\lambda) instead of plus, i.e.:
    \displaystyle C_N(n+1) - C_N(n) = \lambda [ C_N(n) - C_N(n-1)] - (1 + \lambda)

    Therefore, the final formula should be:
    \displaystyle C_N(n) = \frac{ (1+\lambda)}{(1 - \lambda)} \left[ -n + N \frac{ (1 - \lambda^n)}{(1 - \lambda^N)} \right]

    This makes sense because in the Martingale method,

    E[\sum_{t=1}^{\tau} Z_t] = E[X_\tau - X_0] = \{N \cdot P_N(n) + 0 \cdot [1 - P_N(n)]\} - n
