Effective probability when there is a guarantee after X attempts

2 Answers

One slightly brute-force way is to write out the full Markov chain with 120 states. Each state has a 99% chance of moving to the next and a 1% chance of returning to the start (including the start itself), except the final state, which resets to the start with probability 100%. Then calculate the expected number of visits to the start state. The transition matrix is ergodic, so there's no problem computing the eigenvector for its largest eigenvalue (which is 1); that eigenvector is the stationary distribution (the fraction of time spent in each state), and inverting its first entry gives the visitation frequency.
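To make the brute-force version concrete, here's a sketch (not from the original answer) that builds the 120 × 120 transition matrix in plain Python and power-iterates to the stationary distribution, assuming a 1% win chance resets the counter and the 120th draw is a guaranteed win:

```python
# Brute-force Markov chain: states 0..119 count consecutive losses.
# A win (prob 1%) resets to state 0; a loss (prob 99%) advances;
# state 119 resets with certainty (the pity win on draw 120).
N, p = 120, 0.01

# Transition matrix T[i][j] = P(next state is j | current state is i)
T = [[0.0] * N for _ in range(N)]
for i in range(N - 1):
    T[i][0] = p          # win: back to the start (state 0 can loop to itself)
    T[i][i + 1] = 1 - p  # loss: one step closer to the pity
T[N - 1][0] = 1.0        # guaranteed win after 119 straight losses

# Power iteration: pi <- pi T converges to the stationary distribution
# (T is stochastic, so pi stays a probability vector automatically).
pi = [1.0 / N] * N
for _ in range(2000):
    pi = [sum(pi[i] * T[i][j] for i in range(N)) for j in range(N)]

print(1 / pi[0])      # draws between visits to the start, ~70.06
print(1 / pi[N - 1])  # draws between pity wins, ~231.68
```

The full matrix is overkill (it's almost all zeros), which is exactly the observation the next paragraph exploits.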

Instead of creating a big 120 × 120 matrix, most of whose entries are zero, you can do the same calculation with a bit of thought. In the stationary distribution, each state's fraction of time must be 99% of the previous state's, except the first, which we'll call x. So you have 120 values, x + 0.99x + 0.99^(2)x + ... + 0.99^(119)x, adding up to 1. Factor to x(1 + 0.99 + 0.99^(2) + ... + 0.99^(119)) = 1, then use the formula for the sum of a geometric series to get x(1 - 0.99^(120))/(1 - 0.99) = 1, i.e. 1/x = (1 - 0.99^(120))/(1 - 0.99) ≈ 70.06.

For the time between visits to the start, we're looking for 1/x, so we're done: about 70.06 draws per win. The effective probability of getting a prize on each step is just the probability of being in the first state, x ≈ 1.43%. The time between visits to the last state (and therefore between pity prizes/resets) is 1/(fraction of time spent there) = 1/(0.99^(119)x) ≈ 231.68.
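The closed-form numbers above take only a few lines to verify (a sketch of my own, using p = 0.01 and 120 states as in the question):

```python
# Geometric-series closed form: x is the stationary fraction of time
# spent in the start state, with p = 1% and a pity win on draw 120.
p, n = 0.01, 120
x = (1 - (1 - p)) / (1 - (1 - p) ** n)

print(1 / x)                          # draws between wins, ~70.06
print(x)                              # effective win probability, ~1.43%
print(1 / ((1 - p) ** (n - 1) * x))   # draws between pity wins, ~231.68
```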
I've spent a bit of time thinking about this, and I think it's actually a rather tricky problem. Suppose we win with probability p, unless we lost all of the previous k rounds, in which case we win with probability 1. (So in your hypothetical problem, p = 0.01 and k = 119.) We can derive a rather ugly recurrence relation for the probability a\_n of winning on round n, for n > k+1:

a\_n = pa\_(n-1) + (1-a\_(n-1))(pa\_(n-2) + (1-a\_(n-2))(pa\_(n-3) + (1-a\_(n-3))( ... (1-a\_(n-k+1))(pa\_(n-k) + 1\*(1-a\_(n-k))) ... )))

For n < k+1, it's pretty easy since everything is still independent: a\_n = p. And for n = k+1, it's not bad: we get

a\_(k+1) = 1 \* (1-p)^(k) + p(1 - (1-p)^(k)).

But I have no idea how to get a closed form for these, much less how to compute the long-run "expected probability of winning." To define this meaningfully, I think we'd need to formalize the n-th attempt in the lottery as a Bernoulli random variable X\_n that is equal to 1 with probability a\_n and 0 with probability 1-a\_n, but with dependence on the previous X\_i: X\_n is 1 with probability p provided that X\_(n-1) through X\_(n-k) were not all 0; if X\_(n-1) through X\_(n-k) were all 0, then X\_n is 1 with probability 1. Then we'd be interested in the limit

limit_(n->infinity) ( (sum from i = 1 to n of X\_i) / n)

But this is hard to compute; ordinarily we'd use a law of large numbers, but these X\_i are neither independent nor identically distributed, so while there may be some LLN that applies, I certainly do not know about it.
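Even without an applicable LLN in hand, one can at least probe the limit empirically. A rough Monte Carlo sketch of the X\_i defined above (my own, with p = 0.01 and k = 119 from the question) gives an empirical average that settles near the 1.43% figure from the other answer:

```python
import random

# Simulate the dependent sequence X_1, X_2, ...: each draw wins with
# probability p, except that after k consecutive losses the next draw
# is a guaranteed win. Estimates the long-run average of the X_i.
random.seed(0)
p, k = 0.01, 119
n = 1_000_000
wins = 0
losses = 0  # current run of consecutive losses
for _ in range(n):
    if losses == k or random.random() < p:
        wins += 1
        losses = 0
    else:
        losses += 1

print(wins / n)  # empirical win rate, close to 0.0143
```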

Perhaps this question is sophisticated enough to get interesting answers (or references to answers) on math stack exchange—you might want to try posting it there?
