One slightly brute-force-y way is to write out the full Markov chain with 120 states. Each state has a 99% chance of going to the next and a 1% chance of going back to the start (including the start itself), except the final one, which has a 100% chance of resetting to the start. Then calculate the expected time between visits to the start state. The transition matrix is ergodic, so we have no problem computing the eigenvector for its largest eigenvalue (which is 1); that gives the stationary distribution (the fraction of time spent in each state), and taking the reciprocal of its first entry gives the expected time between visits.
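If you want to see that computation concretely, here's a minimal numpy sketch (the chain is exactly as described above; N, p and the other names are just for illustration):

```python
import numpy as np

N = 120   # number of states
p = 0.01  # per-step prize probability

# Row-stochastic transition matrix: P[i, j] = chance of moving i -> j.
P = np.zeros((N, N))
for i in range(N - 1):
    P[i, 0] = p          # prize: reset to the start state
    P[i, i + 1] = 1 - p  # no prize: advance to the next state
P[N - 1, 0] = 1.0        # final state: guaranteed pity reset

# The stationary distribution is the left eigenvector of P for
# eigenvalue 1, i.e. the right eigenvector of P transposed.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()  # normalize so the entries sum to 1

print(1 / pi[0])   # expected steps between prizes      -> ~70.06
print(1 / pi[-1])  # expected steps between pity resets -> ~231.68
```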
Instead of building a big 120 x 120 matrix that's mostly zeros, you can do the same calculation with a bit of thinking. Each state after the first can only be entered from the one before it, with probability 99%, so in the stationary distribution each state's share must be 99% of the previous state's share; call the first state's share x. That gives 120 values, x + 0.99x + 0.99^(2)x + ... + 0.99^(119)x, which must add up to 1. Factor to x(1 + 0.99 + 0.99^(2) + ... + 0.99^(119)) = 1, then use the formula for the sum of a geometric series to get x(1-0.99^(120))/(1-0.99) = 1, or 1/x = (1-0.99^(120))/(1-0.99) = approximately 70.06.
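That closed form needs no matrix at all, so the first number above is a one-liner:

```python
x = (1 - 0.99) / (1 - 0.99 ** 120)  # start state's share of time
print(1 / x)                        # ~70.06 steps between prizes
```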
For the time between visits we want 1/x, so we're done: a prize roughly every 70.06 steps. Put differently, the effective probability of getting a prize on any given step is just the probability of being in the first state, x, or about 1.43%. The time between visits to the last state (and therefore between pity prizes/resets) is 1/(fraction of time spent there) = 1/(0.99^(119)x) = approx 231.68.
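As a sanity check, a quick Monte Carlo run of the same rules (1% prize chance per step, guaranteed reset on the 120th state; takes a few seconds in pure Python) lands on both numbers:

```python
import random

random.seed(0)  # reproducible run
steps = 10_000_000
state = prizes = pities = 0
for _ in range(steps):
    if state == 119:              # final state: guaranteed pity prize
        pities += 1
        prizes += 1
        state = 0
    elif random.random() < 0.01:  # ordinary 1% prize, back to the start
        prizes += 1
        state = 0
    else:                         # no prize: advance to the next state
        state += 1

print(steps / prizes)  # ~70.06  steps per prize
print(steps / pities)  # ~231.68 steps per pity reset
```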