I'm mulling over how to approach a project I've been assigned at work involving a manufacturing process. In this process, there are three or more kinds of warnings that are tallied on a month-to-month basis. We label them by grades of 1, 2, 3, and so forth in order of severity.
What is of interest to stakeholders is whether a fitted month-to-month trend line in event counts exceeds some absolute threshold. The count threshold differs by severity (e.g. the monthly threshold is lower for Grade 3 events than for Grade 1 events). For example, company leadership wants the process flagged if, in some month, the modeled trend line crosses a threshold of, say, >= 100 Grade 1 events, >= 20 Grade 2 events, and/or >= 10 Grade 3 events.
Since this is ordinal data, my first thought is an ordinal logit/probit model. However, stakeholders currently care less about the changing proportion of events per month than about predicted absolute counts of each grade (though they are interested in proportion changes later on), so I was thinking it may be more appropriate to partition the data by grade into univariate time series and start by running a Poisson time series regression for each grade. If over-dispersion seemed present, I was thinking of opting for a generalized Poisson regression instead, as the negative binomial's interpretation in terms of binary failures and successes makes less sense here, given that there are multiple "failure" outcomes across all the warning types.
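To make the idea concrete, here is a minimal sketch of the per-grade approach I have in mind: fit a log-linear Poisson trend (count ~ Poisson(exp(b0 + b1*t))) to one grade's monthly counts via IRLS, compute the Pearson dispersion statistic as an over-dispersion check, and flag future months where the fitted trend crosses that grade's threshold. This is a hand-rolled illustration with made-up numbers, not production code (in practice I'd use a GLM library); the function names and the 6-month horizon are my own assumptions.

```python
import numpy as np

def fit_poisson_trend(counts):
    """Fit count ~ Poisson(exp(b0 + b1*t)) by IRLS.

    Returns (beta, dispersion), where dispersion is the Pearson
    chi-square statistic divided by residual degrees of freedom
    (values well above 1 suggest over-dispersion).
    """
    y = np.asarray(counts, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.zeros(2)
    for _ in range(100):
        mu = np.exp(X @ beta)
        W = mu                          # Poisson working weights (log link)
        z = X @ beta + (y - mu) / mu    # working response
        beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < 1e-10:
            beta = beta_new
            break
        beta = beta_new
    mu = np.exp(X @ beta)
    pearson = np.sum((y - mu) ** 2 / mu)
    dof = len(y) - X.shape[1]
    return beta, pearson / dof

def months_over_threshold(counts, threshold, horizon=6):
    """Project the fitted trend `horizon` months ahead and return the
    month indices where the predicted mean crosses `threshold`,
    along with the dispersion estimate."""
    beta, dispersion = fit_poisson_trend(counts)
    t_future = np.arange(len(counts), len(counts) + horizon)
    mu_future = np.exp(beta[0] + beta[1] * t_future)
    flagged = [int(m) for m, mu in zip(t_future, mu_future) if mu >= threshold]
    return flagged, dispersion

# Hypothetical Grade 3 monthly counts with an upward trend;
# flag future months where the trend is predicted to reach 10 events.
grade3 = [1, 2, 2, 3, 3, 4, 5, 6, 7, 8]
flagged, dispersion = months_over_threshold(grade3, threshold=10)
```

Each grade would get its own fit and its own threshold, which is the "partition by grade" step; a dispersion estimate much larger than 1 would be the trigger to move to a generalized Poisson (or other over-dispersed) model.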
Is this approach appropriate or very misguided? I appreciate your time and input, thank you very much.