A client has asked us to calculate the following.

For each of the approx. **100** dependent variables...

1. Calculate the main effect of the survey wave on the dependent variable... for each subgroup (e.g. men, women, people with high education, people with low education etc.). Per dependent variable, this would be **40** tests.
2. If significant, calculate post-hoc tests between all pairs of the 8 survey waves (**28** tests in total).

Obviously, this makes no sense whatsoever, but since they are the client and don't understand why it is nonsensical, I want to make it at least as correct as possible. So I want to correct for the family-wise error rate.

I will probably go with the false discovery rate, but since it's easier to demonstrate with Holm-Bonferroni...: I know that if I only had 1 dependent variable, the corrected level with plain Bonferroni would be 0.05/40 = 0.00125 for the main tests and 0.00125/28 = \*very small\* for the post-hoc tests.
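For reference, the Holm-Bonferroni step-down procedure mentioned above can be sketched in a few lines of plain Python. The p-values here are made-up illustration values, not from the actual study:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down: return booleans, True where H0 is rejected."""
    m = len(pvals)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank).
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

# Hypothetical p-values for 5 tests:
print(holm_bonferroni([0.0001, 0.004, 0.019, 0.095, 0.201]))
# -> [True, True, False, False, False]
```

Note that the smallest p-value is compared against 0.05/5, the next against 0.05/4, and so on, which is why Holm is uniformly more powerful than plain Bonferroni while still controlling the family-wise error rate.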

What I'm unsure about is how to account for the fact that I'm conducting the same procedure on about 100 different dependent variables. **Do I have to take this into account when adjusting the p-values, and if so, how exactly?**

Thanks for helping me make a nonsensical endeavour at least somewhat statistically sound.
This does seem like data mining. Do p-values even make sense in this context? Does the client really want to perform all of those hypothesis tests, or is this just an attempt to "find" statistically significant relationships? Without context about the goal of the study, it's hard to gauge what the family-wise error rate should be (or whether p-values should be used at all).
I recognize the situation. I treat it as a large-scale simultaneous hypothesis testing problem: I use the Benjamini-Hochberg procedure to control the FDR, and the FCR (false coverage rate) method for the confidence intervals.

But agree that it is far from an ideal situation.
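The Benjamini-Hochberg step-up procedure mentioned above can be sketched in plain Python; again, the p-values are invented for illustration. In practice one would use a library routine (e.g. `statsmodels.stats.multitest.multipletests` or R's `p.adjust`) rather than hand-rolling it:

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up: return booleans, True where H0 is rejected at FDR level q."""
    m = len(pvals)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # Reject all hypotheses whose p-value is among the k_max smallest.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Hypothetical p-values for 4 tests:
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.2]))
# -> [True, True, False, False]
```

Unlike Holm, this is a step-up procedure: every hypothesis up to the largest rank that passes its threshold is rejected, even if some intermediate one failed its own comparison. It controls the expected proportion of false discoveries rather than the probability of any false rejection, which is usually the more defensible target with thousands of tests.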