Endogeneity leads to biased estimates, i.e. your statistical analysis/inference can become invalid. That said, the usual culprit is omitted variable bias, so you can often attempt to resolve the issue by finding and including the confounding variables.
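Here's a quick toy simulation (variable names and numbers are purely illustrative, not from any real analysis) showing how omitting a confounder biases the coefficient you do estimate:

```python
# Toy simulation of omitted variable bias (all values illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

z = rng.normal(size=n)               # confounder: drives both x and y
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Omit z: the coefficient on x soaks up part of z's effect and is biased.
biased = sm.OLS(y, sm.add_constant(x)).fit()
# Include the confounder: the coefficient on x comes back close to 2.0.
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print(biased.params[1])  # noticeably above 2.0
print(full.params[1])    # ~2.0
```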
Multicollinearity doesn’t make your inference invalid; you just have a hard time getting statistical significance (or coefficients with the correct sign) in the first place. There are some workarounds, like aggregating collinear variables or taking a ratio of collinear variables. However, if you want to keep all of your input variables as-is in the model, then the only real solution is to get more observations from the data generating process (i.e. collect more samples).
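A small sketch of that point (again with made-up numbers): two nearly identical predictors blow up each other's standard errors, and the thing that reliably shrinks them back down is more data:

```python
# Sketch: collinear predictors inflate standard errors; more samples shrink them.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def fit_once(n):
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)   # x2 is nearly a copy of x1
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    X = sm.add_constant(np.column_stack([x1, x2]))
    return sm.OLS(y, X).fit()

for n in (100, 10_000):
    res = fit_once(n)
    print(n, res.bse[1:])  # standard errors on x1, x2 fall roughly like 1/sqrt(n)
```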
Whether endogeneity or multicollinearity is the more problematic issue will depend on what you are trying to do. Personally, the work I do is generally post-hoc analysis of historical data, so getting more observations is usually impossible; hence I usually have a harder time resolving multicollinearity issues in my variables.
But often you might realize you have omitted variable bias, yet it’s not feasible to get data/observations on the omitted variable(s). For example, if you’re doing market research, you might realize you need to control for competitor spending, but this data is generally very hard to obtain. In such cases, omitted variable bias can be the more problematic one.
So in true data science/statistics fashion, the answer is “it depends” :)
As for the 2nd question: the answer is “yes”. If all you are trying to do is minimize Mean Squared Error (i.e. get the most accurate predictions), then you don’t actually care about the estimated coefficients or the standard errors. In that case neither endogeneity (biased estimates) nor multicollinearity (large standard errors) is an issue, since you’re not even looking at those things. For prediction, all you care about is the model’s ability to output a number on unseen data, so what matters most is finding variables that get you the lowest prediction error on unseen data (usually measured with some kind of cross-validation score).
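To make that concrete, here's a minimal sketch (hypothetical data, using scikit-learn) where the predictors are heavily collinear, so the individual coefficients are unstable, and yet the cross-validated prediction error is perfectly fine:

```python
# Sketch: judge a predictive model by held-out error, not by its coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1_000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # heavily collinear with x1
y = x1 + x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

# Cross-validated MSE is what matters for prediction; the split of the effect
# between the two coefficients is unstable, but their combined effect is captured.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=5)
print(-scores.mean())  # close to the noise variance (~1.0)
```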