I was reading through the definition of MSE, and the formula I found in all articles is the one shown in https://en.wikipedia.org/wiki/Mean_squared_error : the expected squared difference between the estimated values (y hat) and the true values (y). So far so good.
However, when looking at the MSE in a regression setting, I have seen various articles stating that the term MSE is sometimes used to refer to the unbiased estimate of error variance, in which case the denominator is not the sample size n but rather the degrees of freedom.
I agree that the formula for the unbiased estimator of the error variance is the one divided by the degrees of freedom, but my question is: when comparing regression models with the test MSE, should I use the formula with the sample size (n) in the denominator, or the degrees of freedom (n - p - 1)? Also, do you know whether different packages in R use different formulas?
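To make the two conventions concrete, here is a minimal sketch (in Python, with made-up data) of a simple linear regression with one predictor (p = 1), computing the sum of squared residuals and then dividing once by n and once by n - p - 1 — the two "MSE" definitions the question contrasts:

```python
# Toy data for illustration only (assumed, not from any real dataset).
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
n, p = len(x), 1  # n observations, p = 1 predictor

# Ordinary least-squares fit of y = b0 + b1 * x (closed-form solution).
xbar = sum(x) / n
ybar = sum(y) / n
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
     sum((xi - xbar) ** 2 for xi in x)
b0 = ybar - b1 * xbar

# Sum of squared residuals.
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))

mse_n = sse / n              # "plain" MSE: denominator is the sample size n
mse_df = sse / (n - p - 1)   # unbiased error-variance estimate: denominator n - p - 1

print(mse_n, mse_df)
```

Since n - p - 1 < n, the degrees-of-freedom version is always the larger of the two on the same residuals; for model comparison on a common test set, the ranking of models is unaffected as long as the same convention is used throughout.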