# Statistical inefficiency = bias, or, Increasing efficiency will reduce bias (on average), or, There is no bias-variance tradeoff

Statisticians often talk about a bias-variance tradeoff, comparing a simple unbiased estimator (for example, a difference in differences) to something more efficient but possibly biased (for example, a regression). There’s commonly the attitude that the unbiased estimate is a better or safer choice. My only point here is that, by using a less efficient estimate, we are generally choosing to estimate fewer parameters: for example, estimating an average incumbency effect over a 40-year period rather than a separate effect for each year or each decade, or estimating an overall effect of a treatment rather than separate estimates for men and women. If we make this seemingly conservative choice not to estimate interactions, we are implicitly estimating those interactions to be zero, which is not unbiased at all!
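To make this concrete, here is a toy simulation (all numbers are hypothetical, chosen only for illustration): suppose the true treatment effect is 1.0 for men and 3.0 for women. Estimating the two effects separately is unbiased but noisy; pooling the groups (implicitly setting the interaction to zero) cuts the variance but is biased for each subgroup.

```python
import random

random.seed(0)

# Hypothetical truth: the treatment effect differs by group,
# so the interaction is real.
TRUE_EFFECT = {"men": 1.0, "women": 3.0}
N_PER_GROUP = 20      # small samples, where variance hurts
NOISE_SD = 5.0
N_SIMS = 2000

def simulate_once():
    # Group-specific estimate: truth plus sampling noise with
    # standard error NOISE_SD / sqrt(N_PER_GROUP)
    est = {g: TRUE_EFFECT[g] + random.gauss(0, NOISE_SD / N_PER_GROUP**0.5)
           for g in TRUE_EFFECT}
    pooled = sum(est.values()) / 2   # interaction implicitly set to zero
    return est["women"], pooled

draws = [simulate_once() for _ in range(N_SIMS)]
sep_w = [s for s, _ in draws]       # separate estimate for women
pool_w = [p for _, p in draws]      # pooled estimate, used for women

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Evaluate both as estimators of the effect for women (truth = 3.0)
for name, xs in [("separate", sep_w), ("pooled", pool_w)]:
    bias = mean(xs) - TRUE_EFFECT["women"]
    print(f"{name:8s} bias = {bias:+.2f}, variance = {var(xs):.2f}")
```

With these made-up numbers, the separate estimator has bias near zero but roughly twice the variance of the pooled estimator, whose bias is about -1 (it drags the women's effect halfway toward the men's). Neither choice escapes the tradeoff; pooling just hides the bias inside the model.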

I’m not saying that there are any easy answers to this; for example, see here for one of my struggles with interactions in an applied problem—in this case (estimating the effect of incentives in sample surveys), we were particularly interested in certain interactions even though they could not be estimated precisely from data.

Usually we try to find the best estimate within a class of unbiased estimators, searching for the one with the lowest variance. Sometimes we may then try to reduce this variance further at the cost of sacrificing some unbiasedness. So a tradeoff always remains; we never reach a point where we can declare one estimate the best.
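The tradeoff described in the comment above is usually formalized through the standard mean squared error decomposition (a textbook identity, not specific to this post):

$$\mathrm{MSE}(\hat\theta) = \mathbb{E}\big[(\hat\theta - \theta)^2\big] = \mathrm{Bias}(\hat\theta)^2 + \mathrm{Var}(\hat\theta)$$

so accepting a small bias can lower total error whenever it buys a sufficiently large reduction in variance.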

I'd put it this way: there's a tradeoff between statistical bias and statistical variance, but not "bias" the way we instinctively think of bias, as a difference between a specific estimate and reality. This also relates to the classical vs. Bayesian dispute over whether to care about the average properties of an estimator, or the particular estimate we got.