Statisticians often talk about a bias-variance tradeoff, comparing a simple unbiased estimator (for example, a difference in differences) to something more efficient but possibly biased (for example, a regression). The common attitude is that the unbiased estimate is the better or safer choice. My only point here is that, by using a less efficient estimate, we are generally choosing to estimate fewer parameters: for example, estimating an average incumbency effect over a 40-year period rather than a separate effect for each year or each decade, or estimating an overall effect of a treatment rather than separate effects for men and women. If we make this seemingly conservative choice not to estimate interactions, we are implicitly estimating those interactions at zero, which is not unbiased at all!
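To make this concrete, here is a minimal simulation sketch with made-up numbers (the effect sizes, noise level, and sample sizes are all hypothetical): the true treatment effect is 2.0 for men and 4.0 for women, so the true interaction is 2.0. The pooled estimate has lower variance than the subgroup estimates, but as an estimate of the effect for either group it is biased, and declining to estimate the interaction amounts to estimating it at zero.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: true effect is 2.0 for men, 4.0 for women,
# so the true interaction (women minus men) is 2.0.
EFFECT_MEN, EFFECT_WOMEN, NOISE, N = 2.0, 4.0, 5.0, 100

def one_study():
    men = [EFFECT_MEN + random.gauss(0, NOISE) for _ in range(N)]
    women = [EFFECT_WOMEN + random.gauss(0, NOISE) for _ in range(N)]
    pooled = statistics.mean(men + women)   # one overall effect, no interaction
    men_only = statistics.mean(men)         # separate estimate for men
    interaction = statistics.mean(women) - men_only
    return pooled, men_only, interaction

draws = [one_study() for _ in range(2000)]
pooled, men_only, interaction = (list(col) for col in zip(*draws))

# The pooled estimate is more stable (lower sampling variance) than
# the subgroup estimate, since it uses twice the data.
print(statistics.stdev(pooled) < statistics.stdev(men_only))

# But as an estimate of the effect *for men* it is biased upward by
# about 1.0, and ignoring the interaction "estimates" it at 0 when
# the true value here is 2.0.
print(statistics.mean(pooled))       # near 3.0, not 2.0
print(statistics.mean(interaction))  # near 2.0, not 0
```

The point of the sketch is not that pooling is wrong; it is that the lower-variance pooled estimator buys its efficiency by silently setting the interaction to zero, which is itself a (possibly large) bias.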
Usually we look for the best estimator within the class of unbiased estimators, searching for the one with the lowest variance. Sometimes we can reduce that variance further at the cost of accepting some bias. So the tradeoff always remains; we can never declare one choice best once and for all.
I'd put it this way: there's a tradeoff between statistical bias and statistical variance, but "bias" here is not bias the way we instinctively think of it, as the difference between a specific estimate and reality. This also relates to the classical vs. Bayesian dispute over whether to care about the average properties of an estimator or about the particular estimate we got.