The bias-variance tradeoff refers to the problem of minimizing two different sources of error when training a supervised learning model:
Bias - Bias is a consistent, systematic error, typically caused by the learning algorithm making overly simplistic or incorrect assumptions about the training data. High bias is often related to underfitting.
Variance - Variance comes from high sensitivity to small fluctuations in the training data. High variance is often related to overfitting.
It is typically difficult to minimize bias and variance simultaneously: for squared-error loss, a model's expected test error decomposes into squared bias, variance, and irreducible noise, and reducing one of the first two terms usually increases the other (for example, increasing model complexity tends to lower bias but raise variance), as the sketch below illustrates.
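The tradeoff can be made concrete empirically. Here is a minimal sketch, assuming NumPy is available, that refits polynomials of a low and a high degree on many independently drawn noisy training sets and estimates each model's squared bias and variance at a fixed grid of test points. The target function, polynomial degrees, noise level, and sample sizes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    """The underlying function the model is trying to learn."""
    return np.sin(np.pi * x)

x_grid = np.linspace(-1, 1, 50)        # fixed points where each fit is evaluated
n_datasets, n_points, noise_sd = 200, 40, 0.3

for degree in (1, 10):                 # low degree -> high bias; high degree -> high variance
    preds = np.empty((n_datasets, x_grid.size))
    for i in range(n_datasets):
        # Draw a fresh noisy training set from the same distribution.
        x = rng.uniform(-1, 1, n_points)
        y = true_fn(x) + rng.normal(0, noise_sd, n_points)
        coeffs = np.polyfit(x, y, degree)      # least-squares polynomial fit
        preds[i] = np.polyval(coeffs, x_grid)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_fn(x_grid)) ** 2)  # squared bias, averaged over x
    variance = preds.var(axis=0).mean()                    # prediction spread across datasets
    print(f"degree {degree:2d}: bias^2 = {bias_sq:.4f}, variance = {variance:.4f}")
```

Running this will typically show the degree-1 model with high squared bias and low variance (its predictions are consistently wrong in the same way), and the degree-10 model with the reverse (its predictions track each particular training set's noise).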