Use this tag for questions related to overfitting, a modeling error (especially of sampling error) in which improvements to model fit statistics come from fitting sample-specific noise rather than replicable and informative relationships among variables, reducing parsimony and worsening explanatory and predictive validity.
Models that involve complex polynomial functions or too many independent variables may fit a particular sample's covariance structure overly well, such that some existing (and any additional) terms increase model fit by modeling sampling error rather than systematic covariance that is likely to replicate or to represent theoretically useful relationships. When such a model is used to predict other data (e.g., future outcomes or out-of-sample observations), overfitting increases prediction error.
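The polynomial case above can be sketched in a few lines. In this hypothetical example (all data and degrees are illustrative assumptions, not from the source), the true relationship is linear, but a high-degree polynomial fitted to a small noisy sample achieves a lower in-sample error while deviating more from the truth out of sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the true relationship is y = 2x, observed with
# noise in a small training sample.
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(scale=0.3, size=x_train.size)

# Evaluate each fit against the noiseless truth on a dense grid,
# as a stand-in for out-of-sample data.
x_grid = np.linspace(0, 1, 200)
y_true = 2 * x_grid

for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_grid) - y_true) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The degree-9 fit necessarily has lower training error (its coefficient space contains the linear model's), but it chases the sampling noise, so its error against the true relationship is typically larger than the straight line's.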
The Wikipedia page offers illustrations, lists of potential solutions, and special treatment of the topic as it relates to machine learning.
See also:
Leinweber, D. J. (2007). Stupid data miner tricks: Overfitting the S&P 500. The Journal of Investing, 16(1), 15–22. [PDF]
Tetko, I. V., Livingstone, D. J., & Luik, A. I. (1995). Neural network studies. 1. Comparison of overfitting and overtraining. Journal of Chemical Information and Computer Sciences, 35(5), 826–833. [doi]