Variance in artificial intelligence refers to the variability or sensitivity of a model's predictions to changes in the training data. It measures how much the model's output fluctuates when trained on different subsets of data.
Synonyms: model variance, prediction variance, variance in machine learning, variance in AI models

Variance is crucial because it helps determine a model's ability to generalize to new, unseen data. High variance indicates that the model is overfitting, meaning it performs well on training data but poorly on test data. Understanding variance helps in building robust AI models that balance accuracy and generalization.
Variance is used to diagnose and improve machine learning models. By analyzing variance, data scientists can decide whether to simplify the model, gather more training data, or apply regularization techniques to reduce overfitting and improve model performance.
For example, a complex neural network trained on a small dataset might have high variance, showing very different results when trained on different data samples. Conversely, a simpler model like linear regression might have low variance but higher bias.
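This contrast can be demonstrated with a small resampling experiment, using polynomial degree as a stand-in for model complexity (a degree-1 fit plays the role of the simple linear model, a degree-12 fit the role of the complex one; the data-generating function and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=30):
    """Draw a fresh noisy training sample from a fixed underlying function."""
    x = rng.uniform(-1, 1, size=n)
    y = np.sin(3 * x) + rng.normal(0, 0.3, size=n)
    return x, y

x_test = np.linspace(-1, 1, 50)  # fixed evaluation points

def prediction_variance(degree, trials=200):
    """Mean pointwise variance of predictions across resampled training sets."""
    preds = []
    for _ in range(trials):
        x, y = make_data()
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x_test))
    return np.var(np.stack(preds), axis=0).mean()

var_simple = prediction_variance(degree=1)    # simple model
var_complex = prediction_variance(degree=12)  # flexible model
print(f"degree-1 variance: {var_simple:.3f}, degree-12 variance: {var_complex:.3f}")
```

The flexible degree-12 fit chases the noise in each particular sample, so its predictions swing widely from one training set to the next (high variance), while the linear fit barely moves but systematically misses the curvature (low variance, higher bias).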