📈 Bias–Variance Tradeoff Explorer

Fit polynomial regression to a noisy sine wave. Explore how model complexity (degree) affects training and validation error, and find the sweet spot between underfitting and overfitting.


The interface, with default values:
  • Data & Model Fit panel: shows the dataset y = sin(2πx) + noise on x ∈ [0, 1], the fitted blue curve, training points (solid), validation points (hollow), and live Train MSE and Validation MSE readouts.
  • Training vs Validation Error panel: the validation curve typically goes down then up; the lowest point marks the sweet spot (best generalization).
  • 📦 Dataset controls: Samples (N) = 120, Noise σ = 0.20, Train / Val split = 70 / 30.
  • 🤖 Model (Polynomial Regression) controls: Degree = 3, Ridge λ = 1e-6.
  • 🎯 Sweet Spot readout: the degree with minimum validation error and the Val MSE at that degree.
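To make the panel defaults concrete, here is a minimal data-generation sketch in Python. It is an illustration of the same recipe, not the tool's own code; NumPy, the fixed seed, and the variable names are assumptions.

```python
# Synthetic data matching the panel defaults above:
# y = sin(2*pi*x) + Gaussian noise, N = 120, sigma = 0.20, 70/30 split.
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility

N, sigma = 120, 0.20
x = rng.uniform(0.0, 1.0, N)            # x ~ Uniform[0, 1]
y = np.sin(2 * np.pi * x) + rng.normal(0.0, sigma, N)

# Random 70/30 train/validation split
idx = rng.permutation(N)
n_train = int(0.7 * N)
x_tr, y_tr = x[idx[:n_train]], y[idx[:n_train]]
x_va, y_va = x[idx[n_train:]], y[idx[n_train:]]
```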

How to use & What to learn

Why this is useful: it shows the bias–variance tradeoff in a single place. As model complexity increases, training error keeps dropping, while validation error first drops and then rises as the model begins to overfit.

How to use: Drag Degree to change model capacity and watch the blue fit line and the error curves. Use Noise and Samples to change data difficulty, and Ridge λ to regularize (smooth) the fit.
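Under the hood, the error curves amount to sweeping the degree and recording the two MSEs. Below is a hedged, self-contained sketch of that sweep in NumPy; the demo's actual implementation is not shown here, and fitting ridge via the normal equations is an assumption.

```python
# Sweep polynomial degree, fit a ridge-regularized polynomial on the
# training set, and record train/validation MSE; the degree with the
# lowest validation MSE is the "sweet spot".
import numpy as np

def fit_poly_ridge(x, y, degree, lam=1e-6):
    """Ridge least squares on a polynomial (Vandermonde) basis."""
    X = np.vander(x, degree + 1, increasing=True)
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)

def poly_mse(w, x, y):
    X = np.vander(x, len(w), increasing=True)
    return float(np.mean((X @ w - y) ** 2))

# Same synthetic setup as in the data sketch above
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 120)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.20, 120)
idx = rng.permutation(120)
x_tr, y_tr, x_va, y_va = x[idx[:84]], y[idx[:84]], x[idx[84:]], y[idx[84:]]

train_err, val_err = [], []
for d in range(0, 21):
    w = fit_poly_ridge(x_tr, y_tr, d)
    train_err.append(poly_mse(w, x_tr, y_tr))
    val_err.append(poly_mse(w, x_va, y_va))

sweet_spot = int(np.argmin(val_err))    # degree with the lowest validation MSE
print(f"sweet spot: degree {sweet_spot}, val MSE {val_err[sweet_spot]:.4f}")
```

In the live tool, Compute Curve plots these two error lists, and the marked minimum of the validation curve is the sweet spot reported in the panel.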

Pros
  • Immediate intuition for underfitting vs overfitting.
  • Shows how regularization (λ) can reduce variance.
  • “Sweet spot” highlights the degree that minimizes validation error.
Cons
  • Toy dataset; real data can behave differently.
  • Focuses on MSE; other tasks may use different metrics.
  • Only polynomial regression—other models may respond differently.
Reading the graphs
  • Data & Fit: Blue curve is the model; solid points are training data; hollow points are validation. If degree is too low, the fit is too smooth (high bias). If degree is too high, the curve wiggles through noise (high variance).
  • Error curves: Training MSE typically decreases with degree, while Validation MSE forms a U-shape. The minimum of the validation curve is the best generalization (“sweet spot”).
  • Ridge λ: Increasing λ smooths the curve, often raising training error slightly but lowering validation error when the model is too wiggly (a formula sketch follows after the takeaway below).
Takeaway: choose capacity (and regularization) that minimizes validation error, not just training error. More data and appropriate regularization reduce variance.
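The Ridge λ bullet above corresponds to the standard ridge-regression formulation, sketched here; the demo's exact internals are an assumption, this is the textbook form in which λ penalizes large polynomial coefficients.

```latex
% Ridge-regularized polynomial least squares.
% \Phi: n x (d+1) polynomial (Vandermonde) feature matrix,
% w: coefficient vector, \lambda: Ridge penalty, I: identity matrix.
\hat{w}
  = \arg\min_{w}\; \lVert \Phi w - y \rVert_2^2 + \lambda \lVert w \rVert_2^2
  = \left( \Phi^{\top}\Phi + \lambda I \right)^{-1} \Phi^{\top} y
```

Larger λ shrinks the coefficients toward zero, which is why the fitted curve becomes smoother and the training error creeps up slightly.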

Quick start
  1. Adjust Samples and Noise, then click Generate.
  2. Move the Degree slider to change model complexity and click Fit Model.
  3. Click Compute Curve to plot Training vs Validation MSE across degrees.
Underfitting vs Overfitting
  • Low degree (e.g., 0–2): high bias → both Train and Val error high.
  • High degree (e.g., 12–20): low bias but high variance → Train low, Val rises.
  • Sweet spot: the degree with minimum Val MSE → best generalization (a sketch that estimates bias² and variance directly follows this list).
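Bias and variance can also be measured directly. The small illustrative sketch below (not part of the demo; the degree choices and helper are assumptions) refits a fixed-degree polynomial on many freshly drawn noisy datasets, then compares the average prediction to the true sine curve (bias²) and the spread of individual fits around that average (variance).

```python
# Empirical bias^2 / variance estimate for a few polynomial degrees,
# using repeated draws of the same noisy-sine data-generating process.
import numpy as np

def fit_predict(degree, x_grid, rng, n=120, sigma=0.20, lam=1e-6):
    """Draw one fresh dataset, fit a ridge polynomial, predict on x_grid."""
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, sigma, n)
    X = np.vander(x, degree + 1, increasing=True)
    w = np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)
    return np.vander(x_grid, degree + 1, increasing=True) @ w

rng = np.random.default_rng(0)
x_grid = np.linspace(0, 1, 200)
true_f = np.sin(2 * np.pi * x_grid)

for degree in (1, 3, 12):                      # underfit, moderate, overfit
    preds = np.array([fit_predict(degree, x_grid, rng) for _ in range(200)])
    bias2 = np.mean((preds.mean(axis=0) - true_f) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree {degree:2d}: bias^2 = {bias2:.4f}, variance = {variance:.4f}")
```

Typically bias² dominates at low degree and variance at high degree, mirroring the bullets above.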
Tips
  • Use more samples or add Ridge λ to stabilize high-degree fits.
  • Re-shuffle the train/validation split to see how sensitive the sweet spot is (see the sketch after these tips).
  • Try extreme noise to see validation error increase overall.
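The re-shuffle tip can likewise be scripted: repeat the 70/30 split with different random seeds and see how much the sweet-spot degree moves. The self-contained sketch below is illustrative, not the demo's code.

```python
# Sensitivity of the sweet-spot degree to the random train/validation split.
import numpy as np

def val_curve(x_tr, y_tr, x_va, y_va, max_degree=20, lam=1e-6):
    """Validation MSE for each polynomial degree (ridge-regularized fit)."""
    errs = []
    for d in range(max_degree + 1):
        X = np.vander(x_tr, d + 1, increasing=True)
        w = np.linalg.solve(X.T @ X + lam * np.eye(d + 1), X.T @ y_tr)
        Xv = np.vander(x_va, d + 1, increasing=True)
        errs.append(np.mean((Xv @ w - y_va) ** 2))
    return np.array(errs)

data_rng = np.random.default_rng(0)
x = data_rng.uniform(0, 1, 120)
y = np.sin(2 * np.pi * x) + data_rng.normal(0, 0.20, 120)

for seed in range(5):
    split_rng = np.random.default_rng(seed)
    idx = split_rng.permutation(len(x))
    tr, va = idx[:84], idx[84:]                 # 70/30 of N = 120
    best = int(np.argmin(val_curve(x[tr], y[tr], x[va], y[va])))
    print(f"seed {seed}: sweet spot at degree {best}")
```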
