Fit a polynomial regression to a noisy sine wave. Explore how model complexity (the polynomial degree) affects training and validation error, and find the sweet spot between underfitting and overfitting.
Why this is useful: It shows the bias–variance tradeoff in one view. As model complexity increases, training error keeps dropping, but validation error first falls and then rises as the model starts to overfit.
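Under the hood this amounts to a degree sweep with held-out error tracking. Here is a minimal NumPy sketch of that loop; the sample count, noise level, split ratio, and degree range are illustrative assumptions, not the demo's actual settings:

```python
# Minimal degree sweep: fit polynomials of increasing degree to a noisy
# sine wave and compare training vs. validation error. All constants
# here (60 samples, noise 0.3, 70/30 split) are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)
n, noise = 60, 0.3
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, noise, n)

# Hold out 30% of the points as a validation set.
split = int(0.7 * n)
x_tr, y_tr, x_va, y_va = x[:split], y[:split], x[split:], y[split:]

for degree in range(1, 13):
    coeffs = np.polyfit(x_tr, y_tr, degree)   # least-squares polynomial fit
    mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    mse_va = np.mean((np.polyval(coeffs, x_va) - y_va) ** 2)
    print(f"degree {degree:2d}   train MSE {mse_tr:.4f}   val MSE {mse_va:.4f}")
```

Training MSE typically falls monotonically with degree, while validation MSE bottoms out at some moderate degree and climbs after that; that minimum is the sweet spot the demo highlights.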
How to use: Drag Degree to change model capacity and watch the blue fit line and the error curves. Use Noise and Samples to change data difficulty, and Ridge λ to regularize (smooth) the fit.
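Ridge regression adds a penalty λ‖β‖² to the least-squares objective, which shrinks the coefficients and smooths the fit. Here is a sketch of the closed-form solve on a polynomial (Vandermonde) feature matrix, assuming for simplicity that the intercept is penalized along with the other coefficients; the degree and λ values are illustrative, not the demo's defaults:

```python
# Ridge-regularized polynomial fit via the closed-form normal equations:
# beta = (X^T X + lam * I)^(-1) X^T y.
import numpy as np

def fit_ridge_poly(x, y, degree, lam):
    """Return ridge coefficients for polynomial features [1, x, ..., x^degree]."""
    X = np.vander(x, degree + 1, increasing=True)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 40)

for lam in (1e-6, 1e-3, 1e-1):
    beta = fit_ridge_poly(x, y, degree=10, lam=lam)
    pred = np.vander(x, 11, increasing=True) @ beta
    print(f"lambda {lam:g}   train MSE {np.mean((pred - y) ** 2):.4f}")
```

Larger λ trades a bit of training error for a flatter, more stable curve, which is why raising it tames the wiggles of a high-degree fit.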