Understand what a predicted probability means, how calibration works, and how thresholds affect decisions. Tune class imbalance, score separation, and calibration method to see effects on calibration error, loss, and ROC/PR performance.
What: A predicted probability is the model’s confidence (0–1) that an outcome is positive. Among many similar cases that receive a prediction of 0.8, about 80% should turn out positive.
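For intuition, here is a minimal sketch (NumPy only, with synthetic data constructed to be perfectly calibrated by design) of what that frequency claim means in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, perfectly calibrated predictions: outcomes are drawn with
# exactly the predicted probability, so observed frequency should match.
p = rng.uniform(0, 1, size=50_000)   # predicted probabilities
y = rng.binomial(1, p)               # outcomes sampled with those probabilities

# Among cases predicted near 0.8, the observed positive rate should be ~0.8.
near_08 = (p > 0.75) & (p < 0.85)
print(f"predicted ~0.80, observed positive rate: {y[near_08].mean():.3f}")
```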
Calibration: Well-calibrated models match predicted probabilities to observed frequencies. Use Platt scaling (a logistic remap of scores) or isotonic regression (a monotonic, non-parametric fit) to correct over- or under-confidence.
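The tool has its own in-browser implementation; as a rough sketch of the same two recalibration methods, here is how they could be applied with scikit-learn to overconfident scores (all data and variable names below are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

# Illustrative overconfident scores: true probabilities squashed toward 0/1.
p_true = rng.uniform(0.05, 0.95, size=20_000)
y = rng.binomial(1, p_true)
scores = p_true**3 / (p_true**3 + (1 - p_true)**3)
scores = np.clip(scores, 1e-6, 1 - 1e-6)

# Platt scaling: fit a one-feature logistic regression on the log-odds of the
# raw scores (in practice this should be fit on held-out data).
logit = np.log(scores / (1 - scores)).reshape(-1, 1)
platt = LogisticRegression().fit(logit, y)
p_platt = platt.predict_proba(logit)[:, 1]

# Isotonic regression: monotonic, non-parametric remap of score -> probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, y)
p_iso = iso.predict(scores)
```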
Discrimination vs calibration: ROC/PR curves show how well the model ranks positives above negatives; ECE, Brier score, and Log Loss show how trustworthy the probability values themselves are.
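A sketch of how those metrics could be computed, using scikit-learn for everything except ECE, which is written out here as a simple equal-width-bin version (synthetic data again assumed):

```python
import numpy as np
from sklearn.metrics import (
    average_precision_score, brier_score_loss, log_loss, roc_auc_score,
)

def expected_calibration_error(y_true, p_pred, n_bins=10):
    """Equal-width-bin ECE: per-bin |mean confidence - observed rate|, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(p_pred, edges[1:-1])
    ece = 0.0
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            ece += in_bin.mean() * abs(p_pred[in_bin].mean() - y_true[in_bin].mean())
    return ece

# Synthetic, well-calibrated data for illustration.
rng = np.random.default_rng(2)
p = rng.uniform(0, 1, size=20_000)
y = rng.binomial(1, p)

print("ECE:     ", round(expected_calibration_error(y, p), 4))
print("Brier:   ", round(brier_score_loss(y, p), 4))
print("Log loss:", round(log_loss(y, p), 4))
print("ROC AUC: ", round(roc_auc_score(y, p), 4))            # ranking quality (discrimination)
print("PR AUC:  ", round(average_precision_score(y, p), 4))  # ranking quality under imbalance
```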
Thresholds: The default threshold of 0.5 may not be optimal; adjust τ to account for class imbalance or asymmetric error costs and watch Accuracy/F1 change.
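A minimal sketch of such a threshold sweep (scikit-learn metrics assumed; data is synthetic, not produced by the tool):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(3)
p = rng.uniform(0, 1, size=20_000)   # predicted probabilities
y = rng.binomial(1, p)               # outcomes

# Sweep the decision threshold tau and watch Accuracy and F1 move.
for tau in (0.3, 0.5, 0.7):
    y_hat = (p >= tau).astype(int)
    print(f"tau={tau:.1f}  accuracy={accuracy_score(y, y_hat):.3f}  "
          f"f1={f1_score(y, y_hat):.3f}")
```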
Simulates predicted scores under tunable class imbalance and separation, then maps scores to probabilities with optional calibration (Platt/Isotonic). Plots reliability diagrams and ROC/PR curves and reports ECE, Brier score, and Log Loss, all in your browser.
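A rough sketch of that kind of simulation, not the tool's actual implementation: labels drawn with a chosen positive rate, scores from two Gaussians whose means differ by a "separation" knob, squashed through a sigmoid (all names and parameters below are illustrative):

```python
import numpy as np

def simulate_scores(n=10_000, pos_rate=0.2, separation=1.5, seed=0):
    """Draw labels with the given class imbalance, then scores from two
    Gaussians whose means are `separation` apart, mapped into (0, 1)."""
    rng = np.random.default_rng(seed)
    y = rng.binomial(1, pos_rate, size=n)
    raw = rng.normal(loc=y * separation, scale=1.0)    # negatives ~N(0,1), positives ~N(sep,1)
    p = 1.0 / (1.0 + np.exp(-(raw - separation / 2)))  # sigmoid centered between the classes
    return y, p

y, p = simulate_scores()
print("positive rate:", round(y.mean(), 3), " mean predicted prob:", round(p.mean(), 3))
```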