Mitigating Bias in Calibration Error Estimation - Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, Michael C. Mozer (AISTATS 2022)
Calibration of Neural Networks using Splines - Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, Richard Hartley (ICLR 2021)
- top-k calibration
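  A minimal sketch of one common reading of top-k calibration (summed top-k confidence versus whether the true label lands in the top k), binned the usual equal-width way; `probs` and `labels` are assumed NumPy inputs and this is not the paper's exact evaluation code:

  ```python
  import numpy as np

  def within_top_k_ece(probs, labels, k=5, n_bins=15):
      """ECE on the event "true label is within the top-k predictions",
      scored by the summed probability of those k classes (illustrative variant)."""
      top_k = np.argsort(-probs, axis=1)[:, :k]
      conf = np.take_along_axis(probs, top_k, axis=1).sum(axis=1)
      hit = (top_k == labels[:, None]).any(axis=1).astype(float)
      bins = np.linspace(0.0, 1.0, n_bins + 1)
      ece = 0.0
      for lo, hi in zip(bins[:-1], bins[1:]):
          mask = (conf > lo) & (conf <= hi)
          if mask.any():
              ece += mask.mean() * abs(hit[mask].mean() - conf[mask].mean())
      return ece
  ```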
Revisiting the Calibration of Modern Neural Networks - Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, Mario Lucic (NeurIPS 2021)
Soft Calibration Objectives for Neural Networks - Archit Karandikar, Nicholas Cain, Dustin Tran, Balaji Lakshminarayanan, Jonathon Shlens, Michael C. Mozer, Becca Roelofs (NeurIPS 2021)
Calibrating Predictions to Decisions: A Novel Approach to Multi-Class Calibration - Shengjia Zhao, Michael P. Kim, Roshni Sahoo, Tengyu Ma, Stefano Ermon (NeurIPS 2021)
Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification - Youngseog Chung, Ian Char, Han Guo, Jeff Schneider, Willie Neiswanger
Measuring Calibration in Deep Learning - Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, Dustin Tran
- Static Calibration Error (SCE), Adaptive Calibration Error (ACE)
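  A rough sketch of Adaptive Calibration Error under equal-mass (quantile) binning, computed per class and averaged; `probs` is an assumed (N, K) probability matrix and `labels` an (N,) array of class indices:

  ```python
  import numpy as np

  def adaptive_calibration_error(probs, labels, n_bins=15):
      """ACE: like classwise ECE, but each bin holds (roughly) equal numbers of samples."""
      n, k = probs.shape
      total, count = 0.0, 0
      for c in range(k):
          order = np.argsort(probs[:, c])
          p_c = probs[order, c]
          y_c = (labels[order] == c).astype(float)
          for chunk_p, chunk_y in zip(np.array_split(p_c, n_bins),
                                      np.array_split(y_c, n_bins)):
              if len(chunk_p) == 0:
                  continue
              total += abs(chunk_y.mean() - chunk_p.mean())
              count += 1
      return total / count
  ```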
Calibrating Deep Neural Networks using Focal Loss - Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H.S. Torr, Puneet K. Dokania (NeurIPS 2020)
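  A short PyTorch sketch of a multi-class focal loss of the kind the paper trains with (`gamma` is the focusing parameter; the paper's scheduling of gamma is not reproduced here):

  ```python
  import torch
  import torch.nn.functional as F

  def focal_loss(logits, targets, gamma=2.0):
      """Focal loss: down-weight confident examples by (1 - p_t)^gamma."""
      logp = F.log_softmax(logits, dim=-1)
      logp_t = logp.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of the true class
      p_t = logp_t.exp()
      return (-(1.0 - p_t) ** gamma * logp_t).mean()
  ```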
Individual Calibration with Randomized Forecasting - Shengjia Zhao, Tengyu Ma, Stefano Ermon (ICML 2020)
Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning - Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han (ICML 2020)
Verified Uncertainty Calibration - Ananya Kumar, Percy Liang, Tengyu Ma (NeurIPS 2019)
Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration - Meelis Kull, Miquel Perello-Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, Peter Flach (NeurIPS 2019)
- Classwise-ECE
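  A sketch of classwise-ECE, assuming `probs` is an (N, K) matrix of predicted class probabilities and `labels` holds integer class indices: each class's predicted probability is binned against the empirical frequency of that class, and the per-class errors are averaged.

  ```python
  import numpy as np

  def classwise_ece(probs, labels, n_bins=15):
      """Average over classes of the binned calibration error of each class probability."""
      n, k = probs.shape
      bins = np.linspace(0.0, 1.0, n_bins + 1)
      total = 0.0
      for c in range(k):
          p_c = probs[:, c]
          y_c = (labels == c).astype(float)
          for lo, hi in zip(bins[:-1], bins[1:]):
              mask = (p_c > lo) & (p_c <= hi)
              if mask.any():
                  total += mask.mean() * abs(y_c[mask].mean() - p_c[mask].mean())
      return total / k
  ```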
Evaluating model calibration in classification - Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, Thomas B. Schön (AISTATS 2019)
Calibration tests in multi-class classification: A unifying framework - David Widmann, Fredrik Lindsten, Dave Zachariah (NeurIPS 2019)
On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks - Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak (NeurIPS 2019)
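  A minimal mixup sketch in PyTorch (`alpha` is the Beta-distribution parameter; one-hot labels are assumed), illustrating the augmentation the paper studies:

  ```python
  import numpy as np
  import torch

  def mixup_batch(x, y_onehot, alpha=0.2):
      """Mixup: convex-combine random pairs of inputs and their one-hot labels."""
      lam = float(np.random.beta(alpha, alpha))
      perm = torch.randperm(x.size(0))
      x_mix = lam * x + (1.0 - lam) * x[perm]
      y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
      return x_mix, y_mix
  ```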
Deep Anomaly Detection with Outlier Exposure - Dan Hendrycks, Mantas Mazeika, Thomas Dietterich (ICLR 2019)
- RMS and MAD calibration error, Soft F1 score
Trainable Calibration Measures for Neural Networks from Kernel Mean Embeddings - Aviral Kumar, Sunita Sarawagi, Ujjwal Jain (ICML 2018)
- Maximum Mean Calibration Error (MMCE)
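  A sketch of a (biased) MMCE estimator with a Laplacian kernel over confidences; the kernel and its width are assumed hyperparameters here, not necessarily the paper's exact values:

  ```python
  import numpy as np

  def mmce(confidences, correct, kernel_width=0.4):
      """MMCE estimate from top-label confidences and 0/1 correctness indicators."""
      diff = correct - confidences  # per-sample calibration residuals
      k = np.exp(-np.abs(confidences[:, None] - confidences[None, :]) / kernel_width)
      m = len(confidences)
      return np.sqrt(max((diff[:, None] * diff[None, :] * k).sum() / m**2, 0.0))
  ```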
On Calibration of Modern Neural Networks - Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger (ICML 2017)
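  A minimal post-hoc temperature-scaling sketch, fitting a single scalar T on held-out logits by minimizing NLL; `val_logits` and `val_labels` are placeholder names and the optimizer choice is an assumption:

  ```python
  import numpy as np
  from scipy.optimize import minimize_scalar

  def log_softmax(z):
      z = z - z.max(axis=1, keepdims=True)
      return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

  def fit_temperature(val_logits, val_labels):
      """Find T > 0 minimizing the NLL of softmax(logits / T) on a validation split."""
      def nll(T):
          logp = log_softmax(val_logits / T)
          return -logp[np.arange(len(val_labels)), val_labels].mean()
      return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x

  # at test time, use softmax(test_logits / T) as the calibrated probabilities
  ```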
Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers - Meelis Kull, Telmo Silva Filho, Peter Flach (AISTATS 2017)
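  A sketch of beta calibration for binary scores, fit as logistic regression on the features (ln p, -ln(1 - p)) of the uncalibrated probability; the paper's handling of sign constraints on the fitted coefficients is omitted:

  ```python
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  def fit_beta_calibration(p_val, y_val, eps=1e-6):
      """Return a calibration map learned from validation scores p_val and labels y_val."""
      p = np.clip(p_val, eps, 1 - eps)
      feats = np.column_stack([np.log(p), -np.log(1.0 - p)])
      lr = LogisticRegression().fit(feats, y_val)

      def calibrate(p_new):
          p_new = np.clip(p_new, eps, 1 - eps)
          f = np.column_stack([np.log(p_new), -np.log(1.0 - p_new)])
          return lr.predict_proba(f)[:, 1]
      return calibrate
  ```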
Obtaining Well Calibrated Probabilities Using Bayesian Binning - Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht (AAAI 2015)
- Expected Calibration Error (ECE), Maximum Calibration Error (MCE)
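  A minimal equal-width-bin ECE/MCE sketch, assuming NumPy arrays of top-label confidences and 0/1 correctness indicators (an illustration of the standard binned estimators, not any paper's reference code):

  ```python
  import numpy as np

  def ece_mce(confidences, correct, n_bins=15):
      """Equal-width-bin Expected and Maximum Calibration Error."""
      bins = np.linspace(0.0, 1.0, n_bins + 1)
      ece, mce = 0.0, 0.0
      for lo, hi in zip(bins[:-1], bins[1:]):
          mask = (confidences > lo) & (confidences <= hi)
          if not mask.any():
              continue
          gap = abs(correct[mask].mean() - confidences[mask].mean())
          ece += mask.mean() * gap  # weight each bin by its share of samples
          mce = max(mce, gap)       # MCE is the worst-bin gap
      return ece, mce
  ```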