Enhancing uncertainty estimation and outlier detection through confidence calibration for out-of-distribution data.
This repository explores methods for estimating uncertainty and recognizing outliers, both of which are critical for building trustworthy, robust machine learning and deep learning models. While such models have demonstrated outstanding performance on many tasks, they frequently struggle with out-of-distribution (OoD) data. In this thesis, we explore and extend prominent approaches such as outlier exposure (OE) and the decomposed confidence architecture, using both in-distribution and out-of-distribution data. These methods improve the model's calibration and its generalization to unseen data.
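For context, outlier exposure [2] trains the classifier with an auxiliary term that pushes its softmax output on auxiliary outlier batches toward the uniform distribution. Below is a minimal PyTorch sketch of that objective; the function name `oe_loss` and the default weight `lam=0.5` are illustrative assumptions, not the repository's actual training code.

```python
import torch
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Outlier exposure objective (Hendrycks et al., 2018) [2]:
    standard cross-entropy on the in-distribution batch plus a term
    that pushes the softmax over the outlier batch toward uniform.
    Cross-entropy to the uniform distribution equals the per-sample
    mean negative log-softmax. `lam` is an illustrative default."""
    ce_in = F.cross_entropy(logits_in, targets_in)
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean(dim=1).mean()
    return ce_in + lam * uniform_ce
```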
The primary goal is to enhance confidence calibration in machine learning models and to improve anomaly detection on out-of-distribution data using outlier exposure and decomposed confidence.
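The decomposed confidence architecture of [3] replaces the usual logit layer with a quotient: the logit for class i is modelled as h_i(x) / g(x), where g(x) is a scalar, input-dependent denominator. The sketch below shows one common variant with a linear h; the class name and exact layer choices are assumptions for illustration, not the thesis implementation.

```python
import torch
import torch.nn as nn

class DeConfHead(nn.Module):
    """Decomposed confidence head in the spirit of Generalized ODIN [3]:
    the logit for class i is h_i(x) / g(x), where h produces
    class-dependent scores and g a scalar, data-dependent denominator.
    The linear choice for h is one of the variants from the paper."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.h = nn.Linear(feat_dim, num_classes)   # class scores h_i(x)
        self.g = nn.Sequential(                     # scalar denominator g(x)
            nn.Linear(feat_dim, 1),
            nn.BatchNorm1d(1),
            nn.Sigmoid(),
        )

    def forward(self, features):
        h = self.h(features)
        g = self.g(features)
        # Train with softmax cross-entropy on h / g; at test time,
        # max_i h_i(x) or g(x) serves as the OoD score.
        return h / g, h, g
```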
To reproduce the experiments and analyses conducted in this thesis, follow these steps:
- Clone this repository:
  git clone https://github.com/ashishsaini01/master-thesis.git
- Install the required dependencies:
  pip install -e .
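Once the environment is set up, OoD detection quality is typically measured by scoring each sample and computing the AUROC between in-distribution and out-of-distribution scores. As a generic illustration (not a script shipped with this repository), the sketch below uses the maximum softmax probability baseline of [1]; `model`, `id_loader`, and `ood_loader` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_auc_score

@torch.no_grad()
def msp_scores(model, loader, device="cpu"):
    """Maximum softmax probability (MSP) per sample, the OoD-detection
    baseline of Hendrycks & Gimpel [1]. Higher scores mean the model
    is more confident the sample is in-distribution."""
    model.eval()
    scores = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        scores.append(probs.max(dim=1).values.cpu())
    return torch.cat(scores)

# In-distribution samples should receive higher MSP scores, so label
# them 1 and the outliers 0 before computing AUROC (placeholders below):
# s_id = msp_scores(model, id_loader)
# s_ood = msp_scores(model, ood_loader)
# auroc = roc_auc_score([1] * len(s_id) + [0] * len(s_ood),
#                       torch.cat([s_id, s_ood]).numpy())
```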
[1] Hendrycks, Dan and Kevin Gimpel: A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016.
[2] Hendrycks, Dan, Mantas Mazeika and Thomas Dietterich: Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606, 2018.
[3] Hsu, Yen-Chang, Yilin Shen, Hongxia Jin and Zsolt Kira: Generalized ODIN: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10951–10960, 2020.
If you have any questions, suggestions, or issues regarding this repository or the implemented model, please feel free to contact the author:
Ashish Saini
Email: [email protected] | [email protected]