Addressing fairness and bias in facial recognition with Explainable AI

This tutorial focuses on explainable AI (XAI) and bias in image classification models. We believe that a model's explainability can provide insights into why it may produce biased decisions and how to prevent them. Following this approach, we replicate a ResNet-18 model from the paper 'Face Recognition: Too Bias, or Not Too Bias?' (Robinson et al., 2020) and demonstrate two packages that provide tools for explainable AI: Xplique and Captum.
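To illustrate the core idea behind the attribution maps used in this tutorial, here is a minimal, self-contained sketch in plain NumPy. It is not the tutorial's actual code (which applies Captum/Xplique to a ResNet-18 via autograd): for a hypothetical linear scorer f(x) = w · x, the gradient of the score with respect to each pixel is just w, so |w| plays the role of the saliency map. All names here are illustrative.

```python
import numpy as np

# Toy gradient-based saliency: for a linear class score f(x) = sum(w * x),
# df/dx = w, so the per-pixel importance is |w|. Deep models need autograd
# (as provided by Captum or Xplique) to compute the same quantity.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))   # hypothetical per-pixel weights of the scorer
x = rng.normal(size=(4, 4))   # a tiny 4x4 "image"

score = float((w * x).sum())  # the class score f(x)
saliency = np.abs(w)          # |df/dx| for a linear model

# Normalise to [0, 1] so the map could be rendered as a heatmap
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
print(saliency.shape)
```

The same normalisation step is commonly applied to real attribution maps before plotting, so that maps for different images share a comparable colour scale.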

In the tutorial, we:

  1. Go through various bias assessment techniques and metrics

  2. Give a quick overview of attribution methods and metrics for those methods' evaluation

  3. Apply a naive approach to qualitatively analyze a small sample of images through attribution/saliency maps, deriving differences between highlighted regions across classes.
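As a taste of step 1, the sketch below computes one simple bias-assessment metric: the accuracy gap between demographic subgroups. The function names and the toy data are hypothetical; in the tutorial this kind of metric is evaluated on the classified face-image dataset.

```python
# Hypothetical sketch of a subgroup accuracy gap (a simple fairness metric).
def subgroup_accuracies(y_true, y_pred, groups):
    """Classification accuracy per subgroup label."""
    acc = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        acc[g] = correct / len(idx)
    return acc

def accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy across subgroups (0 means parity)."""
    acc = subgroup_accuracies(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())

# Toy example: perfect accuracy on group "A", 50% on group "B"
y_true = [1, 0, 1, 0]
y_pred = [1, 0, 1, 1]
groups = ["A", "A", "B", "B"]
print(accuracy_gap(y_true, y_pred, groups))  # → 0.5
```

A gap of 0 would indicate equal accuracy across groups; larger values flag that the model performs systematically worse for some subgroup.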

To download the data with classified face images, please use this Dropbox link.

Since Xplique mainly works with TensorFlow, which does not yet support the latest Python versions, please use Python 3.9–3.12.
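If you want to check the constraint above programmatically before installing dependencies, a small helper like the following (hypothetical, not part of the repo) can be dropped into a setup cell:

```python
import sys

def python_version_supported(version_info=sys.version_info):
    """Return True if the interpreter is in the 3.9-3.12 range
    required here because Xplique depends on TensorFlow."""
    return (3, 9) <= tuple(version_info[:2]) <= (3, 12)

print(python_version_supported((3, 11, 0)))  # → True
print(python_version_supported((3, 13, 0)))  # → False
```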

Video Tutorial

The video file was too large for GitHub, so it can be found at this link.

Contributions

Giulia Maria Petrilli (236888) - wrote helper code for data and model loading, preprocessing, and bias assessment; prepared an in-class presentation together with Fanus; reviewed and tested code and added text throughout the tutorial's preparation. You can track related GitHub commits both in this repository AND in our first GitHub repo, which was used before the official link was fixed.

Laia Domenech Burin (241597) - refactored and modularized the code, created the section of the same name in the notebook with plots and accompanying text, debugged the final code, and enriched the interpretations. Last but not least, starred in the main tutorial video.

Fanus Ghorjani (248835) - created the code and text about the Captum library, and worked on the in-class presentation with Giulia.

Sofiya Berdiyeva (246934) - prepared the parts related to the Xplique library and the naive attribution-map analysis, as well as README.md and requirements.txt.

References

Robinson, J. P., Livitz, G., Henon, Y., Qin, C., Fu, Y., & Timoner, S. (2020). Face Recognition: Too Bias, or Not Too Bias? (No. arXiv:2002.06483). arXiv. https://doi.org/10.48550/arXiv.2002.06483

Tutorials—Xplique. (n.d.). Retrieved December 9, 2025, from https://deel-ai.github.io/xplique/latest/tutorials/

About

hertie-school-deep-learning-fall-2025-tutorial-new-deep-learning-2025-tutorial created by GitHub Classroom
