This is a TensorFlow implementation of the face recognizer described in the paper "FaceNet: A Unified Embedding for Face Recognition and Clustering". The project also uses ideas from the paper "A Discriminative Feature Learning Approach for Deep Face Recognition" as well as the paper "Deep Face Recognition" from the Visual Geometry Group at Oxford.
Currently this repo is compatible with Tensorflow r0.12.
Date | Update |
---|---|
2017-02-03 | Added models where only trainable variables have been stored in the checkpoint. These are therefore significantly smaller. |
2017-01-27 | Added a model trained on a subset of the MS-Celeb-1M dataset. The LFW accuracy of this model is around 0.994. |
2017-01-02 | Updated the code to run with Tensorflow r0.12. It may not run with older versions of Tensorflow. |
Model name | LFW accuracy | Training dataset |
---|---|---|
20170131-005910 | 0.985 | CASIA-WebFace |
20170131-234652 | 0.993 | MS-Celeb-1M |
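
To use one of the pre-trained models, the stored graph and checkpoint need to be restored into a session. The sketch below shows a typical way to do this with the TensorFlow r0.12-era API; the file names and tensor names are assumptions and depend on the contents of the downloaded model archive.

```python
import tensorflow as tf

# File and tensor names below are assumptions; they depend on the contents
# of the downloaded model archive and on how the model was exported.
META_FILE = 'model-20170131-234652.meta'
CKPT_FILE = 'model-20170131-234652.ckpt'

with tf.Graph().as_default():
    with tf.Session() as sess:
        # Load the graph definition and restore the trained weights
        saver = tf.train.import_meta_graph(META_FILE)
        saver.restore(sess, CKPT_FILE)

        # Look up the input and embedding tensors by name
        graph = tf.get_default_graph()
        images_placeholder = graph.get_tensor_by_name('input:0')
        embeddings = graph.get_tensor_by_name('embeddings:0')
```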
The code is heavily inspired by the OpenFace implementation.
The CASIA-WebFace dataset has been used for training. This training set consists of a total of 453 453 images over 10 575 identities after face detection. Some performance improvement has been seen when the dataset is filtered before training. More information about how this was done will come later. The best performing model has been trained on a subset of the MS-Celeb-1M dataset. This dataset is significantly larger but also contains significantly more label noise, and therefore it is crucial to apply dataset filtering to it.
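
As a rough illustration of what dataset filtering could look like, the sketch below keeps only the identities that have at least a minimum number of face images. This is not the exact filtering used for the published models (more details on that will follow); the directory layout (one subdirectory per identity) and the threshold are assumptions.

```python
import os
import shutil

def filter_dataset(input_dir, output_dir, min_images_per_class=10):
    """Copy only the identities that have at least `min_images_per_class`
    face images. Simplified illustration only; the filtering used for the
    published models also removed mislabeled images, which is not shown here."""
    for class_name in sorted(os.listdir(input_dir)):
        class_dir = os.path.join(input_dir, class_name)
        if not os.path.isdir(class_dir):
            continue
        images = [f for f in os.listdir(class_dir)
                  if f.lower().endswith(('.png', '.jpg', '.jpeg'))]
        if len(images) >= min_images_per_class:
            dst_dir = os.path.join(output_dir, class_name)
            os.makedirs(dst_dir)
            for image_name in images:
                shutil.copy(os.path.join(class_dir, image_name), dst_dir)
```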
One problem with the above approach is that the Dlib face detector misses some of the hard examples (partial occlusion, silhouettes, etc.). This makes the training set too "easy", which causes the model to perform worse on other benchmarks. To solve this, other face landmark detectors have been tested. One face landmark detector that has proven to work very well in this setting is the Multi-task CNN (MTCNN). A Matlab/Caffe implementation can be found here, and this has been used for face alignment with very good results. Experimental code that has been used to align the training datasets can be found here. However, work is ongoing to reimplement MTCNN face alignment in Python/Tensorflow. Some work still remains, but the current implementation can be found here.
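
Independently of which face detector is used, the alignment step essentially boils down to cropping each detected face with some margin and resizing it to the input size used for training. The sketch below illustrates this with assumed values for the margin and output size; it is not the exact code used to align the training datasets.

```python
import numpy as np
from PIL import Image

def crop_and_resize_face(img, bounding_box, margin=44, image_size=182):
    """Crop a detected face with a fixed pixel margin and resize it to the
    training input size. `img` is an H x W x 3 numpy array and
    `bounding_box` is (x1, y1, x2, y2) from the face detector. The margin
    and output size are typical values, not necessarily the ones used for
    the published models."""
    img_height, img_width = img.shape[0], img.shape[1]
    x1, y1, x2, y2 = bounding_box
    # Expand the box by half the margin on each side, clipped to the image
    x1 = int(max(x1 - margin / 2, 0))
    y1 = int(max(y1 - margin / 2, 0))
    x2 = int(min(x2 + margin / 2, img_width))
    y2 = int(min(y2 + margin / 2, img_height))
    cropped = img[y1:y2, x1:x2, :]
    resized = Image.fromarray(cropped).resize((image_size, image_size), Image.BILINEAR)
    return np.asarray(resized)
```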
Currently, the best results are achieved by training the model as a classifier with the addition of Center loss. Details on how to train a model as a classifier can be found on the page Classifier training of Inception-ResNet-v1.
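
For reference, a minimal sketch of how a center loss term can be implemented in TensorFlow is shown below: one center per class is kept in a non-trainable variable, the centers are pulled towards the embeddings in each batch, and the loss is the mean squared distance between each embedding and its class center. The function and parameter names are illustrative, not necessarily the exact ones used in this repo.

```python
import tensorflow as tf

def center_loss(features, labels, alpha, nrof_classes):
    """Center loss sketch: penalize the distance between each embedding and
    the center of its class, and update the centers towards the batch
    embeddings with rate `alpha`."""
    nrof_features = features.get_shape()[1]
    centers = tf.get_variable('centers', [nrof_classes, nrof_features],
                              dtype=tf.float32,
                              initializer=tf.constant_initializer(0),
                              trainable=False)
    labels = tf.reshape(labels, [-1])
    centers_batch = tf.gather(centers, labels)
    # Move the centers of the classes present in the batch towards the
    # corresponding embeddings
    diff = (1 - alpha) * (centers_batch - features)
    centers = tf.scatter_sub(centers, labels, diff)
    loss = tf.reduce_mean(tf.square(features - centers_batch))
    return loss, centers

# The total training loss is then the softmax cross-entropy loss plus the
# center loss scaled by a small factor, e.g.:
#   total_loss = cross_entropy_loss + center_loss_factor * loss
```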
Currently, the best performing model is an Inception-Resnet-v1 model trained on CASIA-WebFace aligned with MTCNN. The alignment step requires Matlab and Caffe to be installed, which takes some extra work. This will become easier when the Python/Tensorflow implementation is fully functional.
The accuracy on LFW for the model 20170131-234652 is 0.993±0.004. A description of how to run the test can be found on the page Validate on LFW.
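
As a simplified illustration of the evaluation, the sketch below classifies a pair of face images as the same person when the squared L2 distance between their embeddings falls below a threshold. The actual LFW protocol selects the threshold with 10-fold cross-validation, so the fixed threshold here is only a placeholder.

```python
import numpy as np

def evaluate_pairs(embeddings1, embeddings2, is_same, threshold=1.1):
    """Predict "same person" for a pair when the squared L2 distance between
    the two embeddings is below `threshold`, and return the accuracy over
    all pairs. The threshold value is a placeholder; in the real protocol it
    is selected by cross-validation."""
    diff = embeddings1 - embeddings2
    dist = np.sum(np.square(diff), axis=1)
    predictions = dist < threshold
    return np.mean(predictions == is_same)
```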