# 【AAAI 2024】Text-based Occluded Person Re-identification via Multi-Granularity Contrastive Consistency Learning
This repository offers the official implementation of MGCC in PyTorch.
- PyTorch version = 1.7.1
- Install other libraries via `pip install -r requirements.txt`
## CUHK-PEDES

Download the CUHK-PEDES dataset from here.

Organize it in the `./dataset/CUHK-PEDES/` folder as follows:

```
|-- dataset/
|   |-- CUHK-PEDES/
|       |-- imgs
|           |-- cam_a
|           |-- cam_b
|           |-- ...
|       |-- reid_raw.json
|-- others/
```
## ICFG-PEDES

Download the ICFG-PEDES dataset from here.

Organize it in the `./dataset/ICFG-PEDES/` folder as follows:

```
|-- dataset/
|   |-- ICFG-PEDES/
|       |-- imgs
|           |-- test
|           |-- train
|       |-- ICFG-PEDES.json
|-- others/
```
## RSTPReid

Download the RSTPReid dataset from here.

Organize it in the `./dataset/RSTPReid/` folder as follows:

```
|-- dataset/
|   |-- RSTPReid/
|       |-- imgs
|       |-- data_captions.json
|-- others/
```
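After arranging the files, a small sanity check can confirm that the expected entries exist under each dataset root. The helper below is purely illustrative and not part of this repository; the expected entries mirror the directory trees above:

```python
import os

# Expected entries under each dataset root, mirroring the trees above.
# Illustrative only -- this checker is not part of the MGCC codebase.
EXPECTED = {
    "CUHK-PEDES": ["imgs", "reid_raw.json"],
    "ICFG-PEDES": ["imgs", "ICFG-PEDES.json"],
    "RSTPReid": ["imgs", "data_captions.json"],
}

def check_dataset_layout(dataset_dir, name):
    """Return the list of expected entries missing under dataset_dir/name."""
    root = os.path.join(dataset_dir, name)
    return [entry for entry in EXPECTED[name]
            if not os.path.exists(os.path.join(root, entry))]
```

For example, `check_dataset_layout("./dataset", "CUHK-PEDES")` returns an empty list when the layout matches the tree above, and otherwise lists what is missing.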
## Occlusion Instance Augmentation

After changing the parameters of the `parse_args` function in `process_data.py` according to the different datasets, run `process_data.py` in the `dataset` folder.
## About the pretrained CLIP and Bert checkpoints

Download the pretrained CLIP checkpoint from here and save it in `./src/pretrain/clip-vit-base-patch32/`.

Download the pretrained Bert checkpoint from here and save it in `./src/pretrain/bert-base-uncased/`.
## About the running scripts

Using CUHK-PEDES as an example, start training with:

```
sh experiment/CUHK-PEDES/train.sh
```

After training is done, you can test your model by running:

```
sh experiment/CUHK-PEDES/test.sh
```

For the usage of the different parameters, refer to `src/option/options.py` for the detailed meaning of each one.
If you find our method useful in your work, please consider starring 🌟 this repo and citing 📑 our paper:
```bibtex
@inproceedings{wu2024text,
  title={Text-based Occluded Person Re-identification via Multi-Granularity Contrastive Consistency Learning},
  author={Wu, Xinyi and Ma, Wentao and Guo, Dan and Zhou, Tongqing and Zhao, Shan and Cai, Zhiping},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={6},
  pages={6162--6170},
  year={2024}
}
```
The implementation of our paper relies on resources from SSAN, CLIP, and XCLIP. We thank the original authors for open-sourcing their work.
