About the Project (arXiv)
3D panoramic multi-person localization and tracking are prominent in many applications; however, conventional methods rely on LiDAR equipment, which is economically expensive and computationally inefficient due to the processing of point-cloud data. In this work, we propose an effective and efficient approach at a low cost. First, we use RGB panoramic videos instead of LiDAR data. Then, we transform human locations from the 2D panoramic image coordinate to the 3D panoramic camera coordinate using camera geometry and a biometric property of humans (i.e., height). Finally, we generate 3D tracklets by associating human appearance and 3D trajectories. We verify the effectiveness of our method on three datasets, including a new one built by us, in terms of 3D single-view multi-person localization, 3D single-view multi-person tracking, and 3D panoramic multi-person localization and tracking.
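To make the 2D-to-3D transformation concrete, the sketch below illustrates the geometry on an equirectangular panorama: a pixel maps to a viewing ray via its azimuth and elevation, and depth follows from the vertical angle a person subtends, given an assumed real height. This is a minimal illustration under our own assumptions, not the repository's actual code; the function names and the default 1.7 m height are hypothetical.

```python
import numpy as np

def pano_pixel_to_ray(u, v, img_w, img_h):
    """Map an equirectangular pixel (u, v) to a unit viewing ray
    in camera coordinates (x: right, y: up, z: forward)."""
    theta = (u / img_w) * 2 * np.pi - np.pi   # azimuth in [-pi, pi)
    phi = np.pi / 2 - (v / img_h) * np.pi     # elevation in [-pi/2, pi/2]
    return np.array([
        np.cos(phi) * np.sin(theta),
        np.sin(phi),
        np.cos(phi) * np.cos(theta),
    ])

def localize_person(box, img_w, img_h, person_height=1.7):
    """Estimate a person's 3D position from a 2D box (x1, y1, x2, y2).

    A person of real height H spanning a vertical angle `dphi` on the
    sphere is approximately H / (2 * tan(dphi / 2)) away from the camera.
    """
    x1, y1, x2, y2 = box
    dphi = (y2 - y1) / img_h * np.pi          # vertical angular span of the box
    depth = person_height / (2 * np.tan(dphi / 2))
    ray = pano_pixel_to_ray((x1 + x2) / 2, (y1 + y2) / 2, img_w, img_h)
    return ray * depth                        # 3D point in camera coordinates
```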
As an example application, machine learning can check whether people keep enough social distance to help prevent the spread of COVID-19.
The code was tested on Ubuntu 18.04, with Anaconda Python 3.6 and PyTorch v1.1.0.
You may need to install the dependencies listed in requirements.txt by
pip3 install -r requirements.txt
- Download the data and put it in the /data folder
- Download the model weights and put them in the /reid folder
- Run pano_detector.ipynb to generate and save 2D detection boxes.
- Run tracking.ipynb to generate and save tracking links (we will later replace the DeepSort tracker with our own); a minimal sketch of the association step is given after this list.
- Run generate_video.ipynb to generate visualization videos.
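For reference, the tracking step associates new detections with existing tracklets using both appearance and 3D position, in the spirit of DeepSort. The sketch below is a minimal illustration under our own assumptions: the function and field names (associate, feat, pos) and the cost weights are hypothetical, not the repository's actual API.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, appearance_weight=0.5, max_cost=0.7):
    """Match 3D tracklets to detections via the Hungarian algorithm.

    Each track/detection is a dict with an L2-normalized appearance
    embedding ("feat") and a 3D position ("pos"). The cost mixes
    appearance (cosine distance) and motion (3D Euclidean distance,
    softly scaled by an assumed 5 m radius and clipped to 1).
    """
    cost = np.zeros((len(tracks), len(detections)))
    for i, trk in enumerate(tracks):
        for j, det in enumerate(detections):
            app = 1.0 - np.dot(trk["feat"], det["feat"])          # cosine distance
            mot = min(np.linalg.norm(trk["pos"] - det["pos"]) / 5.0, 1.0)
            cost[i, j] = appearance_weight * app + (1 - appearance_weight) * mot
    rows, cols = linear_sum_assignment(cost)
    # Keep only matches whose combined cost is low enough.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
```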
The code is distributed under the MIT License. See LICENSE
for more information.
@inproceedings{yang2020mplt,
title={Using panoramic videos for multi-person localization and tracking in a 3D panoramic coordinate},
author={Fan Yang and Feiran Li and Yang Wu and Sakriani Sakti and Satoshi Nakamura},
booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
year={2020}
}