Commit 2acf9cf

[MOT] add centertrack and refine centernet (#7510)

* add centertrack base codes
* fix deploy and docs
* fix tracker
* fix
* fix pre_img
* fix deploy
* fix

1 parent 3f7e70d

40 files changed: +2372 additions, -155 deletions

README_cn.md (6 additions, 5 deletions)

```diff
@@ -253,10 +253,10 @@
 <td>
 <ul>
 <li><a href="ppdet/modeling/losses/smooth_l1_loss.py">Smooth-L1</a></li>
-<li><a href="ppdet/modeling/losses/detr_loss.py">Detr Loss</a></li>
+<li><a href="ppdet/modeling/losses/detr_loss.py">Detr Loss</a></li>
 <li><a href="ppdet/modeling/losses/fairmot_loss.py">Fairmot Loss</a></li>
 <li><a href="ppdet/modeling/losses/fcos_loss.py">Fcos Loss</a></li>
-<li><a href="ppdet/modeling/losses/gfocal_loss.py">GFocal Loss</a></li>
+<li><a href="ppdet/modeling/losses/gfocal_loss.py">GFocal Loss</a></li>
 <li><a href="ppdet/modeling/losses/jde_loss.py">JDE Loss</a></li>
 <li><a href="ppdet/modeling/losses/keypoint_loss.py">KeyPoint Loss</a></li>
 <li><a href="ppdet/modeling/losses/solov2_loss.py">SoloV2 Loss</a></li>
@@ -288,7 +288,7 @@
 </ul>
 <li><b>Common</b></li>
 <ul>
-<ul>
+<ul>
 <li><a href="ppdet/modeling/backbones/resnet.py#L41">Sync-BN</a></li>
 <li><a href="configs/gn/README.md">Group Norm</a></li>
 <li><a href="configs/dcn/README.md">DCNv2</a></li>
@@ -350,7 +350,7 @@
 <li><a href="https://github.com/PaddlePaddle/PaddleYOLO">YOLOX</a></li>
 <li><a href="https://github.com/PaddlePaddle/PaddleYOLO">YOLOv6</a></li>
 <li><a href="https://github.com/PaddlePaddle/PaddleYOLO">YOLOv7</a></li>
-<li><a href="https://github.com/PaddlePaddle/PaddleYOLO">RTMDet</a></li>
+<li><a href="https://github.com/PaddlePaddle/PaddleYOLO">RTMDet</a></li>
 <li><a href="configs/ppyolo/README_cn.md">PP-YOLO</a></li>
 <li><a href="configs/ppyolo#pp-yolo-tiny">PP-YOLO-Tiny</a></li>
 <li><a href="configs/picodet">PP-PicoDet</a></li>
@@ -378,6 +378,7 @@
 <li><a href="configs/mot/deepsort">DeepSORT</a></li>
 <li><a href="configs/mot/bytetrack">ByteTrack</a></li>
 <li><a href="configs/mot/ocsort">OC-SORT</a></li>
+<li><a href="configs/mot/centertrack">CenterTrack</a></li>
 </ul>
 </td>
 <td>
@@ -799,4 +800,4 @@
 @misc{ppdet2019,
 title={PaddleDetection, Object detection and instance segmentation toolkit based on PaddlePaddle.},
 author={PaddlePaddle Authors},
-howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
+howpublished = {\url{https://github.com/PaddlePaddle/PaddleDetection}},
```

README_en.md (1 addition, 0 deletions)

```diff
@@ -156,6 +156,7 @@
 <li>DeepSORT</li>
 <li>ByteTrack</li>
 <li>OC-SORT</li>
+<li>CenterTrack</li>
 </ul></details>
 <details><summary><b>KeyPoint-Detection</b></summary>
 <ul>
```

configs/mot/README.md (25 additions, 2 deletions)

````diff
@@ -24,14 +24,15 @@
 - [ByteTrack](./bytetrack)
 - [OC-SORT](./ocsort)
 - [DeepSORT](./deepsort)
+- [CenterTrack](./centertrack)
 - JDE
 - [JDE](./jde)
 - [FairMOT](./fairmot)
 - [MCFairMOT](./mcfairmot)
 
 **Note:**
 - The original papers of all the algorithms above address single-class multi-object tracking; the PaddleDetection team also supports multi-class multi-object tracking for [ByteTrack](./bytetrack) and FairMOT ([MCFairMOT](./mcfairmot));
-- [DeepSORT](./deepsort) and [JDE](./jde) support single-class multi-object tracking only;
+- [DeepSORT](./deepsort), [JDE](./jde) and [CenterTrack](./centertrack) support single-class multi-object tracking only;
 - [DeepSORT](./deepsort) must be run together with additional ReID weights; for [ByteTrack](./bytetrack) ReID weights are optional and disabled by default;
@@ -96,6 +97,7 @@ pip install lap motmetrics sklearn filterpy
 - [DeepSORT](deepsort/README_cn.md)
 - [JDE](jde/README_cn.md)
 - [FairMOT](fairmot/README_cn.md)
+- [CenterTrack](centertrack/README_cn.md)
 - Featured vertical models
 - [Pedestrian tracking](pedestrian/README_cn.md)
 - [Head tracking](headtracking21/README_cn.md)
@@ -111,7 +113,7 @@
 
 | MOT approach | Classic algorithms | Pipeline | Dataset requirements | Other characteristics |
 | :--------------| :--------------| :------- | :----: | :----: |
-| SDE series | DeepSORT, ByteTrack, OC-SORT | Separated: two independent model weights, detection first then ReID; ReID is optional | Detection and ReID data are largely independent; without ReID only a detection dataset is needed | Detection and ReID can be tuned separately; more robust; common in AI competitions |
+| SDE series | DeepSORT, ByteTrack, OC-SORT, CenterTrack | Separated: two independent model weights, detection first then ReID; ReID is optional | Detection and ReID data are largely independent; without ReID only a detection dataset is needed | Detection and ReID can be tuned separately; more robust; common in AI competitions |
 | JDE series | FairMOT, JDE | Joint: a single model weight performs detection and ReID end to end | Requires both detection and ReID annotations | Detection and ReID are trained jointly; harder to tune; weaker generalization |
 
 **Note:**
@@ -266,4 +268,25 @@ MOT17
 journal={arXiv preprint arXiv:2004.01888},
 year={2020}
 }
+
+@article{zhang2021bytetrack,
+title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
+author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
+journal={arXiv preprint arXiv:2110.06864},
+year={2021}
+}
+
+@article{cao2022observation,
+title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
+author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
+journal={arXiv preprint arXiv:2203.14360},
+year={2022}
+}
+
+@article{zhou2020tracking,
+title={Tracking Objects as Points},
+author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
+journal={ECCV},
+year={2020}
+}
 ```
````

configs/mot/README_en.md (22 additions, 0 deletions)

````diff
@@ -64,6 +64,7 @@ pip install -r requirements.txt
 - [DeepSORT](deepsort/README.md)
 - [JDE](jde/README.md)
 - [FairMOT](fairmot/README.md)
+- [CenterTrack](centertrack/README.md)
 - Feature models
 - [Pedestrian](pedestrian/README.md)
 - [Head](headtracking21/README.md)
@@ -184,4 +185,25 @@
 journal={arXiv preprint arXiv:2004.01888},
 year={2020}
 }
+
+@article{zhang2021bytetrack,
+title={ByteTrack: Multi-Object Tracking by Associating Every Detection Box},
+author={Zhang, Yifu and Sun, Peize and Jiang, Yi and Yu, Dongdong and Yuan, Zehuan and Luo, Ping and Liu, Wenyu and Wang, Xinggang},
+journal={arXiv preprint arXiv:2110.06864},
+year={2021}
+}
+
+@article{cao2022observation,
+title={Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking},
+author={Cao, Jinkun and Weng, Xinshuo and Khirodkar, Rawal and Pang, Jiangmiao and Kitani, Kris},
+journal={arXiv preprint arXiv:2203.14360},
+year={2022}
+}
+
+@article{zhou2020tracking,
+title={Tracking Objects as Points},
+author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
+journal={ECCV},
+year={2020}
+}
 ```
````

configs/mot/centertrack/README.md (1 addition, 0 deletions)

```diff
@@ -0,0 +1 @@
+README_cn.md
```

configs/mot/centertrack/README_cn.md (new file, 156 lines)

Simplified Chinese | [English](README.md)

# CenterTrack (Tracking Objects as Points)

## Contents
- [Model Zoo](#model-zoo)
- [Getting Started](#getting-started)
- [Citations](#citations)

## Model Zoo

### MOT17

| Training dataset | Input size | Total batch_size | val MOTA | test MOTA | FPS | Config | Download |
| :---------------: | :-------: | :------------: | :----------------: | :---------: | :-------: | :----: | :-----: |
| MOT17-half train | 544x960 | 32 | 69.2 (MOT17-half) | - | - | [config](./centertrack_dla34_70e_mot17half.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/centertrack_dla34_70e_mot17half.pdparams) |
| MOT17 train | 544x960 | 32 | 87.9 (MOT17-train) | 70.5 (MOT17-test) | - | [config](./centertrack_dla34_70e_mot17.yml) | [download](https://paddledet.bj.bcebos.com/models/mot/centertrack_dla34_70e_mot17.pdparams) |
| MOT17 train (paper) | 544x960 | 32 | - | 67.8 (MOT17-test) | - | - | - |

**Notes:**
- CenterTrack is trained with 2 GPUs and a total batch_size of 32 by default. If you change the number of GPUs or the per-GPU batch size, keep the total batch_size at 32.
- **val MOTA** may fluctuate by about 1.0 MOTA; training with the default 2 GPUs and total batch_size of 32 is recommended.
- **MOT17-half train** consists of the images and annotations from the **first half of the frames** of each of the 7 MOT17 train sequences; **MOT17-half val**, built from the second half of each video, serves as the validation set for **val MOTA**. The dataset can be downloaded from [this link](https://bj.bcebos.com/v1/paddledet/data/mot/MOT17.zip) and extracted into `dataset/mot/`.
- **MOT17 train** uses all frames of the 7 MOT17 train sequences as the training set. Since MOT17 data is limited, **MOT17 train** is also used for evaluation to obtain **val MOTA**, while **test MOTA** is the result submitted to the [MOT Challenge website](https://motchallenge.net).
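The MOTA values in the table follow the standard CLEAR MOT definition: MOTA = 1 - (FN + FP + IDSW) / GT, accumulated over all frames. A minimal sketch, not part of the repo, with hypothetical counts:

```python
def mota(fn: int, fp: int, idsw: int, gt: int) -> float:
    """CLEAR MOT accuracy: 1 - (misses + false positives + ID switches) / GT boxes."""
    return 1.0 - (fn + fp + idsw) / gt

# hypothetical counts over a validation split
print(round(mota(fn=2500, fp=1200, idsw=90, gt=17757), 3))  # 0.787
```

Note that MOTA can go negative when errors outnumber ground-truth boxes, which is why large fluctuations on small validation splits (such as MOT17-half) are expected.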
## Getting Started

### 1. Training
Start training and evaluation with the following commands:
```bash
# single-GPU training (not recommended)
CUDA_VISIBLE_DEVICES=0 python tools/train.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml --amp
# multi-GPU training
python -m paddle.distributed.launch --log_dir=centertrack_dla34_70e_mot17half/ --gpus 0,1 tools/train.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml --amp
```
**Notes:**
- `--eval` does not yet support validating tracking MOTA during training. To enable `--eval` for validating detection mAP during training, first **comment out `mot_metric: True` and `metric: MOT` in the config file**;
- `--amp` enables mixed-precision training to avoid running out of GPU memory;
- CenterTrack is trained with 2 GPUs and a total batch_size of 32 by default; if you change the number of GPUs or the per-GPU batch size, keep the total batch_size at 32;
### 2. Evaluation

#### 2.1 Evaluating detection

First **comment out `mot_metric: True` and `metric: MOT` in the config file**:
```yaml
### for detection eval.py/infer.py
mot_metric: False
metric: COCO

### for MOT eval_mot.py/infer_mot.py
#mot_metric: True # uncommented by default; must be True for tracking evaluation, overriding the earlier mot_metric: False
#metric: MOT # uncommented by default; must be MOT for tracking evaluation, overriding the earlier metric: COCO
```
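The "overrides the earlier value" behavior relied on above comes from how duplicate keys are handled when a config is read: most practical YAML loaders keep the last occurrence of a repeated key. A toy illustration (this is not PaddleDetection's actual config loader, just a flat `key: value` reader):

```python
def load_flat_config(text: str) -> dict:
    """Toy flat `key: value` reader; like most practical YAML loaders,
    a later duplicate key overwrites an earlier one."""
    cfg = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg

snippet = """\
mot_metric: False
metric: COCO
mot_metric: True   # uncommented: overrides the earlier False
metric: MOT
"""
print(load_flat_config(snippet))  # {'mot_metric': 'True', 'metric': 'MOT'}
```

So leaving the tracking lines uncommented switches the config to MOT mode without deleting the detection settings above them.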

Then run:
```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml -o weights=output/centertrack_dla34_70e_mot17half/model_final.pdparams
```

**Notes:**
- Detection is evaluated with `tools/eval.py`; tracking is evaluated with `tools/eval_mot.py`.

#### 2.2 Evaluating tracking

First make sure **`mot_metric: True` and `metric: MOT` are set in the config file**, then run:

```bash
CUDA_VISIBLE_DEVICES=0 python tools/eval_mot.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml -o weights=output/centertrack_dla34_70e_mot17half/model_final.pdparams
```
**Notes:**
- Detection is evaluated with `tools/eval.py`; tracking is evaluated with `tools/eval_mot.py`.
- Tracking results are saved in `{output_dir}/mot_results/`, one txt file per video sequence. Each line of a txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`. `{output_dir}` can be set with `--output_dir` and defaults to `output`.

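The result lines described above follow the MOT Challenge layout `frame,id,x1,y1,w,h,score,-1,-1,-1`. A minimal parser sketch (the helper names are ours, not part of the repo):

```python
from typing import NamedTuple

class TrackBox(NamedTuple):
    frame: int
    track_id: int
    x1: float
    y1: float
    w: float
    h: float
    score: float

def parse_mot_line(line: str) -> TrackBox:
    """Parse one `frame,id,x1,y1,w,h,score,-1,-1,-1` tracking-result line,
    ignoring the trailing -1 placeholder fields."""
    frame, tid, x1, y1, w, h, score, *_ = line.strip().split(",")
    return TrackBox(int(frame), int(tid), float(x1), float(y1),
                    float(w), float(h), float(score))

box = parse_mot_line("1,3,605.0,167.0,40.5,121.7,0.93,-1,-1,-1")
print(box.frame, box.track_id, box.score)  # 1 3 0.93
```

The box is given as top-left corner plus width and height, so `(x1 + w, y1 + h)` recovers the bottom-right corner.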
### 3. Inference

#### 3.1 Detection inference
First **comment out `mot_metric: True` and `metric: MOT` in the config file**:
```yaml
### for detection eval.py/infer.py
mot_metric: False
metric: COCO

### for MOT eval_mot.py/infer_mot.py
#mot_metric: True # uncommented by default; must be True for tracking evaluation, overriding the earlier mot_metric: False
#metric: MOT # uncommented by default; must be MOT for tracking evaluation, overriding the earlier metric: COCO
```

Then run:
```bash
CUDA_VISIBLE_DEVICES=0 python tools/infer.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml -o weights=output/centertrack_dla34_70e_mot17half/model_final.pdparams --infer_img=demo/000000014439_640x640.jpg --draw_threshold=0.5
```

**Notes:**
- Detection inference uses `tools/infer.py`; tracking inference uses `tools/infer_mot.py`.


#### 3.2 Tracking inference

First make sure **`mot_metric: True` and `metric: MOT` are set in the config file**, then run:
```bash
# download the demo video
wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
# run on a video
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml --video_file=mot17_demo.mp4 --draw_threshold=0.5 --save_videos -o weights=output/centertrack_dla34_70e_mot17half/model_final.pdparams
# or run on a folder of images
CUDA_VISIBLE_DEVICES=0 python tools/infer_mot.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml --image_dir=mot17_demo/ --draw_threshold=0.5 --save_videos -o weights=output/centertrack_dla34_70e_mot17half/model_final.pdparams
```

**Notes:**
- Make sure [ffmpeg](https://ffmpeg.org/ffmpeg.html) is installed first. On Linux (Ubuntu) it can be installed with: `apt-get update && apt-get install -y ffmpeg`.
- `--save_videos` saves the visualized video; visualized images are also saved in `{output_dir}/mot_outputs/`. `{output_dir}` can be set with `--output_dir` and defaults to `output`.

### 4. Exporting the inference model

First make sure **`mot_metric: True` and `metric: MOT` are set in the config file**:

```bash
CUDA_VISIBLE_DEVICES=0 python tools/export_model.py -c configs/mot/centertrack/centertrack_dla34_70e_mot17half.yml -o weights=https://paddledet.bj.bcebos.com/models/mot/centertrack_dla34_70e_mot17half.pdparams
```

### 5. Python inference with the exported model

First set `type: CenterTracker` in `deploy/python/tracker_config.yml`:

```bash
# run on a video
# wget https://bj.bcebos.com/v1/paddledet/data/mot/demo/mot17_demo.mp4
python deploy/python/mot_centertrack_infer.py --model_dir=output_inference/centertrack_dla34_70e_mot17half/ --tracker_config=deploy/python/tracker_config.yml --video_file=mot17_demo.mp4 --device=GPU --save_images=True --save_mot_txts
# run on a folder of images
python deploy/python/mot_centertrack_infer.py --model_dir=output_inference/centertrack_dla34_70e_mot17half/ --tracker_config=deploy/python/tracker_config.yml --image_dir=mot17_demo/ --device=GPU --save_images=True --save_mot_txts
```

**Notes:**
- The tracking model is intended for video inference, not single images. By default the visualized tracking result is saved as a video. Add `--save_mot_txts` (one txt per video) or `--save_mot_txt_per_img` (one txt per image) to save tracking results as txt files, or `--save_images` to save visualized images.
- Each line of a tracking result txt file is `frame,id,x1,y1,w,h,score,-1,-1,-1`.

## Citations
```
@article{zhou2020tracking,
  title={Tracking Objects as Points},
  author={Zhou, Xingyi and Koltun, Vladlen and Kr{\"a}henb{\"u}hl, Philipp},
  journal={ECCV},
  year={2020}
}
```
Lines changed: 57 additions & 0 deletions

```yaml
pretrain_weights: https://bj.bcebos.com/v1/paddledet/models/pretrained/crowdhuman_centertrack.pdparams
architecture: CenterTrack
for_mot: True
mot_metric: True

### model
CenterTrack:
  detector: CenterNet
  plugin_head: CenterTrackHead
  tracker: CenterTracker


### CenterTrack.detector
CenterNet:
  backbone: DLA
  neck: CenterNetDLAFPN
  head: CenterNetHead
  post_process: CenterNetPostProcess
  for_mot: True # Note

DLA:
  depth: 34
  pre_img: True # Note
  pre_hm: True # Note

CenterNetDLAFPN:
  down_ratio: 4
  last_level: 5
  out_channel: 0
  dcn_v2: True

CenterNetHead:
  head_planes: 256
  prior_bias: -4.6 # Note
  regress_ltrb: False
  size_loss: 'L1'
  loss_weight: {'heatmap': 1.0, 'size': 0.1, 'offset': 1.0}

CenterNetPostProcess:
  max_per_img: 100 # top-K
  regress_ltrb: False


### CenterTrack.plugin_head
CenterTrackHead:
  head_planes: 256
  task: tracking
  loss_weight: {'tracking': 1.0, 'ltrb_amodal': 0.1}
  add_ltrb_amodal: True


### CenterTrack.tracker
CenterTracker:
  min_box_area: -1
  vertical_ratio: -1
  track_thresh: 0.4
  pre_thresh: 0.5
```
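In the `CenterTracker` settings above, `track_thresh` filters low-score detections before association and `pre_thresh` filters the previous frame's detections. Per the CenterTrack paper, association then greedily links each detection center, displaced by the predicted tracking offset, to the closest unmatched prior track center. A simplified sketch of such greedy distance matching; the helper name and the closest-pairs-first ordering are ours, not PaddleDetection's exact implementation:

```python
import math

def greedy_match(det_centers, prev_centers, max_dist):
    """Greedily pair detection centers with previous-track centers,
    closest pairs first, within max_dist; returns (det_idx, prev_idx) pairs."""
    pairs = sorted(
        (math.dist(d, p), i, j)
        for i, d in enumerate(det_centers)
        for j, p in enumerate(prev_centers)
    )
    used_d, used_p, matches = set(), set(), []
    for dist, i, j in pairs:
        if dist > max_dist or i in used_d or j in used_p:
            continue
        used_d.add(i)
        used_p.add(j)
        matches.append((i, j))
    return matches

# two detections, two prior tracks; each finds its nearby partner
dets = [(10.0, 10.0), (50.0, 50.0)]
prev = [(52.0, 49.0), (11.0, 12.0)]
print(greedy_match(dets, prev, max_dist=20.0))  # [(0, 1), (1, 0)]
```

Unmatched detections above `track_thresh` would start new tracks, and unmatched prior tracks would age out; that bookkeeping is omitted here.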
