Commit dfffb99

MM GroundingDINO: A replicable and more comprehensive GroundingDINO (#11295)
Authored by hhaAndroid and Cycyes
Co-authored-by: Cycyes <[email protected]>
1 parent ee2e542 commit dfffb99

File tree: 89 files changed, +8953 −187 lines

configs/glip/README.md (+71 lines)
@@ -99,3 +99,74 @@ Note:

1. The above are zero-shot evaluation results.
2. The evaluation metric we used is LVIS FixAP. For specific details, please refer to [Evaluating Large-Vocabulary Object Detectors: The Devil is in the Details](https://arxiv.org/pdf/2102.01066.pdf).
3. We found that the performance of the small models is better than the official results, but lower for the large models. This is mainly due to incomplete alignment with the official GLIP post-processing.

## ODinW (Object Detection in the Wild) Results

Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains challenging to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. To tackle this, we build ELEVATER, the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models. ELEVATER is composed of three components. (i) Datasets. As downstream evaluation suites, it consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to facilitate model evaluation on downstream tasks. (iii) Metrics. A variety of evaluation metrics are used to measure sample-efficiency (zero-shot and few-shot) and parameter-efficiency (linear probing and full model fine-tuning). ELEVATER is a platform for Computer Vision in the Wild (CVinW), and is publicly released at https://computer-vision-in-the-wild.github.io/ELEVATER/.
### Results and models of ODinW13

| Method | GLIP-T(A) | Official | GLIP-T(B) | Official | GLIP-T(C) | Official | GroundingDINO-T | GroundingDINO-B |
| --------------------- | --------- | --------- | --------- | --------- | --------- | --------- | --------------- | --------------- |
| AerialMaritimeDrone | 0.123 | 0.122 | 0.110 | 0.110 | 0.130 | 0.130 | 0.173 | 0.281 |
| Aquarium | 0.175 | 0.174 | 0.173 | 0.169 | 0.191 | 0.190 | 0.195 | 0.445 |
| CottontailRabbits | 0.686 | 0.686 | 0.688 | 0.688 | 0.744 | 0.744 | 0.799 | 0.808 |
| EgoHands | 0.013 | 0.013 | 0.003 | 0.004 | 0.314 | 0.315 | 0.608 | 0.764 |
| NorthAmericaMushrooms | 0.502 | 0.502 | 0.367 | 0.367 | 0.297 | 0.296 | 0.507 | 0.675 |
| Packages | 0.589 | 0.589 | 0.083 | 0.083 | 0.699 | 0.699 | 0.687 | 0.670 |
| PascalVOC | 0.512 | 0.512 | 0.541 | 0.540 | 0.565 | 0.565 | 0.563 | 0.711 |
| pistols | 0.339 | 0.339 | 0.502 | 0.501 | 0.503 | 0.504 | 0.726 | 0.771 |
| pothole | 0.007 | 0.007 | 0.030 | 0.030 | 0.058 | 0.058 | 0.215 | 0.478 |
| Raccoon | 0.075 | 0.074 | 0.285 | 0.288 | 0.241 | 0.244 | 0.549 | 0.541 |
| ShellfishOpenImages | 0.253 | 0.253 | 0.337 | 0.338 | 0.300 | 0.302 | 0.393 | 0.650 |
| thermalDogsAndPeople | 0.372 | 0.372 | 0.475 | 0.475 | 0.510 | 0.510 | 0.657 | 0.633 |
| VehiclesOpenImages | 0.574 | 0.566 | 0.562 | 0.547 | 0.549 | 0.534 | 0.613 | 0.647 |
| Average | **0.325** | **0.324** | **0.320** | **0.318** | **0.392** | **0.392** | **0.514** | **0.621** |
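The Average row appears to be the unweighted mean of the per-dataset AP values in each column. A minimal sketch reproducing it for the GroundingDINO-T column (values copied from the table above):

```python
# Sketch: the "Average" row of the ODinW13 table, computed as the
# unweighted mean of the 13 per-dataset AP values (GroundingDINO-T column).
ap_groundingdino_t = [
    0.173, 0.195, 0.799, 0.608, 0.507, 0.687, 0.563,
    0.726, 0.215, 0.549, 0.393, 0.657, 0.613,
]
average = sum(ap_groundingdino_t) / len(ap_groundingdino_t)
print(round(average, 3))  # -> 0.514, matching the table
```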
### Results and models of ODinW35

| Method | GLIP-T(A) | Official | GLIP-T(B) | Official | GLIP-T(C) | Official | GroundingDINO-T | GroundingDINO-B |
| --------------------------- | --------- | --------- | --------- | --------- | --------- | --------- | --------------- | --------------- |
| AerialMaritimeDrone_large | 0.123 | 0.122 | 0.110 | 0.110 | 0.130 | 0.130 | 0.173 | 0.281 |
| AerialMaritimeDrone_tiled | 0.174 | 0.174 | 0.172 | 0.172 | 0.172 | 0.172 | 0.206 | 0.364 |
| AmericanSignLanguageLetters | 0.001 | 0.001 | 0.003 | 0.003 | 0.009 | 0.009 | 0.002 | 0.096 |
| Aquarium | 0.175 | 0.175 | 0.173 | 0.171 | 0.192 | 0.182 | 0.195 | 0.445 |
| BCCD | 0.016 | 0.016 | 0.001 | 0.001 | 0.000 | 0.000 | 0.161 | 0.584 |
| boggleBoards | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.134 |
| brackishUnderwater | 0.016 | 0.013 | 0.021 | 0.027 | 0.020 | 0.022 | 0.021 | 0.454 |
| ChessPieces | 0.001 | 0.001 | 0.000 | 0.000 | 0.001 | 0.001 | 0.000 | 0.000 |
| CottontailRabbits | 0.710 | 0.709 | 0.683 | 0.683 | 0.752 | 0.752 | 0.806 | 0.797 |
| dice | 0.005 | 0.005 | 0.004 | 0.004 | 0.004 | 0.004 | 0.004 | 0.082 |
| DroneControl | 0.016 | 0.017 | 0.006 | 0.008 | 0.005 | 0.007 | 0.042 | 0.638 |
| EgoHands_generic | 0.009 | 0.010 | 0.005 | 0.006 | 0.510 | 0.508 | 0.608 | 0.764 |
| EgoHands_specific | 0.001 | 0.001 | 0.004 | 0.006 | 0.003 | 0.004 | 0.002 | 0.687 |
| HardHatWorkers | 0.029 | 0.029 | 0.023 | 0.023 | 0.033 | 0.033 | 0.046 | 0.439 |
| MaskWearing | 0.007 | 0.007 | 0.003 | 0.002 | 0.005 | 0.005 | 0.004 | 0.406 |
| MountainDewCommercial | 0.218 | 0.227 | 0.199 | 0.197 | 0.478 | 0.463 | 0.430 | 0.580 |
| NorthAmericaMushrooms | 0.502 | 0.502 | 0.450 | 0.450 | 0.497 | 0.497 | 0.471 | 0.501 |
| openPoetryVision | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.051 |
| OxfordPets_by_breed | 0.001 | 0.002 | 0.002 | 0.004 | 0.001 | 0.002 | 0.003 | 0.799 |
| OxfordPets_by_species | 0.016 | 0.011 | 0.012 | 0.009 | 0.013 | 0.009 | 0.011 | 0.872 |
| PKLot | 0.002 | 0.002 | 0.000 | 0.000 | 0.000 | 0.000 | 0.001 | 0.774 |
| Packages | 0.569 | 0.569 | 0.279 | 0.279 | 0.712 | 0.712 | 0.695 | 0.728 |
| PascalVOC | 0.512 | 0.512 | 0.541 | 0.540 | 0.565 | 0.565 | 0.563 | 0.711 |
| pistols | 0.339 | 0.339 | 0.502 | 0.501 | 0.503 | 0.504 | 0.726 | 0.771 |
| plantdoc | 0.002 | 0.002 | 0.007 | 0.007 | 0.009 | 0.009 | 0.005 | 0.376 |
| pothole | 0.007 | 0.010 | 0.024 | 0.025 | 0.085 | 0.101 | 0.215 | 0.478 |
| Raccoons | 0.075 | 0.074 | 0.285 | 0.288 | 0.241 | 0.244 | 0.549 | 0.541 |
| selfdrivingCar | 0.071 | 0.072 | 0.074 | 0.074 | 0.081 | 0.080 | 0.089 | 0.318 |
| ShellfishOpenImages | 0.253 | 0.253 | 0.337 | 0.338 | 0.300 | 0.302 | 0.393 | 0.650 |
| ThermalCheetah | 0.028 | 0.028 | 0.000 | 0.000 | 0.028 | 0.028 | 0.087 | 0.290 |
| thermalDogsAndPeople | 0.372 | 0.372 | 0.475 | 0.475 | 0.510 | 0.510 | 0.657 | 0.633 |
| UnoCards | 0.000 | 0.000 | 0.000 | 0.001 | 0.002 | 0.003 | 0.006 | 0.754 |
| VehiclesOpenImages | 0.574 | 0.566 | 0.562 | 0.547 | 0.549 | 0.534 | 0.613 | 0.647 |
| WildfireSmoke | 0.000 | 0.000 | 0.000 | 0.000 | 0.017 | 0.017 | 0.134 | 0.410 |
| websiteScreenshots | 0.003 | 0.004 | 0.003 | 0.005 | 0.005 | 0.006 | 0.012 | 0.175 |
| Average | **0.134** | **0.134** | **0.138** | **0.138** | **0.179** | **0.178** | **0.227** | **0.492** |
### Results on Flickr30k

| Model | Official | Pre-Train Data | Val R@1 | Val R@5 | Val R@10 | Test R@1 | Test R@5 | Test R@10 |
| ------------- | -------- | -------------- | ------- | ------- | -------- | -------- | -------- | --------- |
| **GLIP-T(C)** | | O365, GoldG | 84.8 | 94.9 | 96.3 | 85.5 | 95.4 | 96.6 |
| **GLIP-T(C)** | | O365, GoldG | 84.9 | 94.9 | 96.3 | 85.6 | 95.4 | 96.7 |
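The R@K numbers above presumably follow the standard Flickr30k phrase-grounding protocol, where a phrase counts as recalled if any of its top-K predicted boxes matches a ground-truth box at IoU ≥ 0.5. A minimal sketch under that assumption (`iou` and `recall_at_k` are illustrative helpers, not the repo's actual metric code):

```python
# Hedged sketch of Recall@K for phrase grounding: a phrase is recalled if
# any of its top-K predicted boxes has IoU >= 0.5 with a ground-truth box.
# Boxes are (x1, y1, x2, y2) tuples.
def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(preds_per_phrase, gts_per_phrase, k, thr=0.5):
    # preds_per_phrase[i]: ranked predicted boxes for phrase i;
    # gts_per_phrase[i]: ground-truth boxes for phrase i.
    hits = sum(
        any(iou(p, g) >= thr for p in preds[:k] for g in gts)
        for preds, gts in zip(preds_per_phrase, gts_per_phrase))
    return hits / len(preds_per_phrase)
```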
A new config file (+61 lines):

```python
_base_ = '../glip_atss_swin-t_a_fpn_dyhead_pretrain_obj365.py'

lang_model_name = 'bert-base-uncased'

model = dict(bbox_head=dict(early_fuse=True), )

dataset_type = 'Flickr30kDataset'
data_root = 'data/flickr30k/'

test_pipeline = [
    dict(
        type='LoadImageFromFile', backend_args=None,
        imdecode_backend='pillow'),
    dict(
        type='FixScaleResize',
        scale=(800, 1333),
        keep_ratio=True,
        backend='pillow'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'text', 'custom_entities',
                   'tokens_positive', 'phrase_ids', 'phrases'))
]

dataset_Flickr30k_val = dict(
    type=dataset_type,
    data_root=data_root,
    ann_file='mdetr_annotations/final_flickr_separateGT_val.json',
    data_prefix=dict(img='flickr30k_images/'),
    pipeline=test_pipeline,
)

dataset_Flickr30k_test = dict(
    type=dataset_type,
    data_root=data_root,
    ann_file='mdetr_annotations/final_flickr_separateGT_test.json',
    data_prefix=dict(img='flickr30k_images/'),
    pipeline=test_pipeline,
)

val_evaluator_Flickr30k = dict(type='Flickr30kMetric', )

test_evaluator_Flickr30k = dict(type='Flickr30kMetric', )

# ----------Config---------- #
dataset_prefixes = ['Flickr30kVal', 'Flickr30kTest']
datasets = [dataset_Flickr30k_val, dataset_Flickr30k_test]
metrics = [val_evaluator_Flickr30k, test_evaluator_Flickr30k]

val_dataloader = dict(
    dataset=dict(_delete_=True, type='ConcatDataset', datasets=datasets))
test_dataloader = val_dataloader

val_evaluator = dict(
    _delete_=True,
    type='MultiDatasetsEvaluator',
    metrics=metrics,
    dataset_prefixes=dataset_prefixes)
test_evaluator = val_evaluator
```
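The config pairs each entry of `metrics` with the entry of `dataset_prefixes` at the same index, so the val and test splits of the concatenated dataset report under distinct keys. A hypothetical sketch of that pairing (`prefix_results` is illustrative only, not an mmengine API):

```python
# Hypothetical sketch (not mmengine code): how MultiDatasetsEvaluator-style
# evaluation conceptually attaches a dataset prefix to each metric's
# results, so the two Flickr30k splits stay distinguishable.
def prefix_results(dataset_prefixes, per_dataset_results):
    merged = {}
    for prefix, results in zip(dataset_prefixes, per_dataset_results):
        for name, value in results.items():
            merged[f'{prefix}/{name}'] = value
    return merged

prefix_results(['Flickr30kVal', 'Flickr30kTest'],
               [{'R@1': 84.8}, {'R@1': 85.5}])
```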

configs/odinw/glip_atss_swin-t_a_fpn_dyhead_pretrain_odinw13.py renamed to configs/glip/odinw/glip_atss_swin-t_a_fpn_dyhead_pretrain_odinw13.py (+1 −1)

```diff
@@ -1,4 +1,4 @@
-_base_ = '../glip/glip_atss_swin-t_a_fpn_dyhead_pretrain_obj365.py'
+_base_ = '../glip_atss_swin-t_a_fpn_dyhead_pretrain_obj365.py'

 dataset_type = 'CocoDataset'
 data_root = 'data/odinw/'
```

configs/odinw/glip_atss_swin-t_a_fpn_dyhead_pretrain_odinw35.py renamed to configs/glip/odinw/glip_atss_swin-t_a_fpn_dyhead_pretrain_odinw35.py (+2 −2)

```diff
@@ -1,4 +1,4 @@
-_base_ = '../glip/glip_atss_swin-t_a_fpn_dyhead_pretrain_obj365.py'
+_base_ = '../glip_atss_swin-t_a_fpn_dyhead_pretrain_obj365.py'

 dataset_type = 'CocoDataset'
 data_root = 'data/odinw/'
@@ -518,7 +518,7 @@
 caption_prompt = {
     'pothole': {
         'name': 'holes',
-        'prefix': 'there are some',
+        'prefix': 'there are some ',
         'suffix': ' on the road'
     }
 }
```

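The odinw35 hunk above adds a trailing space to `prefix`, which matters if the caption is built by plain string concatenation of prefix, class name, and suffix. A minimal sketch of why (`build_caption` is an illustrative helper, not the repo's actual prompt code):

```python
# Illustrative only: the repo's real prompt-building logic may differ.
def build_caption(name, prefix='', suffix=''):
    return f'{prefix}{name}{suffix}'

build_caption('holes', prefix='there are some', suffix=' on the road')
# -> 'there are someholes on the road' (words run together)
build_caption('holes', prefix='there are some ', suffix=' on the road')
# -> 'there are some holes on the road'
```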