DFM: A Performance Baseline for Deep Feature Matching

Python (PyTorch) and MATLAB (MatConvNet) implementations of our paper DFM: A Performance Baseline for Deep Feature Matching at the CVPR 2021 Image Matching Workshop.

Paper (CVF) | Paper (arXiv)
Presentation (live) | Presentation (recording)

Overview

Environment Setup

We strongly recommend using Anaconda. Open a terminal in the ./python folder and run the following lines to create and activate the environment:

conda env create -f environment.yml
conda activate dfm
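
As a quick sanity check that the environment is active, you can print the installed versions (a minimal one-liner; torch and torchvision are the module names of the PyTorch packages listed below):

python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"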

Dependencies
If you do not use conda, DFM requires the following dependencies
(versions are not strict; however, we have tested DFM with these specific versions):

  • python=3.7.1
  • pytorch=1.7.1
  • torchvision=0.8.2
  • cudatoolkit=11.0
  • matplotlib=3.3.4
  • pillow=8.2.0
  • opencv=3.4.2
  • ipykernel=5.3.4
  • pyyaml=5.4.1
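
If you prefer pip over conda, an approximate (untested) equivalent is sketched below. Package names are the usual PyPI names; note that OpenCV is packaged as opencv-python on PyPI, and a CUDA 11.0 build of PyTorch 1.7.1 should be installed following the official PyTorch instructions:

pip install torch==1.7.1 torchvision==0.8.2 matplotlib==3.3.4 pillow==8.2.0 opencv-python ipykernel==5.3.4 pyyaml==5.4.1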

Enjoy with DFM!

Now you are ready to test DFM with the following command:

python dfm.py --input_pairs image_pairs.txt

Format the image_pairs.txt file as follows:

<path_of_image1A> <path_of_image1B>
<path_of_image2A> <path_of_image2B>
.
.
.
<path_of_imagenA> <path_of_imagenB>
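
For example, a file listing two hypothetical pairs might look like this:

data/pair1/imageA.jpg data/pair1/imageB.jpg
data/pair2/imageA.jpg data/pair2/imageB.jpg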

If you want to run DFM with a specific configuration, you can change the following arguments in config.yml (a sample file with the defaults follows this list):

  • Use enable_two_stage to enable or disable the two-stage approach (default: True).
    (Note: Enable it for planar scenes with significant viewpoint changes; otherwise disable it.)
  • Use model to change the pre-trained model (default: VGG19).
    (Note: DFM only supports VGG19 and VGG19_BN right now; we plan to add other backbones.)
  • Use ratio_th to change the ratio test thresholds (default: [0.9, 0.9, 0.9, 0.9, 0.95, 1.0]).
    (Note: The first five thresholds are for the 1st through 5th layers; the last (6th) threshold is for Stage-0 and is only used when enable_two_stage is True.)
  • Use bidirectional to enable or disable the bidirectional ratio test (default: True).
    (Note: Enable it to find more robust matches. It should normally stay enabled; set it to False only to get results comparable to our Matlab implementation, since Matlab's matchFeatures function does not perform the ratio test bidirectionally.)
  • Use display_results to enable or disable displaying results (default: True).
    (Note: If True, DFM saves the matched image pairs to output_directory.)
  • Use output_directory to define the output directory (default: 'results').
    (Note: An imageA_imageB_matches.npz file will be created in output_directory for each image pair.)
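
For reference, a minimal config.yml reflecting the defaults above might look like the following (the exact key layout is an assumption based on the option names; consult the config.yml shipped with the repository):

enable_two_stage: True
model: VGG19
ratio_th: [0.9, 0.9, 0.9, 0.9, 0.95, 1.0]
bidirectional: True
display_results: True
output_directory: 'results'

To inspect a saved .npz file without assuming its key names, you can list them with NumPy (the file name below is hypothetical):

import numpy as np

data = np.load('results/imageA_imageB_matches.npz')
print(data.files)                  # names of the stored arrays
for key in data.files:
    print(key, data[key].shape)    # shape of each stored array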

Evaluation

You can use our Image Matching Evaluation (IME) repository, which supports evaluating DFM and eight additional algorithms (SIFT, SURF, ORB, KAZE, AKAZE, SuperPoint, SuperGlue, and Patch2Pix) on the HPatches dataset. You can also use our Matlab implementation (see the For Matlab Users section) to reproduce the results presented in the paper.

Notice

To reproduce the results reported in the paper, use our Matlab implementation.
The Python implementation yields more accurate results (but with fewer features), mainly because MATLAB's matchFeatures function does not perform the ratio test bidirectionally, whereas our Python implementation does. Nevertheless, we made bidirectionality adjustable in our Python implementation as well.

For Matlab Users

We have implemented and tested DFM on MATLAB R2017b.

Prerequisites

You need to install MatConvNet (we support matconvnet-1.0-beta24). Follow the instructions on the official website.

Once you have finished installing MatConvNet, download the pretrained VGG-19 network to the ./matlab/models folder.

Running DFM

Now, you are ready to try DFM!

Just open and run main_DFM.m with your own images.

Evaluation on HPatches

Download the HPatches sequences and extract them to the ./matlab/data folder.

Run main_hpatches.m, which is in the ./matlab/HPatches Evaluation folder.

A results.txt file will be generated in the ./matlab/results/HPatches folder (a short parsing sketch follows the column list below).

  • Column 1 contains the pair names.
  • Columns 2-11 contain the Mean Matching Accuracy (MMA) results for 1-10 pixel thresholds.
  • Column 12 contains the number of matched features.
  • Columns 13-17 contain the best homography estimation results (denoted as boe in the paper).
  • Columns 18-22 contain the worst homography estimation results (denoted as woe in the paper).
  • Columns 23-72 contain the results of the 10 individual homography estimation tests.
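
Since results.txt is plain text, it is easy to load in Python. A minimal parsing sketch, assuming whitespace-separated columns with the pair name first (the file path is hypothetical):

import numpy as np

rows = []
with open('matlab/results/HPatches/results.txt') as f:
    for line in f:
        fields = line.split()
        if not fields:
            continue                                # skip blank lines
        pair = fields[0]                            # column 1: pair name
        values = np.array(fields[1:], dtype=float)  # remaining numeric columns
        mma = values[0:10]                          # columns 2-11: MMA at 1-10 px
        num_matches = values[10]                    # column 12: matched features
        rows.append((pair, mma, num_matches))

print(len(rows), 'pairs parsed')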

BibTeX Citation

Please cite our paper if you use the code:

@InProceedings{Efe_2021_CVPR,
    author    = {Efe, Ufuk and Ince, Kutalmis Gokalp and Alatan, Aydin},
    title     = {DFM: A Performance Baseline for Deep Feature Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4284-4293}
}
