DKM - Deep Kernelized Dense Geometric Matching

Contains code for Deep Kernelized Dense Geometric Matching

We provide pretrained models and code for evaluation and for running on your own images. We do not currently provide code for training models, but you can essentially copy-paste the model code into your own training framework and run it.

Note that the performance of the current models is better than reported in the preprint. This is due to continued development since submission.

Install

Run pip install -e .

Using a (Pretrained) Model

Models can be imported by:

from dkm import dkm_base
model = dkm_base(pretrained=True, version="v11")

This creates the model and loads the pretrained weights.

Running on your own images

from dkm import dkm_base
from PIL import Image
model = dkm_base(pretrained=True, version="v11")
im1, im2 = Image.open("im1.jpg"), Image.open("im2.jpg")
# Note that matches are produced in the normalized grid [-1, 1] x [-1, 1] 
dense_matches, dense_certainty = model.match(im1, im2)
# You may want to process these, e.g. we found dense_certainty = dense_certainty.sqrt() to work quite well in some cases.
# Sample 10000 sparse matches
sparse_matches, sparse_certainty = model.sample(dense_matches, dense_certainty, 10000)
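
Since the matches live in the normalized grid [-1, 1] x [-1, 1], you will typically want to convert them to pixel coordinates before using them with standard geometry libraries. A minimal conversion sketch, assuming sparse_matches is an (N, 4) array of (x1, y1, x2, y2) pairs in normalized coordinates (this layout is an assumption; check the model code for the exact output format):

import numpy as np

def to_pixel_coords(matches, h1, w1, h2, w2):
    # Map normalized [-1, 1] coordinates to pixel coordinates.
    # If matches is a torch tensor, call .cpu().numpy() first.
    matches = np.asarray(matches)
    kpts1 = (matches[:, :2] + 1) / 2 * np.array([w1 - 1, h1 - 1])
    kpts2 = (matches[:, 2:] + 1) / 2 * np.array([w2 - 1, h2 - 1])
    return kpts1, kpts2

w1, h1 = im1.size  # PIL images report (width, height)
w2, h2 = im2.size
kpts1, kpts2 = to_pixel_coords(sparse_matches, h1, w1, h2, w2)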

Downloading Benchmarks

HPatches

First, make sure that the path "data/hpatches" exists. I usually prefer to do this by symlinking the dataset location:

ln -s place/where/your/datasets/are/stored/hpatches data/hpatches

Then, if you don't already have HPatches downloaded, run bash scripts/download_hpatches.sh

YFCC100M (OANet Split)

We use the split introduced by OANet; this split can be found at e.g. https://github.com/PruneTruong/DenseMatching

MegaDepth (LoFTR Split)

Currently we do not support the LoFTR split, as we trained on one of the scenes used there. Future releases may support this split; stay tuned.

ScanNet (SuperGlue Split)

We use the same ScanNet split as SuperGlue. LoFTR provides the split here: https://drive.google.com/drive/folders/1nTkK1485FuwqA0DbZrK2Cl0WnXadUZdc

Evaluation

Here we provide approximate performance numbers for DKM using this codebase. Note that the randomness involved in geometry estimation means the numbers are not exact (typically ±0.5).

HPatches

To evaluate on HPatches Homography Estimation, run:

from dkm import dkm_base
from dkm.benchmarks import HpatchesHomogBenchmark

model = dkm_base(pretrained=True, version="v11")
homog_benchmark = HpatchesHomogBenchmark("data/hpatches")
homog_benchmark.benchmark_hpatches(model)
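
The benchmark estimates a homography from the sampled matches and scores it following the standard HPatches protocol. As a rough illustration of that estimation step (not the benchmark's exact code), a homography can be fit with RANSAC to pixel-space correspondences kpts1, kpts2 (see the conversion sketch above) using OpenCV:

import cv2
import numpy as np

# Robustly fit a homography to the matched keypoints with RANSAC.
H, inlier_mask = cv2.findHomography(
    np.float32(kpts1), np.float32(kpts2), cv2.RANSAC, ransacReprojThreshold=3.0
)
print(f"{int(inlier_mask.sum())} / {len(kpts1)} matches are RANSAC inliers")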

Results

HPatches Homography Estimation

Method            AUC@3px   AUC@5px   AUC@10px
LoFTR (CVPR'21)      65.9      75.6       84.6
DKM (Ours)           71.2      80.6       88.7

ScanNet Pose Estimation

Here we compare performance on ScanNet for models that were not trained on ScanNet. (For reference, we also include the version of LoFTR trained specifically on ScanNet.)

Method                                        Trained on   AUC@5   AUC@10   AUC@20   mAP@5   mAP@10   mAP@20
SuperGlue (CVPR'20)                           MegaDepth    16.16    33.81    51.84       -        -        -
LoFTR (CVPR'21)                               MegaDepth    16.88    33.62    50.62       -        -        -
LoFTR (CVPR'21)                               ScanNet      22.06    40.8     57.62       -        -        -
PDCNet (CVPR'21)                              MegaDepth    17.70    35.02    51.75   39.93    50.17    60.87
PDCNet+ (Arxiv)                               MegaDepth    19.02    36.90    54.25   42.93    53.13    63.95
DKM (Ours)                                    MegaDepth    22.3     42.0     60.2    48.4     59.5     70.3
DKM (Ours, square-root confidence sampling)   MegaDepth    22.9     43.6     61.4    51.2     62.1     72.0

YFCC100M Pose Estimation

Here we compare to recent methods using a single forward pass. PDC-Net+ with multiple passes comes closer to our method, reaching an AUC@5 of 37.51; however, that comparison is somewhat unfair, as its inference is much slower.

Method             AUC@5   AUC@10   AUC@20   mAP@5   mAP@10   mAP@20
PDCNet (CVPR'21)   32.21    52.61    70.13   60.52    70.91    80.30
PDCNet+ (Arxiv)    34.76    55.37    72.55   63.93    73.81    82.74
DKM (Ours)         40.0     60.2     76.2    69.8     78.5     86.1
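
For reference, the pose benchmarks follow the usual pipeline of estimating an essential matrix from the matches and decomposing it into a relative rotation and translation. A minimal sketch of that pipeline with OpenCV (not the exact benchmark code; K1 and K2 denote the camera intrinsics, and kpts1, kpts2 are assumed to be pixel-space correspondences as in the conversion sketch above):

import cv2
import numpy as np

# Normalize pixel coordinates with the intrinsics so that an identity
# camera matrix can be used for essential matrix estimation.
pts1 = cv2.undistortPoints(np.float32(kpts1).reshape(-1, 1, 2), K1, None).reshape(-1, 2)
pts2 = cv2.undistortPoints(np.float32(kpts2).reshape(-1, 1, 2), K2, None).reshape(-1, 2)

# Estimate the essential matrix with RANSAC, then recover the relative pose.
E, mask = cv2.findEssentialMat(
    pts1, pts2, np.eye(3), method=cv2.RANSAC, prob=0.99999, threshold=1e-3
)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, np.eye(3), mask=mask)

Pose AUC and mAP are then computed from the angular errors of R and t over the dataset.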

TODO

  • Add Model Code
  • Upload Pretrained Models
  • Add HPatches Homography Benchmark
  • Add More Benchmarks

Acknowledgement

We have used code from, and been inspired by, (among others) https://github.com/PruneTruong/DenseMatching, https://github.com/zju3dv/LoFTR, and https://github.com/GrumpyZhou/patch2pix

BibTeX

If you find our models useful, please consider citing our paper!

@article{edstedt2022deep,
  title={Deep Kernelized Dense Geometric Matching},
  author={Edstedt, Johan and Wadenb{\"a}ck, M{\aa}rten and Felsberg, Michael},
  journal={arXiv preprint arXiv:2202.00667},
  year={2022}
}