SegSwap

PyTorch implementation of the paper "Learning Co-segmentation by Segment Swapping for Retrieval and Discovery"

[PDF] [Project page]

teaser

teaser

If our project is helpful for your research, please consider citing:

@article{shen2021learning,
  title={Learning Co-segmentation by Segment Swapping for Retrieval and Discovery},
  author={Shen, Xi and Efros, Alexei A and Joulin, Armand and Aubry, Mathieu},
  journal={arXiv},
  year={2021}
}

Table of Contents

1. Installation

1.1. Dependencies

Our model can be trained on a single Tesla V100 16GB GPU. The code has been tested with PyTorch 1.7.1 + CUDA 10.2.

Other dependencies (tqdm, kornia, opencv-python, scipy) can be installed via:

bash requirement.sh

1.2. Pre-trained MocoV2-resnet50 + cross-transformer (~300M)

Quick download:

cd model/pretrained
bash download_model.sh
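After downloading, you can quickly check that the checkpoint loads with plain PyTorch. This is a minimal sketch under the assumption that the file is a regular torch checkpoint dict; the exact path and contents depend on where download_model.sh places the file.

import torch

# Hypothetical sanity check of the downloaded checkpoint. The path follows the
# evaluation commands below (../model/hard_mining_neg5.pth from the eval dirs);
# adjust it if download_model.sh stores the file elsewhere.
ckpt_path = "model/hard_mining_neg5.pth"

# map_location="cpu" lets the checkpoint be inspected without a GPU.
ckpt = torch.load(ckpt_path, map_location="cpu")

# Assumption: the checkpoint is a dict (a state dict or a dict wrapping one).
if isinstance(ckpt, dict):
    for key in list(ckpt.keys())[:10]:
        print(key)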

2. Training Data Generation

2.1. Download COCO (~20G)

The following commands download the COCO 2017 training set + annotations (~20G).

cd data/COCO2017/
bash download_coco.sh

2.2. Image Pairs with One Repeated Object

2.2.1 Generating 100k pairs (~18G)

This command will generate 100k image pairs with one repeated object.

cd data/
python generate_1obj.py --out-dir pairs_1obj_100k 
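Conceptually, each pair is built by cutting an object segment out of one COCO image and blending it into another image with its mask (plus the stylisations shown below). The following is only a simplified illustration of that mask-based blending with OpenCV, not the repository's generation pipeline; all file names are hypothetical.

import cv2
import numpy as np

# Hypothetical inputs: a source image containing the object, a binary object
# mask (e.g., rendered from a COCO annotation), and a background image.
src = cv2.imread("source.jpg").astype(np.float32)
bkg = cv2.imread("background.jpg")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Bring the background to the source resolution and normalise the mask to [0, 1].
bkg = cv2.resize(bkg, (src.shape[1], src.shape[0])).astype(np.float32)
alpha = mask.astype(np.float32) / 255.0

# Soften the mask border slightly so the pasted segment blends in.
alpha = cv2.GaussianBlur(alpha, (11, 11), 0)[..., None]

# Composite: the object now appears repeated in front of the new background.
blended = alpha * src + (1.0 - alpha) * bkg
cv2.imwrite("blended_pair.jpg", blended.astype(np.uint8))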

2.2.2 Examples of image pairs

Columns (left to right): Source, Blended Obj + Background, Stylised Source, Stylised Background

2.2.3 Visualizing correspondences and masks of the generated pairs

This command will generate 10 pairs and visualize correspondences and masks of the pairs.

cd data/
bash vis_pair.sh

The generated pairs can be viewed in vis10_1obj/vis.html
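The visualization produced by vis_pair.sh can also be reproduced by hand. The sketch below draws correspondence lines between a pair with matplotlib, assuming a hypothetical (N, 4) array of (x1, y1, x2, y2) matches; the actual file names and storage format used by the script may differ.

import cv2
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical pair and correspondence files.
img1 = cv2.cvtColor(cv2.imread("pair_src.jpg"), cv2.COLOR_BGR2RGB)
img2 = cv2.cvtColor(cv2.imread("pair_tgt.jpg"), cv2.COLOR_BGR2RGB)
corr = np.load("pair_corr.npy")  # assumed shape (N, 4): x1, y1, x2, y2

# Place the two images side by side and draw a line per correspondence.
h = max(img1.shape[0], img2.shape[0])
canvas = np.zeros((h, img1.shape[1] + img2.shape[1], 3), dtype=np.uint8)
canvas[: img1.shape[0], : img1.shape[1]] = img1
canvas[: img2.shape[0], img1.shape[1]:] = img2

plt.imshow(canvas)
for x1, y1, x2, y2 in corr[::10]:  # subsample matches for readability
    plt.plot([x1, x2 + img1.shape[1]], [y1, y2], linewidth=0.5)
plt.axis("off")
plt.show()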

2.3. Image Pairs with Two Repeated Objects

2.3.1 Generating 100k pairs (~18G)

This command will generate 100k image pairs with two repeated objects.

cd data/
python generate_2obj.py --out-dir pairs_2obj_100k 

2.3.2 Examples of image pairs

Columns (left to right): Source, Blended Obj + Background, Stylised Source, Stylised Background

2.3.3 Visualizing correspondences and masks of the generated pairs

This command will generate 10 pairs and visualize correspondences and masks of the pairs.

cd data/
bash vis_pair.sh

The generated pairs can be viewed in vis10_2obj/vis.html

3. Evaluation

3.1 One-shot Art Detail Detection on Brueghel Dataset

3.1.1 Visual results: top-3 retrieved images

teaser

3.1.2 Data

The Brueghel dataset is included in this repo.

3.1.3 Quantitative results

The following command conducts evaluation on Brueghel with the pre-trained cross-transformer:

cd evalBrueghel
python evalBrueghel.py --out-coarse out_brueghel.json --resume-pth ../model/hard_mining_neg5.pth --label-pth ../data/Brueghel/brueghelTest.json

Note that this command will save the Brueghel features (~10G).
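One-shot detail detection is essentially retrieval: every database image is scored against the query and the top matches are returned. The sketch below only illustrates that ranking step with cosine similarity on pre-extracted descriptors; the repository's actual scoring uses the cross-transformer outputs, so the names and shapes here are assumptions.

import torch
import torch.nn.functional as F

# Hypothetical pre-extracted descriptors: one query vector and one row per
# database image (this is NOT the format of out_brueghel.json).
query_feat = torch.randn(256)
db_feats = torch.randn(1000, 256)

# Cosine similarity between the query and every database image.
scores = F.cosine_similarity(query_feat.unsqueeze(0), db_feats, dim=1)

# Top-3 retrieved images, as in the qualitative results above.
topk = torch.topk(scores, k=3)
print(topk.indices.tolist(), topk.values.tolist())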

3.2 Place Recognition on Tokyo247 Dataset

3.2.1 Visual results: top-3 retrieved images

teaser

3.2.2 Data

Download Tokyo247 from its project page

Download the top-100 results used by patchVlad (~1G).

The data needs to be organised:

./SegSwap/data/Tokyo247
                    ├── query/
                        ├── 247query_subset_v2/
                    ├── database/
...

./SegSwap/evalTokyo
                    ├── top100_patchVlad.npy

3.2.3 Quantitative results

The following command conducts evaluation on Tokyo247 with the pre-trained cross-transformer:

cd evalTokyo
python evalTokyo.py --qry-dir ../data/Tokyo247/query/247query_subset_v2 --db-dir ../data/Tokyo247/database --resume-pth ../model/hard_mining_neg5.pth
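For place recognition, the evaluation re-ranks, for each query, the top-100 database candidates provided in top100_patchVlad.npy. The sketch below only shows that re-ranking pattern; both the .npy layout and the scoring function are assumptions and do not describe the repository's actual code.

import numpy as np

def pair_matching_score(query_img, candidate_img):
    """Placeholder for the cross-transformer's pairwise score (hypothetical)."""
    return np.random.rand()  # replace with the model's actual matching score

# Assumption: the .npy stores a pickled dict mapping each query image to its
# top-100 candidate database images; the real layout may differ.
top100 = np.load("top100_patchVlad.npy", allow_pickle=True).item()

reranked = {}
for query, candidates in top100.items():
    scores = [pair_matching_score(query, c) for c in candidates]
    order = np.argsort(scores)[::-1]  # best match first
    reranked[query] = [candidates[i] for i in order]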

3.3 Place Recognition on Pitts30K Dataset

3.3.1 Visual results: top-3 retrieved images

teaser

3.3.2 Data

Download Pittsburgh dataset from its project page

Download the top-100 results used by patchVlad (~4G).

The data needs to be organised:

./SegSwap/data/Pitts
                ├── queries_real/
...

./SegSwap/evalPitts
                    ├── top100_patchVlad.npy

3.3.3 Quantitative results

The following command conducts evaluation on Pittsburgh30K with the pre-trained cross-transformer:

cd evalPitts
python evalPitts.py --qry-dir ../data/Pitts/queries_real --db-dir ../data/Pitts --resume-pth ../model/hard_mining_neg5.pth

3.4 Discovery on Internet Dataset

3.4.1 Visual results

teaser

3.4.2 Data

Download the Internet dataset from its project page

We provide a script to quickly download and preprocess the data (~400M):

cd data/Internet
bash download_int.sh

The data needs to be organised:

./SegSwap/data/Internet
                ├── Airplane100
                    ├── GroundTruth                
                ├── Horse100
                    ├── GroundTruth                
                ├── Car100
                    ├── GroundTruth                                

3.4.3 Quantitative results

The following commands conduct evaluation on the Internet dataset with the pre-trained cross-transformer:

cd evalInt
bash run_pair_480p.sh
bash run_best_only_cycle.sh
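Discovery on the Internet dataset is typically scored by comparing predicted foreground masks against the GroundTruth masks, e.g., with the per-image Jaccard index (IoU). The sketch below shows that comparison for a single image; it is not the repository's evaluation script, and the mask file names are hypothetical.

import cv2
import numpy as np

# Hypothetical paths: one predicted mask and its GroundTruth counterpart.
pred = cv2.imread("pred_mask.png", cv2.IMREAD_GRAYSCALE) > 127
gt = cv2.imread("gt_mask.png", cv2.IMREAD_GRAYSCALE) > 127

# Jaccard index (IoU) between predicted and ground-truth foreground pixels.
inter = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
iou = inter / union if union > 0 else 1.0
print(f"IoU: {iou:.3f}")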

4. Training

Stage 1: standard training

Assume the generated pairs are saved in ./SegSwap/data/pairs_1obj_100k and ./SegSwap/data/pairs_2obj_100k.

The training command can be found in ./SegSwap/train/run.sh.

Note that this command can be launched on a single GPU with 16GB of memory.

cd train
bash run.sh

Stage 2: hard mining

In train/run_hardmining.sh, replace --resume-pth with the model trained in the first stage, then run:

cd train
bash run_hardmining.sh
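Hard mining generally follows this pattern: the stage-1 model scores candidate negative pairs, and the highest-scoring (most confusing) ones are kept for the second training stage. The sketch below only illustrates that selection step; the function, variable, and file names are hypothetical and do not correspond to the repository's scripts.

import numpy as np

def pair_score(model, img_a, img_b):
    """Placeholder: matching score from the stage-1 model (hypothetical)."""
    return np.random.rand()  # replace with the actual model's score

def mine_hard_negatives(model, negative_pairs, num_keep):
    """Keep the negative pairs the current model finds most confusing."""
    scores = np.array([pair_score(model, a, b) for a, b in negative_pairs])
    hardest = np.argsort(scores)[::-1][:num_keep]  # highest score = hardest
    return [negative_pairs[i] for i in hardest]

# Toy example: 100 hypothetical candidate negative pairs, keep the 5 hardest.
candidates = [(f"img_{i}_a.jpg", f"img_{i}_b.jpg") for i in range(100)]
hard_negs = mine_hard_negatives(model=None, negative_pairs=candidates, num_keep=5)
print(hard_negs)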

5. Acknowledgement

We appreciate help from:

Part of the code is borrowed from our previous projects: ArtMiner and Watermark

6. ChangeLog

  • 21/10/21: model, evaluation and training code released

7. License

This code is distributed under the MIT license.

Note that our code depends on other libraries, including Kornia and PyTorch, and uses datasets that each have their own respective licenses that must also be followed.
