A Transformer-Based Siamese Network for Change Detection

Overview

ChangeFormer: A Transformer-Based Siamese Network for Change Detection (Under review at IGARSS-2022)

Wele Gedara Chaminda Bandara, Vishal M. Patel

Here, we provide the PyTorch implementation of the paper: A Transformer-Based Siamese Network for Change Detection.

For more information, please see our paper on arXiv.


Requirements

Python 3.8.0
pytorch 1.10.1
torchvision 0.11.2
einops  0.3.2

Please see requirements.txt for all the other requirements.
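
If you are setting up a fresh environment, the following is a minimal setup sketch (an assumption, not an official install recipe; it presumes a Python 3.8 virtual environment and installs the pinned versions listed above):

# Hypothetical environment setup; adjust for your CUDA/pip configuration
pip install torch==1.10.1 torchvision==0.11.2 einops==0.3.2
pip install -r requirements.txt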

Installation

Clone this repo:

git clone https://github.com/wgcban/ChangeFormer.git
cd ChangeFormer

Quick Start on LEVIR dataset

We have some samples from the LEVIR-CD dataset in the folder samples_LEVIR for a quick start.

First, download our pretrained ChangeFormerV6 model from DropBox. After downloading the pretrained model, place it in checkpoints/ChangeFormer_LEVIR/.
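
For example, assuming the checkpoint downloads to ~/Downloads (the file name best_ckpt.pt matches the one used by the evaluation script below; the download location is an assumption):

# Hypothetical placement of the downloaded checkpoint
mkdir -p checkpoints/ChangeFormer_LEVIR
mv ~/Downloads/best_ckpt.pt checkpoints/ChangeFormer_LEVIR/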

Then, run a demo to get started as follows:

python demo_LEVIR.py

After that, you can find the prediction results in samples/predict_LEVIR.

Quick Start on DSIFN dataset

We have some samples from the DSIFN-CD dataset in the folder samples_DSIFN for a quick start.

First, download our pretrained ChangeFormerV6 model from DropBox. After downloading the pretrained model, place it in checkpoints/ChangeFormer_DSIFN/.

Then, run a demo to get started as follows:

python demo_DSIFN.py

After that, you can find the prediction results in samples/predict_DSIFN.

Train on LEVIR-CD

You can find the training script run_ChangeFormer_LEVIR.sh in the folder scripts. You can run it with sh scripts/run_ChangeFormer_LEVIR.sh from the command line.

The detailed script file run_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

#GPUs
gpus=0

#Set paths
checkpoint_root=/media/lidan/ssd2/ChangeFormer/checkpoints
vis_root=/media/lidan/ssd2/ChangeFormer/vis
data_name=LEVIR


img_size=256    
batch_size=16   
lr=0.0001         
max_epochs=200
embed_dim=256

net_G=ChangeFormerV6        #ChangeFormerV6 is the finalized version

lr_policy=linear
optimizer=adamw                 #Choices: sgd (set lr to 0.01), adam, adamw
loss=ce                         #Choices: ce, fl (Focal Loss), miou
multi_scale_train=True
multi_scale_infer=False
shuffle_AB=False

#Initializing from pretrained weights
pretrain=/media/lidan/ssd2/ChangeFormer/pretrained_segformer/segformer.b2.512x512.ade.160k.pth

#Train and Validation splits
split=train         #trainval
split_val=test      #test
project_name=CD_${net_G}_${data_name}_b${batch_size}_lr${lr}_${optimizer}_${split}_${split_val}_${max_epochs}_${lr_policy}_${loss}_multi_train_${multi_scale_train}_multi_infer_${multi_scale_infer}_shuffle_AB_${shuffle_AB}_embed_dim_${embed_dim}

CUDA_VISIBLE_DEVICES=1 python main_cd.py --img_size ${img_size} --loss ${loss} --checkpoint_root ${checkpoint_root} --vis_root ${vis_root} --lr_policy ${lr_policy} --optimizer ${optimizer} --pretrain ${pretrain} --split ${split} --split_val ${split_val} --net_G ${net_G} --multi_scale_train ${multi_scale_train} --multi_scale_infer ${multi_scale_infer} --gpu_ids ${gpus} --max_epochs ${max_epochs} --project_name ${project_name} --batch_size ${batch_size} --shuffle_AB ${shuffle_AB} --data_name ${data_name}  --lr ${lr} --embed_dim ${embed_dim}

Train on DSIFN-CD

Follow the same procedure described for LEVIR-CD, using run_ChangeFormer_DSIFN.sh in the scripts folder to train on DSIFN-CD.

Evaluate on LEVIR

You can find the evaluation script eval_ChangeFormer_LEVIR.sh in the folder scripts. You can run it with sh scripts/eval_ChangeFormer_LEVIR.sh from the command line.

The detailed script file eval_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

gpus=0

data_name=LEVIR
net_G=ChangeFormerV6 #This is the best version
split=test
vis_root=/media/lidan/ssd2/ChangeFormer/vis
project_name=CD_ChangeFormerV6_LEVIR_b16_lr0.0001_adamw_train_test_200_linear_ce_multi_train_True_multi_infer_False_shuffle_AB_False_embed_dim_256
checkpoints_root=/media/lidan/ssd2/ChangeFormer/checkpoints
checkpoint_name=best_ckpt.pt
img_size=256
embed_dim=256 #Make sure to change the embedding dim (best and default = 256)

CUDA_VISIBLE_DEVICES=0 python eval_cd.py --split ${split} --net_G ${net_G} --embed_dim ${embed_dim} --img_size ${img_size} --vis_root ${vis_root} --checkpoints_root ${checkpoints_root} --checkpoint_name ${checkpoint_name} --gpu_ids ${gpus} --project_name ${project_name} --data_name ${data_name}

Evaluate on DSIFN

Follow the same evaluation procedure described for LEVIR-CD. You can find the evaluation script eval_ChangeFormer_DSIFN.sh in the folder scripts. You can run it with sh scripts/eval_ChangeFormer_DSIFN.sh from the command line.

Dataset Preparation

Data structure

"""
Change detection data set with pixel-level binary labels;
├─A
├─B
├─label
└─list
"""

A: images of t1 phase;

B: images of t2 phase;

label: label maps;

list: contains train.txt, val.txt and test.txt, each file records the image names (XXX.png) in the change detection dataset.
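
If you are preparing your own dataset, the following is a hypothetical sketch for generating the list files (not part of the repository; it assumes A, B and label contain identically named XXX.png files and uses a simple random 80/10/10 split):

# Hypothetical list-file generation; run from the dataset root containing A/, B/, label/
mkdir -p list
ls A | shuf > list/all.txt
n=$(wc -l < list/all.txt)
head -n $(( n * 8 / 10 )) list/all.txt > list/train.txt
sed -n "$(( n * 8 / 10 + 1 )),$(( n * 9 / 10 ))p" list/all.txt > list/val.txt
sed -n "$(( n * 9 / 10 + 1 )),\$p" list/all.txt > list/test.txt
rm list/all.txt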

Data Download

LEVIR-CD: https://justchenhao.github.io/LEVIR/

WHU-CD: https://study.rsgis.whu.edu.cn/pages/download/building_dataset.html

DSIFN-CD: https://github.com/GeoZcx/A-deeply-supervised-image-fusion-network-for-change-detection-in-remote-sensing-images/tree/master/dataset
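
Note that the raw LEVIR-CD images are 1024x1024, while the scripts above use img_size=256, so the images need to be cropped into 256x256 patches before training (the issues below also mention a pre-processed LEVIR-CD-256 version). Below is a hypothetical tiling sketch using ImageMagick; the LEVIR-CD/ and LEVIR-CD-256/ directory names are assumptions:

# Hypothetical tiling of 1024x1024 images into non-overlapping 256x256 patches
for sub in A B label; do
    mkdir -p LEVIR-CD-256/${sub}
    for f in LEVIR-CD/${sub}/*.png; do
        name=$(basename "${f}" .png)
        convert "${f}" -crop 256x256 +repage "LEVIR-CD-256/${sub}/${name}_%d.png"
    done
done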

License

Code is released for non-commercial and research purposes only. For commercial purposes, please contact the authors.

Citation

If you use this code for your research, please cite our paper:

@Article{
}

References

Appreciate the work from the following repositories:

Comments
  • How could I use this?

    Hi, I'm very interested in using your code in my project. I was able to run the demo scripts. Now I want to use your code on my own data, but unfortunately I do not know how. I put my images in the folders A and B, but I think I need guidance on how to run the test. Please tell me how I can find the differences between my images. (I am a newcomer, thank you.)

    help wanted 
    opened by p00uya 23
  • How to train more class labels instead of two?

    Hi, I really appreciate your work. Now I want to train on ChangeSim, a dataset with 4 class labels, such as "missing", "new", "rotation" and "replaced object". I tried changing n_class, but it didn't work. What should I do to train on a dataset with more than 2 classes? Thanks~

    help wanted 
    opened by xgyyao 14
  • Question about training on LEVIR-CD

    Hi, I found that when I load the pretrained model (trained on the ADE 160k dataset), the keys of the checkpoint do not match the keys of self.net_G.

    So the keys of the pretrained model are all reported as missing keys.

    question 
    opened by Youskrpig 10
  • About using a new dataset

    opened by SnycradJuice 8
  • DSIFN accuracy

    Hi wgcban, I notice that the DSIFN-CD dataset shows a much larger gain over BIT [4] (IoU from 52.97% for BIT to 76.48% for ChangeFormer). On the other dataset, LEVIR-CD, the difference is not as large as on DSIFN-CD. Could you please explain the main source of the large improvement on DSIFN-CD? e.g. training strategy, data augmentation, model structure... Thanks, Wesley

    question 
    opened by WesleyZhang1991 8
  • Some Questions about Code and Paper Details

    Hi~ Thanks for your great work :clap: it's really inspiring for implementing Siamese Transformer networks.

    However, I still have some questions about the code implementation and the details of the paper.

    1. In the code, sequence reduction is implemented with a non-overlapping strided Conv2d applied to the feature map before MHSA. https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L316 The effect is similar to the reshape-then-linear-projection described in the paper, but this implementation reduces the sequence length by a factor of R^2 (similar to the idea of cutting the image into 16x16 patches at the beginning of ViT), whereas formula 2 in the paper shows a reduction by a factor of R. Is there an error here?

    2. In this code: https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L507 the actual implementation contains two skip connections. In the pink Transformer Block diagram in the upper right corner of Fig. 1 of the paper, should both skip connections be drawn (i.e., adding a skip bypass connecting the Sequence Reduction input to the MHSA output)?

    3. About the depth-wise conv used as PE in the Transformer Block: why do you do this? How does it realize position encoding (can you explain it)? Why is this effective?

    4. patch_block1 is not used in the code. What is this module for? Why is its dimension inconsistent with the preceding block1 (dim=embed_dims[1] and dim=embed_dims[0], respectively)? https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/ChangeFormer.py#L52

    Looking forward to your early reply!:smiley:

    opened by zafirshi 6
  • about multi classes

    hi, I tried to train on my personal data with 4 classes = {0,1,2,3}.

    The label pixels (grayscale) look like:

    0000000002220000
    0000000002200000
    0000000002000000
    0010000000000000
    0111110000000000
    1111000000000000

    When I train on this data, the accuracy converges to 0.5 and never changes. Is there any problem I have missed?

    The only thing I changed is n_classes = 4.

    thanks.

    opened by g7199 5
  • The code runs too long

    Hello, it's a nice codebase. It takes me a lot of time to train using the LEVIR-CD-256 dataset you have processed. Is this normal? How long did it take you to train the model? Looking forward to your early reply!

    opened by Mengtao-ship 5
  • How to

    Hi, I really appreciate your work. I have a few questions about the model. First of all, is it possible to modify the size of the input images? Then, how can we retrain the model with our own data? I noticed that the model detects changes that appear in image B. How can we generate a map for the disappearance of elements in A? Thanks. I remain open to your answers and suggestions.

    question 
    opened by choumie 5
  • A question about the difference module

    Hi~ After reading your paper, I still can't understand the design of the Difference Module, which consists of Conv2D, ReLU and BatchNorm2d. What is the reason for this design?

    In the equation: F_i^diff = BN(ReLU(Conv2D_3x3(Cat(F_i^pre, F_i^post))))

    In the code:

    nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.BatchNorm2d(out_channels),
    nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
    nn.ReLU()

    Looking forward to your early reply!

    opened by Herxinsasa 4
  • F1 values of WHU-CD and DSIFN-CD

    Hello, I am trying to reproduce the BIT-CD model code, and the results look incorrect when using the WHU-CD and DSIFN-CD datasets: they come out much higher than your reported results. But everything looks normal when using the LEVIR-CD dataset. Do I need to change the pre-training?

    opened by yuwanting828 4
  • A question about visualization.

    Hello again! I'm training the net on the dataset from my earlier issue, and a problem happens when the trainer saves visualizations: the vis_pred and vis_gt of the saved images do not look normal. Though I found your visualization method right here https://github.com/wgcban/ChangeFormer/blob/9025e26417cf8f10f29a48f34a05758498216465/models/trainer.py#L220-L232 I still don't know how to adapt it to save multi-class gt and pred images.

    opened by SnycradJuice 0
  • multi class

    I want to perform multiclass classification. Even if I change the code to args.n_class = 9, only 2 classes are predicted. What should I do? (Sorry, I'm Korean.) Shouldn't it be enough to modify only n_class, or do the labels themselves need to contain multiple classes?

    opened by taemin6697 20