Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery (ICCV 2021)

Overview

Change is Everywhere
Single-Temporal Supervised Object Change Detection
in Remote Sensing Imagery

by Zhuo Zheng, Ailong Ma, Liangpei Zhang and Yanfei Zhong

[Paper] [BibTeX]



This is an official implementation of STAR and ChangeStar in our ICCV 2021 paper Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery.

We hope that STAR will serve as a solid baseline and help ease future research in weakly-supervised object change detection.


News

  • 2021/08/28, The code is available.
  • 2021/07/23, The code will be released soon.
  • 2021/07/23, This paper is accepted by ICCV 2021.

Features

  • Learning a good change detector from single-temporal supervision.
  • Strong baselines for bitemporal and single-temporal supervised change detection.
  • A clean codebase for weakly-supervised change detection.
  • Support for both bitemporal and single-temporal supervised settings.

Citation

If you use STAR or ChangeStar (FarSeg) in your research, please cite the following papers:

@inproceedings{zheng2021change,
  title={Change is Everywhere: Single-Temporal Supervised Object Change Detection in Remote Sensing Imagery},
  author={Zheng, Zhuo and Ma, Ailong and Zhang, Liangpei and Zhong, Yanfei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={},
  year={2021}
}

@inproceedings{zheng2020foreground,
  title={Foreground-Aware Relation Network for Geospatial Object Segmentation in High Spatial Resolution Remote Sensing Imagery},
  author={Zheng, Zhuo and Zhong, Yanfei and Wang, Junjue and Ma, Ailong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4096--4105},
  year={2020}
}

Getting Started

Install EVer

pip install --upgrade git+https://github.com/Z-Zheng/ever.git

Requirements:

  • PyTorch >= 1.6.0
  • Python >= 3.6
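
As a quick sanity check (not part of the official instructions), the following snippet verifies that the interpreter and PyTorch meet these requirements and that ever can be imported:

import sys
import torch
import ever  # installed by the pip command above

# Check the versions this repo requires.
assert sys.version_info >= (3, 6), "Python >= 3.6 is required"
major, minor = (int(v) for v in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (1, 6), "PyTorch >= 1.6.0 is required"

print("python:", sys.version.split()[0])
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("ever loaded from:", ever.__file__)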

Prepare Dataset

  1. Download the xView2 dataset (training set and tier3 set) and the LEVIR-CD dataset.

  2. Create soft links (an optional check follows the commands below):

ln -s </path/to/xView2> ./xView2
ln -s </path/to/LEVIR-CD> ./LEVIR-CD
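
Optionally, a minimal check (using the link paths created above) that the links resolve to real directories:

import os

# Verify that the dataset symlinks created above point to existing directories.
for link in ("./xView2", "./LEVIR-CD"):
    target = os.path.realpath(link)
    print(f"{link} -> {target} (directory exists: {os.path.isdir(target)})")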

Training and Evaluation under Single-Temporal Supervision

bash ./scripts/trainxView2/r50_farseg_changemixin_symmetry.sh

Training and Evaluation under Bitemporal Supervision

bash ./scripts/bisup_levircd/r50_farseg_changemixin.sh

License

ChangeStar is released under the Apache License 2.0.

Copyright (c) Zhuo Zheng. All rights reserved.

Comments
  • Can ChangeStar be used for general CD?

    Hi,

    Thanks for the great work. I wonder whether this approach can be used for general change detection, i.e., multi-class rather than just single-class change.

    If so, have you already run such experiments? Thanks!

    opened by Richardych 3
  • How to add ChangeMixin when using bitemporal supervision?

    Hello, I have a few questions about your repo:

    1. How can ChangeMixin be added when training under bitemporal supervision? I see it in Table 4 of your paper, but I can't find it in the code.
    2. Could ChangeStar be trained single-temporally on LEVIR-CD? (The other dataset is too large for me to download and train on.)
    3. Do your bitemporal supervised methods simply use torch.cat in the final layer? Sorry for asking all these questions.
    opened by csliuchang 3
  • ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 117, in run
        test_data_loader=kw_dataloader['testdata_loader'])
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 232, in train_by_config
        signal_loss_dict = self.train_iters(train_data_loader, test_data_loader=test_data_loader, **config)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/launcher.py", line 174, in train_iters
        is_master=self._master)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/ever/core/iterator.py", line 30, in __next__
        data = next(self._iterator)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/utils/data/dataset.py", line 218, in __getitem__
        return self.datasets[dataset_idx][sample_idx]
      File "/home/yujianzhi/tem/ChangeStar-master/data/levir_cd/dataset.py", line 30, in __getitem__
        blob = self.transforms(**dict(image=imgs, mask=gt))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/composition.py", line 191, in __call__
        data = t(force_apply=force_apply, **data)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 90, in __call__
        return self.apply_with_params(params, **kwargs)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/core/transforms_interface.py", line 103, in apply_with_params
        res[key] = target_function(arg, **dict(params, **target_dependencies))
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/transforms.py", line 48, in apply
        return F.random_crop(img, self.height, self.width, h_start, w_start)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/albumentations/augmentations/crops/functional.py", line 28, in random_crop
        crop_height=crop_height, crop_width=crop_width, height=height, width=width
    ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)

    Traceback (most recent call last):
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
        main()
      File "/home/yujianzhi/anaconda3/envs/CStar/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
        cmd=cmd)
    subprocess.CalledProcessError: Command '['/home/yujianzhi/anaconda3/envs/CStar/bin/python', '-u', './train_sup_change.py', '--local_rank=0', '--config_path=levircd.r50_farseg_changestar_bisup', '--model_dir=./log/bisup-LEVIRCD/r50_farseg_changestar']' returned non-zero exit status 1.

    It says "ValueError: Requested crop size (512, 512) is larger than the image size (384, 384)", but my images are exactly 512x512.

    opened by themoongodyue 3
  • How to get the bitemporal images' labels if the model is trained on LEVIR-CD dataset?

    Hello, I'm very interested in your work, but I ran into a problem during my research. If the model is trained on the LEVIR-CD dataset, how are the change labels obtained when the dataset provides no segmentation maps for the individual bitemporal images? I would appreciate it if you could clarify this.

    opened by SONGLEI-arch 2
  • Reproduction Problem

    Hello author.

    Your work is great!

    But I ran into a problem while running your code.

    The performance I obtained is shown in the screenshot below, but the IoU is much higher than the number in Table 1 of your paper. Can you tell me the reason? [Screenshot: "Screen Shot 2022-01-01 at 7 44 17 PM"]

    All hyperparameters and data are identical.

    opened by seominseok0429 1
  • AssertionError error

    Hello, this is really great work. I have one question for you. The LEVIR-CD dataset trains well, but the xView2 dataset gives the following unknown error.

    Do you have any idea how to fix it? All steps follow the recipe exactly. [Screenshot: "Screen Shot 2021-12-31 at 4 57 41 PM"]

    opened by seominseok0429 1
  • RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8

    This is driving me crazy; please help.

    Traceback (most recent call last):
      File "./train_sup_change.py", line 48, in <module>
        blob = trainer.run(after_construct_launcher_callbacks=[register_evaluate_fn])
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 98, in run
        kwargs.update(dict(model=self.make_model()))
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/ever/api/trainer/th_amp_ddp_trainer.py", line 87, in make_model
        model = nn.parallel.DistributedDataParallel(
      File "/home/cy/miniconda3/envs/STAnet/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
        dist._verify_model_across_ranks(self.process_group, parameters)
    RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
    ncclSystemError: System call (socket, malloc, munmap, etc) failed.
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 31335) of binary: /home/cy/miniconda3/envs/STAnet/bin/python
    ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed

    opened by themoongodyue 1
  • Evaluation

    Excuse me, I would like to know how to run inference with this model after training. Also, if you could offer a link to the usage documentation of the 'ever' library, that would be fantastic.

    opened by LIUZIJING-CHN 1
  • changestar_sisup results

    Hi, I have trained the model under single-temporal supervision, but the F1 score is only 0.73, which is worse than the result in your paper. Is there anything wrong with my experiment? Below is my training log:

    1666753326.225779.log

    After training, I only evaluated on the LEVIR-CD test set.

    opened by max2857 0
  • A question about PCC

    Hello, I have a question about PCC (post-classification comparison):

    PCC is mentioned in the paper. After obtaining the classification results from the segmentation model, how is the change detection result derived from them? Is it a direct subtraction? (A general illustration is sketched after this comment list.)

    opened by Hyd1999618 0
  • [Feature] support [0~255] gt

    The original LEVIR-CD ground-truth masks consist of 0 and 255.

    However, the segmentation loss in this code only works when they consist of 0 and 1.

    Therefore, I added code that maps 255 in the ground truth to 1 (a minimal sketch of such a conversion is shown after this comment list).

    opened by seominseok0429 1
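
Regarding the "[Feature] support [0~255] gt" comment above: a minimal sketch of such a conversion, assuming the mask has been loaded as a NumPy array (where exactly to apply it in this codebase may differ):

import numpy as np

def binarize_mask(mask: np.ndarray) -> np.ndarray:
    # Map a {0, 255} LEVIR-CD change mask to the {0, 1} labels expected by a binary segmentation loss.
    return (mask == 255).astype(np.int64)

# Example: a 0/255 mask becomes a 0/1 mask.
mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(binarize_mask(mask))  # [[0 1]
                            #  [1 0]]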
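On the PCC question above: as a general illustration only (not necessarily the exact post-processing used in the paper), post-classification comparison marks a pixel as changed wherever the two per-date classification maps disagree; for binary building maps this is a logical XOR rather than an arithmetic subtraction.

import numpy as np

# Hypothetical per-date building maps predicted by a segmentation model (1 = building, 0 = background).
seg_t1 = np.array([[0, 1, 1],
                   [0, 0, 1],
                   [0, 0, 0]], dtype=np.uint8)
seg_t2 = np.array([[0, 1, 0],
                   [1, 0, 1],
                   [0, 0, 0]], dtype=np.uint8)

# Post-classification comparison: changed wherever the two class maps disagree (XOR for binary maps).
change_map = (seg_t1 != seg_t2).astype(np.uint8)
print(change_map)
# [[0 0 1]
#  [1 0 0]
#  [0 0 0]]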
Releases
  • v0.1.0

Owner
  Zhuo Zheng (CV in RS; Ph.D. student)