CoTTA: Continual Test-Time Adaptation

Official code for Continual Test-Time Domain Adaptation, published at CVPR 2022.

This repository also includes other continual test-time adaptation methods for classification and segmentation. We provide benchmarks and comparisons of the following methods (a rough sketch of the CoTTA update step follows the lists below):

  • CoTTA
  • AdaBN / BN Adapt
  • TENT

on the following tasks:

  • CIFAR10/100 -> CIFAR10C/100C (standard/gradual)
  • ImageNet -> ImageNetC
  • Cityscapes -> ACDC (segmentation)
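
For orientation, here is a minimal sketch of one CoTTA adaptation step as described in the paper: the student is trained on soft pseudo-labels from a weight-averaged (EMA) teacher, and a small random fraction of the student's weights is stochastically restored to the source model to limit forgetting. This is an illustrative PyTorch sketch, not the repository's implementation; the function and argument names (cotta_step, alpha, restore_prob, source_state) are assumptions, and the paper's augmentation-averaged pseudo-labels are omitted for brevity.

# Illustrative sketch of one CoTTA adaptation step (hypothetical names; not the repo's API).
import torch

def cotta_step(student, teacher, source_state, x, optimizer,
               alpha=0.999, restore_prob=0.01):
    """Adapt on one unlabeled test batch x and return the prediction for it."""
    # 1) Soft pseudo-label from the weight-averaged (EMA) teacher.
    #    (The paper additionally averages over several augmentations when the
    #    teacher is not confident; omitted here for brevity.)
    with torch.no_grad():
        pseudo = teacher(x).softmax(dim=1)

    # 2) Update the student with a cross-entropy loss against the soft pseudo-label.
    #    The optimizer is assumed to hold the student's trainable parameters.
    logits = student(x)
    loss = -(pseudo * logits.log_softmax(dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # 3) Exponential-moving-average update of the teacher weights.
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

        # 4) Stochastic restoration: reset a small random fraction of the student's
        #    weights back to the pretrained source model to avoid catastrophic forgetting.
        for name, p in student.named_parameters():
            mask = (torch.rand_like(p) < restore_prob).float()
            p.copy_(mask * source_state[name].to(p.device) + (1.0 - mask) * p)

    return logits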

Prerequisite

To reproduce our results, please create and activate the following conda environment.

# It may take several minutes for conda to solve the environment
conda update conda
conda env create -f environment.yml
conda activate cotta 
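
Once the environment is active, a quick, generic sanity check (not part of the repository) can confirm that PyTorch sees your GPU before starting the longer runs:

# Generic sanity check after `conda activate cotta` (not part of the repository).
import torch
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))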

Classification Experiments

CIFAR10-to-CIFAR10C-standard task

# Tested on RTX2080TI
cd cifar
# This includes the comparison of all three methods as well as the baseline
bash run_cifar10.sh 
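
The benchmark scripts handle data loading themselves, but if you want to inspect CIFAR-10-C directly: each corruption is stored as a single .npy array with 10,000 images per severity level (severities 1-5 stacked in order). Below is a hedged sketch, assuming the data sits under ./data/CIFAR-10-C/ (the actual path depends on your setup):

# Hypothetical inspection of CIFAR-10-C arrays; the data path is an assumption.
import numpy as np

images = np.load("./data/CIFAR-10-C/gaussian_noise.npy")  # (50000, 32, 32, 3), uint8
labels = np.load("./data/CIFAR-10-C/labels.npy")          # (50000,)
severity = 5                                              # severities 1..5, 10,000 images each
x = images[(severity - 1) * 10000 : severity * 10000]
y = labels[(severity - 1) * 10000 : severity * 10000]
print(x.shape, y.shape, x.dtype)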

CIFAR10-to-CIFAR10C-gradual task

# Tested on RTX2080TI
bash run_cifar10_gradual.sh

CIFAR100-to-CIFAR100C task

# Tested on RTX3090
bash run_cifar100.sh

ImageNet-to-ImageNetC task

# Tested on RTX3090
cd imagenet
bash run.sh

Segmentation Experiments

Cityscapes-to-ACDC segmentation task

Since April 2022, we also offer the segmentation code, based on Segformer. You can download it here.

## Environment setup: a new conda environment is needed for segformer
## You may also want to check https://github.com/qinenergy/cotta/issues/13 if you have problems installing mmcv
conda env create -f environment_segformer.yml
conda activate segformer
pip install -e . --user
## Run
bash run_base.sh
bash run_tent.sh
bash run_cotta.sh
# Example logs are included in ./example_logs/base.log, tent.log, and cotta.log.

License for Cityscapes-to-ACDC code

Non-commercial. The code is heavily based on Segformer; please also check Segformer's LICENSE.

Data links

Citation

Please cite our work if you find it useful.

@inproceedings{wang2022continual,
  title={Continual Test-Time Domain Adaptation},
  author={Wang, Qin and Fink, Olga and Van Gool, Luc and Dai, Dengxin},
  booktitle={Proceedings of Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

Acknowledgement

External data link

For questions regarding the code, please contact wang@qin.ee.
