SWA Object Detection

Overview

This project hosts the scripts for training SWA object detectors, as presented in our paper:

@article{zhang2020swa,
  title={SWA Object Detection},
  author={Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:2012.12645},
  year={2020}
}

The full paper is available at: https://arxiv.org/abs/2012.12645.

Introduction

Do you want to improve your object detector by ~1.0 AP without any inference cost and without any change to the detector? Let us tell you such a recipe. It is surprisingly simple: train your detector for an extra 12 epochs using cyclical learning rates and then average these 12 checkpoints as your final detection model. This potent recipe is inspired by Stochastic Weight Averaging (SWA), proposed in [1] for improving generalization in deep neural networks, and we found it also very effective in object detection. In this work, we systematically investigate the effects of applying SWA to object detection as well as instance segmentation. Through extensive experiments, we discover a good policy for performing SWA in object detection, and we consistently achieve ~1.0 AP improvement over various popular detectors on the challenging COCO benchmark. We hope this work will make this technique known to more researchers in object detection and help them train better object detectors.

SWA Object Detection: averaging multiple detection models leads to a better one.
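
The core averaging step is simple enough to sketch in a few lines. The snippet below is only a minimal illustration, not the project's actual implementation (which is handled by an SWA hook; see Instructions): it assumes 12 MMDetection-style checkpoints whose paths are hypothetical and whose weights live under the 'state_dict' key.

    # Minimal sketch: average the weights of 12 checkpoints saved during
    # cyclical-LR training. Paths are hypothetical.
    import torch

    ckpt_paths = [f'work_dirs/swa/epoch_{i}.pth' for i in range(1, 13)]

    avg_state = None
    for path in ckpt_paths:
        state = torch.load(path, map_location='cpu')['state_dict']
        if avg_state is None:
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg_state[k] += v.float()

    # Element-wise mean over all checkpoints gives the final SWA model.
    avg_state = {k: v / len(ckpt_paths) for k, v in avg_state.items()}
    torch.save({'state_dict': avg_state}, 'work_dirs/swa/swa_final.pth')

Note that the original SWA paper also recomputes batch-normalization statistics after averaging (e.g. via torch.optim.swa_utils.update_bn); detector backbones typically keep BN frozen, so this sketch omits that step.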

Updates

  • 2021.01.08 Reimplemented the code; it is now more convenient, more flexible, and easier to use for both conventional training and SWA training. See Instructions.
  • 2021.01.07 Updated to MMDetection v2.8.0.
  • 2020.12.24 Released the code.

Installation

  • This project is based on MMDetection. Therefore, the installation procedure is the same as for the original MMDetection.

  • Please check get_started.md for installation. Note that you should match the PyTorch and CUDA versions to your own environment when installing mmcv in step 3, and clone this repo instead of MMDetection in step 4.

  • If you run into problems with pycocotools, please install it by:

    pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=pycocotools"
    

Usage of MMDetection

MMDetection provides a Colab tutorial and full guidance for beginners on quickly running with an existing dataset or a new dataset. There are also tutorials for finetuning models, adding new datasets, designing data pipelines, customizing models, customizing runtime settings, and useful tools.

Please refer to the FAQ for frequently asked questions.

Instructions

We add an SWA training phase to the object detector training process, implement an SWA hook that helps process averaged models, and write an SWA config for conveniently deploying SWA training when training various detectors. We also provide many config files for reproducing the results in the paper.

By including the SWA config in a detector's config file and setting the related parameters, you can enable different SWA training modes.

  1. Two-phase mode. In this mode, training begins with the conventional training phase and continues for a number of epochs. After that, SWA training starts, loading the best model on the validation set from the previous phase (because swa_load_from = 'best_bbox_mAP.pth' in the SWA config).

    As shown in the swa_vfnet_r50 config, the SWA config is included at line 4, and only the SWA optimizer is reset at line 118 of that script. Note that configuring parameters in local scripts overwrites the values inherited from the SWA config.

    You can change those parameters that are included in the SWA config to use different optimizers or different learning rate schedules for the SWA training. For example, to use a different initial learning rate, say 0.02, you just need to set swa_optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) in the SWA config (global effect) or in the swa_vfnet_r50 config (local effect).

    To start the training, run:

    ./tools/dist_train.sh configs/swa/swa_vfnet_r50_fpn_1x_coco.py 8
    
    
  2. Only-SWA mode. In this mode, the conventional training is skipped and only the SWA training is performed. In general, this mode should work with a pre-trained detection model, which you can download from the MMDetection model zoo.

    Have a look at the swa_mask_rcnn_r101 config. By setting only_swa_training = True and swa_load_from = mask_rcnn_pretraind_model, this script conducts only SWA training, starting from a pre-trained detection model (see the config sketch after this list). To start the training, run:

    ./tools/dist_train.sh configs/swa/swa_mask_rcnn_r101_fpn_2x_coco.py 8
    
    

In both modes, we have implemented validation and checkpoint saving for the SWA model, so it is easy to monitor performance and select the best SWA model.
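
To make the two modes concrete, here is a hypothetical sketch of the SWA-related fields a detector config might set. It uses only the parameter names mentioned above; the include path is an assumption, so consult the shipped SWA config and the swa_vfnet_r50 / swa_mask_rcnn_r101 configs for the authoritative layout.

    # Hypothetical config sketch; field names are taken from the text above,
    # while the base-config path is an assumption.
    _base_ = ['./swa.py']  # include the SWA config (assumed path)

    # Two-phase mode: keep only_swa_training at False and let SWA training
    # resume from the best conventional checkpoint.
    only_swa_training = False
    swa_load_from = 'best_bbox_mAP.pth'

    # Only-SWA mode would instead set:
    #   only_swa_training = True
    #   swa_load_from = <path to a pre-trained model from the model zoo>

    # Setting this locally overrides the value inherited from the SWA config.
    swa_optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)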

Results and Models

For your convenience, we provide the following SWA models. These models are obtained by averaging checkpoints that are trained with cyclical learning rates for 12 epochs.
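
The learning-rate pairs encoded in the model names below (e.g. 0.02 and 0.0002) are the start and end values of each cycle. As a rough illustration, assuming one cosine cycle per epoch (the actual schedule type and values are set in the SWA config), the per-iteration learning rate would look like:

    # Sketch of a one-epoch cyclical cosine schedule annealing from lr_start
    # to lr_end; the real schedule is configured in the SWA config.
    import math

    def cyclic_cosine_lr(it, iters_per_epoch, lr_start=0.02, lr_end=0.0002):
        t = it / max(iters_per_epoch - 1, 1)  # progress within the epoch, in [0, 1]
        return lr_end + 0.5 * (lr_start - lr_end) * (1 + math.cos(math.pi * t))

    # Example: learning rate at the start, middle, and end of a 7330-iteration epoch.
    for it in (0, 3665, 7329):
        print(it, round(cyclic_cosine_lr(it, 7330), 6))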

Model | bbox AP (val) | segm AP (val) | Download
SWA-MaskRCNN-R50-1x-0.02-0.0002-38.2-34.7 | 39.1, +0.9 | 35.5, +0.8 | model | config
SWA-MaskRCNN-R101-1x-0.02-0.0002-40.0-36.1 | 41.0, +1.0 | 37.0, +0.9 | model | config
SWA-MaskRCNN-R101-2x-0.02-0.0002-40.8-36.6 | 41.7, +0.9 | 37.4, +0.8 | model | config
SWA-FasterRCNN-R50-1x-0.02-0.0002-37.4 | 38.4, +1.0 | - | model | config
SWA-FasterRCNN-R101-1x-0.02-0.0002-39.4 | 40.3, +0.9 | - | model | config
SWA-FasterRCNN-R101-2x-0.02-0.0002-39.8 | 40.7, +0.9 | - | model | config
SWA-RetinaNet-R50-1x-0.01-0.0001-36.5 | 37.8, +1.3 | - | model | config
SWA-RetinaNet-R101-1x-0.01-0.0001-38.5 | 39.7, +1.2 | - | model | config
SWA-RetinaNet-R101-2x-0.01-0.0001-38.9 | 40.0, +1.1 | - | model | config
SWA-FCOS-R50-1x-0.01-0.0001-36.6 | 38.0, +1.4 | - | model | config
SWA-FCOS-R101-1x-0.01-0.0001-39.2 | 40.3, +1.1 | - | model | config
SWA-FCOS-R101-2x-0.01-0.0001-39.1 | 40.2, +1.1 | - | model | config
SWA-YOLOv3(320)-D53-273e-0.001-0.00001-27.9 | 28.7, +0.8 | - | model | config
SWA-YOLOv3(680)-D53-273e-0.001-0.00001-33.4 | 34.2, +0.8 | - | model | config
SWA-VFNet-R50-1x-0.01-0.0001-41.6 | 42.8, +1.2 | - | model | config
SWA-VFNet-R101-1x-0.01-0.0001-43.0 | 44.3, +1.3 | - | model | config
SWA-VFNet-R101-2x-0.01-0.0001-43.5 | 44.5, +1.0 | - | model | config

Notes:

  • SWA-MaskRCNN-R50-1x-0.02-0.0002-38.2-34.7 means this SWA model is produced from the pre-trained Mask RCNN model that has a ResNet50 backbone, is trained under the 1x schedule with initial learning rate 0.02 and ending learning rate 0.0002, and achieves 38.2 bbox AP and 34.7 mask AP on COCO val2017. This SWA model achieves 39.1 bbox AP and 35.5 mask AP, which are higher than those of the pre-trained model by 0.9 bbox AP and 0.8 mask AP, respectively. The same naming rule applies to the other detectors.

  • In addition to these baseline detectors, SWA can also improve more powerful detectors. One example is VFNetX, whose performance on COCO val2017 improves from 52.2 AP to 53.4 AP (+1.2 AP).

  • More detailed results including AP50 and AP75 can be found here.

Contributing

Any pull requests or issues are welcome.

Citation

Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows:

@article{zhang2020swa,
  title={SWA Object Detection},
  author={Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S{\"u}nderhauf, Niko},
  journal={arXiv preprint arXiv:2012.12645},
  year={2020}
}

Acknowledgment

Many thanks to Dr Marlies Hankel and MASSIVE HPC for providing precious GPU computing resources!

We would also like to thank the MMDetection team for producing this great object detection toolbox.

License

This project is released under the Apache 2.0 license.

References

[1] Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging Weights Leads to Wider Optima and Better Generalization. Uncertainty in Artificial Intelligence (UAI), 2018.
