Automatic Augmentation Zoo

An integration of several popular automatic augmentation methods, including OHL (Online Hyper-Parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto Augment via Augmentation Wise Weight Sharing) by Sensetime Research.

We post updates regularly, so star 🌟 or watch 👓 this repository to stay up to date.

Introduction

This repository provides the official implementations of OHL and AWS, and will also integrate other popular auto-augmentation methods (such as AutoAugment, Fast AutoAugment, and Adversarial AutoAugment) in pure PyTorch. We use torch.distributed for distributed training. Model checkpoints will be uploaded to Google Drive or OneDrive soon.
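
Since the repo relies on torch.distributed, here is a minimal sketch of how a process group is typically initialized; the helper name and environment-variable conventions below are illustrative assumptions, not this repository's actual entry point:

import os
import torch
import torch.distributed as dist

def init_distributed(backend="nccl"):
    # Hypothetical helper: launchers usually inject rank/world size via env vars,
    # plus MASTER_ADDR/MASTER_PORT (cf. node0_addr/node0_port in the yaml below).
    rank = int(os.environ.get("RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    dist.init_process_group(backend=backend, rank=rank, world_size=world_size)
    torch.cuda.set_device(rank % torch.cuda.device_count())
    return rank, world_size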

Dependencies

We recommend running experiments with:

  • python 3.6.3
  • pytorch 1.1.0, torchvision 0.2.1

All dependencies are listed in requirements.txt; install them with pip install -r requirements.txt.

Running

  1. Create the directory for your experiment.
cd /path/to/this/repo
mkdir -p exp/aws_search1
  2. Copy the configurations into your workspace.
cp scripts/search.sh configs/aws.yaml exp/aws_search1
cd exp/aws_search1
  3. Start searching.
# sh ./search.sh <job_name> <num_gpus>
sh ./search.sh Test 8

An example yaml configuration:

version: 0.1.0

dist:
    type: torch
    kwargs:
        node0_addr: auto
        node0_port: auto
        mp_start_method: fork   # fork or spawn; spawn would be too slow for DataLoader

pipeline:
    type: aws
    common_kwargs:
        dist_training: &dist_training False
#        job_name:         [will be assigned at runtime]
#        exp_root:         [will be assigned at runtime]
#        meta_tb_lg_root:  [will be assigned at runtime]

        data:
            type: cifar100               # case-insensitive (will be converted to lower case at runtime)
#            dataset_root: /path/to/dataset/root   # default: ~/datasets/[type]
            train_set_size: 40000
            val_set_size: 10000
            batch_size: 256
            dist_training: *dist_training
            num_workers: 3
            cutout: True
            cutlen: 16

        model_grad_clip: 3.0
        model:
            type: WRN
            kwargs:
#                num_classes: [will be assigned at runtime]
                bn_mom: 0.5

        agent:
            type: ppo           # ppo or REINFORCE
            kwargs:
                initial_baseline_ratio: 0
                baseline_mom: 0.9
                clip_epsilon: 0.2
                max_training_times: 5
                early_stopping_kl: 0.002
                entropy_bonus: 0
                op_cfg:
                    type: Adam         # any type in torch.optim
                    kwargs:
#                        lr: [will be assigned at runtime] (=sc.kwargs.base_lr)
                        betas: !!python/tuple [0.5, 0.999]
                        weight_decay: 0
                sc_cfg:
                    type: Constant
                    kwargs:
                        base_lr_divisor: 8      # base_lr = warmup_lr / base_lr_divisor
                        warmup_lr: 0.1          # lr at the end of warming up
                        warmup_iters: 10      # number of warmup iterations
                        iters: &finetune_lp 350
        
        criterion:
            type: LSCE
            kwargs:
                smooth_ratio: 0.05


    special_kwargs:
        pretrained_ckpt_path: ~ # /path/to/pretrained_ckpt.pth.tar
        pretrain_ep: &pretrain_ep 200
        pretrain_op: &sgd
            type: SGD       # any type in torch.optim
            kwargs:
#                lr: [will be assigned at runtime] (=sc.kwargs.base_lr)
                nesterov: True
                momentum: 0.9
                weight_decay: 0.0001
        pretrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.2          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *pretrain_ep
                min_lr: &finetune_lr 0.001

        finetuned_ckpt_path: ~  # /path/to/finetuned_ckpt.pth.tar
        finetune_lp: *finetune_lp
        finetune_ep: &finetune_ep 10
        rewarded_ep: 2
        finetune_op: *sgd
        finetune_sc:
            type: Constant
            kwargs:
                base_lr: *finetune_lr
                warmup_lr: *finetune_lr
                warmup_iters: 0
                epochs: *finetune_ep

        retrain_ep: &retrain_ep 300
        retrain_op: *sgd
        retrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.4          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *retrain_ep
                min_lr: 0
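
Note that this config uses the PyYAML-specific !!python/tuple tag, so yaml.safe_load will reject it. A minimal loading sketch (the file name is just the example above, not a path shipped with the repo):

import yaml

with open("aws.yaml") as f:
    # FullLoader constructs !!python/tuple; SafeLoader raises a ConstructorError.
    cfg = yaml.load(f, Loader=yaml.FullLoader)

betas = cfg["pipeline"]["common_kwargs"]["agent"]["kwargs"]["op_cfg"]["kwargs"]["betas"]
assert betas == (0.5, 0.999)

As the inline comments indicate, some scheduler values are derived rather than set directly: in pretrain_sc above, base_lr = warmup_lr / base_lr_divisor = 0.2 / 4 = 0.05, and warmup_epochs = epochs / warmup_divisor = 200 / 200 = 1.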
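
The criterion type LSCE presumably denotes label-smoothing cross-entropy, with smooth_ratio as the smoothing factor. Below is a sketch of what such a criterion conventionally looks like in PyTorch; the class name is hypothetical, and this need not match the repo's implementation:

import torch.nn as nn
import torch.nn.functional as F

class LabelSmoothCE(nn.Module):
    # Hypothetical stand-in for the `LSCE` criterion above.
    def __init__(self, smooth_ratio=0.05):
        super().__init__()
        self.eps = smooth_ratio

    def forward(self, logits, target):
        log_probs = F.log_softmax(logits, dim=-1)
        nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # true-class NLL
        uniform = -log_probs.mean(dim=-1)  # uniform-smoothing component
        return ((1.0 - self.eps) * nll + self.eps * uniform).mean()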

Citation

If you use this code in your research, please consider citing our papers (OHL and AWS).

@inproceedings{lin2019online,
  title={Online Hyper-parameter Learning for Auto-Augmentation Strategy},
  author={Lin, Chen and Guo, Minghao and Li, Chuming and Yuan, Xin and Wu, Wei and Yan, Junjie and Lin, Dahua and Ouyang, Wanli},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={6579--6588},
  year={2019}
}

@article{tian2020improving,
  title={Improving Auto-Augment via Augmentation-Wise Weight Sharing},
  author={Tian, Keyu and Lin, Chen and Sun, Ming and Zhou, Luping and Yan, Junjie and Ouyang, Wanli},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contact for Issues

References & Opensources
