An example of implementing a new backbone with the OpenMMLab framework.

Overview

Backbone example for the OpenMMLab framework

English | 简体中文

Introduction

This is a template repo showing how to use the OpenMMLab framework to develop a new backbone for multiple vision tasks.

With the OpenMMLab framework, you can easily develop a new backbone and use MMClassification, MMDetection and MMSegmentation to benchmark it on classification, detection and segmentation tasks.

Setup environment

It requires PyTorch and the following OpenMMLab packages:

  • MIM: A command-line tool to manage OpenMMLab packages and experiments.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark. Besides classification, it's also a repository to store various backbones.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.

Assuming you have prepared your Python and PyTorch environment, just use the following commands to set up the environment.

pip install openmim mmcls mmdet mmsegmentation
mim install mmcv-full
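
You can verify the installation afterwards; mim list prints the installed OpenMMLab packages and their versions.

# optional: check that mmcls, mmdet, mmsegmentation and mmcv-full are installed
mim list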

Data preparation

The data structure looks like below:

data/
├── imagenet
│   ├── train
│   ├── val
│   └── meta
│       ├── train.txt
│       └── val.txt
├── ade
│   └── ADEChallengeData2016
│       ├── annotations
│       └── images
└── coco
    ├── annotations
    │   ├── instances_train2017.json
    │   └── instances_val2017.json
    ├── train2017
    └── val2017

Here, we only list the minimal files for training and validation on ImageNet (classification), ADE20K (segmentation) and COCO (object detection).
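
The ImageNet meta files follow the standard mmcls annotation format: each line pairs an image path (relative to the split folder) with a class index. A sketch of meta/train.txt with illustrative file names:

n01440764/n01440764_10026.JPEG 0
n01440764/n01440764_10027.JPEG 0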

If you want to benchmark on more datasets or tasks, for example, panoptic segmentation with MMDetection, just organize your dataset according to MMDetection's requirements. For the semantic segmentation task, you can organize your dataset according to this tutorial.

Usage

Implement your backbone

In this example repository, we use ConvNeXt as an example to show how to implement a backbone quickly.

  1. Create your backbone file and put it in the models folder. In this example, models/convnext.py.

    In this file, just implement your backbone in PyTorch, with two modifications (see the sketch after this list):

    1. The backbone and its modules should inherit mmcv.runner.BaseModule. BaseModule is almost the same as torch.nn.Module, but additionally supports using init_cfg to specify the initialization method, including loading a pre-trained model.

    2. Use the one-line decorator below to register the backbone class to the mmcls.models.BACKBONES registry.

      @BACKBONES.register_module(force=True)

      What is a registry? Have a look here!

  2. [Optional] If you want to add some extra components for a specific task, you can do so by referring to models/det/layer_decay_optimizer_constructor.py.

  3. Add your backbone class and custom components to models/__init__.py.
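
For reference, here is a minimal sketch of such a backbone file. ToyNet and its layers are hypothetical and only illustrate the two modifications above; see models/convnext.py for the real implementation.

# models/toy_net.py — a minimal sketch; ToyNet and its layers are hypothetical.
import torch.nn as nn
from mmcv.runner import BaseModule
from mmcls.models.builder import BACKBONES

@BACKBONES.register_module(force=True)  # modification 2: register the class
class ToyNet(BaseModule):               # modification 1: inherit BaseModule
    """A toy backbone; init_cfg can specify initialization or pre-trained weights."""

    def __init__(self, in_channels=3, out_indices=(3,), init_cfg=None):
        super().__init__(init_cfg=init_cfg)
        self.out_indices = out_indices
        channels = [32, 64, 128, 256]
        self.stages = nn.ModuleList()
        for i, out_ch in enumerate(channels):
            in_ch = in_channels if i == 0 else channels[i - 1]
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))

    def forward(self, x):
        # Backbones return a tuple of feature maps, one per index in out_indices.
        outs = []
        for i, stage in enumerate(self.stages):
            x = stage(x)
            if i in self.out_indices:
                outs.append(x)
        return tuple(outs)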

Create config files

Add your config files for each task to configs/. If you are not familiar with config files, the tutorial can help you.

In short, compose your config files from base config files for the model, dataset, schedule and runtime. You can also override some settings of the base configs in your own config file, or even write all settings in a single file.

In this template, we provide a suite of popular base config files; you can also find more useful base configs from mmcls, mmdet and mmseg.
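
For instance, a classification config could look like the following sketch. The base file names and the override are assumptions for illustration; check configs/ and the mmcls base configs for the actual names.

# configs/convnext/toy_config.py — a sketch; base file names are assumptions.
_base_ = [
    '../_base_/models/convnext_tiny.py',             # model definition (assumed name)
    '../_base_/datasets/imagenet_bs64.py',           # dataset pipeline (assumed name)
    '../_base_/schedules/imagenet_bs1024_adamw.py',  # schedule (assumed name)
    '../_base_/default_runtime.py',
]

# Override individual settings of the base configs here, e.g.:
model = dict(backbone=dict(drop_path_rate=0.1))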

Training and testing

For training and testing, you can directly use mim to train and test the model.

At first, you need to add the current folder to the PYTHONPATH, so that Python can find your model files.

export PYTHONPATH=`pwd`:$PYTHONPATH 

On local single GPU:

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)"

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
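
For example, assuming the classification config in this repo is named configs/convnext/convnext-tiny_8xb128_in1k.py (check configs/ for the actual file name), a local training run looks like:

mim train mmcls configs/convnext/convnext-tiny_8xb128_in1k.py --work-dir work_dirs/convnext-tiny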

On multiple GPUs (4 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher pytorch --gpus 4

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher pytorch --gpus 4

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4 

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher pytorch --gpus 4
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs in multiple nodes with Slurm (total 16 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
  • PARTITION: the slurm partition you are using