Backbone example on OpenMMLab framework

An example of implementing a new backbone with the OpenMMLab framework.

English | 简体中文

Introduction

This is a template repository showing how to use the OpenMMLab framework to develop a new backbone for multiple vision tasks.

With the OpenMMLab framework, you can easily develop a new backbone and use MMClassification, MMDetection and MMSegmentation to benchmark it on classification, detection and segmentation tasks.

Setup environment

It requires PyTorch and the following OpenMMLab packages:

  • MIM: A command-line tool to manage OpenMMLab packages and experiments.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark. Besides classification, it's also a repository to store various backbones.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.

Assuming you have prepared your Python and PyTorch environment, use the following commands to set up the environment.

pip install openmim mmcls mmdet mmsegmentation
mim install mmcv-full
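
To verify the setup, you can list the installed OpenMMLab packages with mim (the exact output depends on your versions):

mim list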

Data preparation

The data structure looks like below:

data/
├── imagenet
│   ├── train
│   ├── val
│   └── meta
│       ├── train.txt
│       └── val.txt
├── ade
│   └── ADEChallengeData2016
│       ├── annotations
│       └── images
└── coco
    ├── annotations
    │   ├── instances_train2017.json
    │   └── instances_val2017.json
    ├── train2017
    └── val2017

Here, we only list the minimal files for training and validation on ImageNet (classification), ADE20K (segmentation) and COCO (object detection).

If you want to benchmark on more datasets or tasks, for example, panoptic segmentation with MMDetection, just organize your dataset according to MMDetection's requirements. For the semantic segmentation task, you can organize your dataset according to this tutorial.
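
If the datasets already live somewhere else on disk, one simple way to obtain the layout above is to symlink them into data/ (the /path/to/... placeholders below stand for your actual dataset locations):

mkdir -p data
ln -s /path/to/imagenet data/imagenet
ln -s /path/to/ade data/ade    # should contain ADEChallengeData2016/
ln -s /path/to/coco data/coco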

Usage

Implement your backbone

In this example repository, we use ConvNeXt as an example to show how to implement a backbone quickly.

  1. Create your backbone file and put it in the models folder. In this example, models/convnext.py.

    In this file, implement your backbone in PyTorch, with two modifications:

    1. The backbone and its modules should inherit from mmcv.runner.BaseModule. BaseModule is almost the same as torch.nn.Module, but additionally supports using init_cfg to specify the initialization method, including loading a pre-trained model.

    2. Use a one-line decorator, as below, to register the backbone class in the mmcls.models.BACKBONES registry.

      @BACKBONES.register_module(force=True)

      What is a registry? Have a look here! A minimal sketch combining both modifications is shown after this list.

  2. [Optional] If you want to add extra components for a specific task, you can add them by referring to models/det/layer_decay_optimizer_constructor.py.

  3. Add your backbone class and custom components to models/__init__.py.
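
To make the two modifications concrete, below is a minimal sketch of a registered backbone. The class name, channel widths and layer layout are illustrative placeholders, not the actual ConvNeXt implementation from models/convnext.py:

import torch.nn as nn
from mmcv.runner import BaseModule
from mmcls.models import BACKBONES

@BACKBONES.register_module(force=True)
class ToyBackbone(BaseModule):
    """A toy backbone that returns a tuple of feature maps."""

    def __init__(self, in_channels=3, base_channels=64, init_cfg=None):
        # Passing init_cfg to BaseModule enables config-driven weight
        # initialization, including loading a pre-trained checkpoint.
        super().__init__(init_cfg=init_cfg)
        self.stem = nn.Conv2d(in_channels, base_channels, 3, stride=2, padding=1)
        self.stage = nn.Sequential(
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Downstream necks and heads expect a tuple of feature maps,
        # one per output stage, so wrap the single output in a tuple.
        x = self.stem(x)
        return (self.stage(x),)

Here, force=True lets the registration overwrite an existing entry with the same name, which is convenient while you iterate on the implementation.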

Create config files

Add your config files for each task to configs/. If you are not familiar with config files, the tutorial can help you.

In short, use base config files for the model, dataset, schedule and runtime to compose your config files. You can also override some settings of the base configs in your own config files, or even write all settings in a single file.

In this template, we provide a suite of popular base config files; you can also find more useful base configs in mmcls, mmdet and mmseg.
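
As an illustration, a classification config for the toy backbone sketched above might look like the following. The base file paths and head settings are assumptions; adapt them to the base configs actually provided under configs/:

_base_ = [
    '../_base_/datasets/imagenet_bs64.py',     # hypothetical base files
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

model = dict(
    type='ImageClassifier',
    backbone=dict(type='ToyBackbone', base_channels=64),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=128,  # base_channels * 2, the width of the last stage
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    ),
)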

Training and testing

For training and testing, you can directly use mim to train and test the models.

First, you need to add the current folder to the PYTHONPATH, so that Python can find your model files.

export PYTHONPATH=`pwd`:$PYTHONPATH 
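
Alternatively, you can have the config itself import your model files, so registration happens whenever the config is loaded. This uses MMCV's custom_imports mechanism and assumes your modules live in the models package:

custom_imports = dict(imports=['models'], allow_failed_imports=False)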

On local single GPU:

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)"

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
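
For example, with a hypothetical config file configs/convnext/convnext-tiny_in1k.py, a classification training run would be:

mim train mmcls configs/convnext/convnext-tiny_in1k.py --work-dir work_dirs/convnext-tiny_in1k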

On multiple GPUs (4 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher pytorch --gpus 4

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher pytorch --gpus 4

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4 

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher pytorch --gpus 4
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs across multiple nodes with Slurm (16 GPUs in total here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
  • PARTITION: the slurm partition you are using