MegEngine implementation of YOLOX

Overview

Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on arXiv.

This repo is the MegEngine implementation of YOLOX; a PyTorch implementation is also available.

Updates!!

  • 【2021/08/05】 We released the MegEngine version of YOLOX.

Coming soon

  • Faster YOLOX training speed.
  • More MegEngine models.
  • AMP training with MegEngine.

Benchmark

Light Models.

Model        size   mAP (val, 0.5:0.95)   Params (M)   FLOPs (G)   weights
YOLOX-Tiny   416    32.2                  5.06         6.45        github

Standard Models.

Coming soon!

Quick Start

Installation

Step1. Install YOLOX.

git clone git@github.com:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or  python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
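
As a quick sanity check after installation, the following rough sketch should run without errors; it assumes the repo installs an importable package named yolox and that MegEngine is pulled in via requirements.txt.

# Hypothetical post-install check: both imports succeed if Step1 and Step2 worked.
import megengine as mge
import yolox

print("MegEngine version:", mge.__version__)
print("YOLOX imported from:", yolox.__file__)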
Demo

Step1. Download a pretrained model from the benchmark table.

Step2. Use either -n or -f to specify your detector's config. For example:

python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

or

python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

Demo for video:

python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
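
For scripted use, loading a checkpoint into a model programmatically looks roughly like the sketch below. It assumes this port mirrors the PyTorch repo's yolox.exp.get_exp API; treat the names as assumptions rather than a documented interface.

import megengine as mge
from yolox.exp import get_exp  # assumed to mirror the PyTorch YOLOX API

# Build the yolox-tiny model from its exp description (assumed helper).
exp = get_exp(exp_file=None, exp_name="yolox-tiny")
model = exp.get_model()
model.eval()

# Checkpoints are pickled dicts whose weights sit under the "model" key
# (see the resume_train traceback quoted in the comments below).
ckpt = mge.load("/path/to/your/yolox_tiny.pkl", map_location="cpu")
model.load_state_dict(ckpt["model"])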
Reproduce our results on COCO

Step1. Prepare COCO dataset

cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO

Step2. Reproduce our results on COCO by specifying -n:

python tools/train.py -n yolox-tiny -d 8 -b 128
  • -d: number of gpu devices
  • -b: total batch size, the recommended number for -b is num-gpu * 8

When using -f, the above commands are equivalent to:

python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
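
An exp file is just a small Python class describing the model and training setup. As a hypothetical example of what a custom exp in the spirit of exps/default/yolox_tiny.py might contain (the field names follow the PyTorch YOLOX Exp class and are assumptions here):

import os
from yolox.exp import Exp as MyExp  # assumed base class, as in the PyTorch repo

class Exp(MyExp):
    def __init__(self):
        super().__init__()
        # Tiny-sized model: shrink the depth/width multipliers and the input size.
        self.depth = 0.33
        self.width = 0.375
        self.input_size = (416, 416)
        self.test_size = (416, 416)
        # Name the experiment after this file.
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]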
Evaluation

We support batch testing for fast evaluation:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
  • --fuse: fuse conv and bn (see the sketch after this list)
  • -d: number of GPUs used for evaluation. DEFAULT: all available GPUs will be used.
  • -b: total batch size across all GPUs
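
For reference, conv+BN fusion folds the BatchNorm affine transform into the preceding convolution, so inference runs a single conv. A minimal NumPy sketch of the arithmetic (not the repo's actual implementation):

import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    # w: conv weight (out_c, in_c, kh, kw); b: conv bias (out_c,) or zeros.
    # gamma/beta/mean/var: BatchNorm parameters and running statistics (out_c,).
    scale = gamma / np.sqrt(var + eps)
    w_fused = w * scale[:, None, None, None]  # rescale each output channel
    b_fused = (b - mean) * scale + beta       # fold the BN shift into the bias
    return w_fused, b_fused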

To reproduce the speed test, we use the following command:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
Tutorials

MegEngine Deployment

MegEngine in C++

Dump mge file

NOTE: the resulting model is dumped with optimize_for_inference and enable_fuse_conv_bias_nonlinearity.

python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
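
Under the hood, dumping to a .mge file is typically done with MegEngine's jit.trace and its dump method. The following is a rough, self-contained sketch of that flow (tools/export_mge.py may differ in details; get_exp is assumed to mirror the PyTorch repo's API):

import numpy as np
import megengine as mge
from megengine.jit import trace
from yolox.exp import get_exp  # assumed helper

exp = get_exp(exp_file=None, exp_name="yolox-tiny")
model = exp.get_model()
model.load_state_dict(mge.load("yolox_tiny.pkl", map_location="cpu")["model"])
model.eval()

# Trace one symbolic forward pass so the whole graph can be serialized.
@trace(symbolic=True, capture_as_const=True)
def infer(data):
    return model(data)

infer(mge.tensor(np.random.random((1, 3, 416, 416)).astype(np.float32)))

# Dump with the inference optimizations mentioned in the NOTE above.
infer.dump(
    "yolox_tiny.mge",
    arg_names=["data"],
    optimize_for_inference=True,
    enable_fuse_conv_bias_nonlinearity=True,
)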

Benchmark

  • Model Info: yolox-s @ input(1,3,640,640)

  • Testing Devices

    • x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    • AArch64 -- Xiaomi Mi 9 phone
    • CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

[email protected] +fastrun +weight_preprocess (msec)   1 thread   2 threads   4 threads   8 threads
x86_64(fp32)                                       516.245    318.29      253.273     222.534
x86_64(fp32+chw88)                                 362.020    NONE        NONE        NONE
aarch64(fp32+chw44)                                555.877    351.371     242.044     NONE
aarch64(fp16+chw)                                  439.606    327.356     255.531     NONE

CUDA @ CUDA (msec)    1 batch   2 batch   4 batch   8 batch   16 batch   32 batch   64 batch
megengine(fp32+chw)   8.137     13.2893   23.6633   44.470    86.491     168.95     334.248

Third-party resources

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

 @article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
Comments
  • Why can't yolox_tiny load the pretrained model correctly?

    When I used this repo on MegStudio and tried to train yolox_tiny with the pretrained model, an error occurred. The detailed log is as follows.

    2021-09-15 13:11:11 | INFO | yolox.core.trainer:247 - loading checkpoint for fine tuning
    2021-09-15 13:11:11 | ERROR | __main__:93 - An error has been caught in function '<module>', process 'MainProcess' (359), thread 'MainThread' (139974572922688):

    Traceback (most recent call last):
      File "tools/train.py", line 93, in <module>
        main(exp, args)
      File "tools/train.py", line 73, in main
        trainer.train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 46, in train
        self.before_train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 107, in before_train
        model = self.resume_train(model)
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 249, in resume_train
        ckpt = mge.load(ckpt_file, map_location="cpu")["model"]

    KeyError: 'model'
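
    For reference, a hypothetical snippet to inspect what the pickle actually contains before fine-tuning (resume_train expects a dict with a "model" key, as the traceback shows):

    import megengine as mge

    ckpt = mge.load("yolox_tiny.pkl", map_location="cpu")
    # If this prints a raw state dict instead of {"model": ...}, wrap it as
    # {"model": ckpt} and save it again before resuming training.
    print(type(ckpt), list(ckpt.keys()) if isinstance(ckpt, dict) else "not a dict")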

    opened by qunyuanchen 4
  • AssertionError: Torch not compiled with CUDA enabled

     python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device gpu
    2021-09-07 18:45:49.600 | INFO     | __main__:main:250 - Args: Namespace(camid=0, ckpt='/path/to/your/yolox_tiny.pkl', conf=0.25, demo='image', device='gpu', exp_file=None, experiment_name='yolox_tiny', fp16=False, fuse=False, legacy=False, name='yolox-tiny', nms=0.45, path='assets/dog.jpg', save_result=True, trt=False, tsize=416)
    E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  ..\c10/core/TensorImpl.h:1156.)
      return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    2021-09-07 18:45:49.791 | INFO     | __main__:main:260 - Model Summary: Params: 5.06M, Gflops: 6.45
    Traceback (most recent call last):
      File "tools/demo.py", line 306, in <module>
        main(exp, args)
      File "tools/demo.py", line 263, in main
        model.cuda()
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      [Previous line repeated 2 more times]
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 552, in _apply
        param_applied = fn(param)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    
    

    Environment: CUDA Version 11.2, which is fine.

    Following the official tutorial, I got the error above.

    opened by monkeycc 4
  • Shouldn't it be Xiaomi instead of "xiamo" in the Benchmark -- Testing Devices section?

    Testing Devices

    x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz AArch64 -- xiamo phone mi9 CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

    Shouldn't it be Xiaomi phone mi9?

    opened by Matt-Kou 2
  • fix bugs

    1. img_info for VOC dataset is wrong.
    2. grid for yolo_head is wrong (similar to https://github.com/MegEngine/YOLOX/issues/9). If the image height and width are equal, it is fine, but when height != width the grid is wrong.
    opened by LZHgrla 2
  • RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()'

    The error occurs when the input width and height differ; it is raised at an unpredictable point during training:

    yolo_head.py", line 351, in get_assignments
        bboxes_preds_per_image = bboxes_preds_per_image[fg_mask]
    RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()' failed at ../../../../../../dnn/src/common/elemwise/opr_impl.cpp:281: void megdnn::ElemwiseForward::check_layout_and_broadcast(const TensorLayoutPtrArray&, const megdnn::TensorLayout&)

    opened by amazingzby 1
Releases(0.0.1)
Owner
旷视天元 MegEngine