MegEngine implementation of YOLOX

Overview

Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities. For more details, please refer to our report on arXiv.

This repo is the MegEngine implementation of YOLOX; a PyTorch implementation is also available.

Updates!!

  • 【2021/08/05】 We release the MegEngine version of YOLOX.

Coming soon

  • Faster YOLOX training speed.
  • More models in the MegEngine version.
  • AMP training with MegEngine.

Benchmark

Light Models.

Model      | size | mAP (val, 0.5:0.95) | Params (M) | FLOPs (G) | weights
YOLOX-Tiny | 416  | 32.2                | 5.06       | 6.45      | github

Standard Models.

Coming soon!

Quick Start

Installation

Step1. Install YOLOX.

git clone [email protected]:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or  python3 setup.py develop

Step2. Install pycocotools.

pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
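
To verify the installation, a quick sanity check (a hypothetical snippet, not part of the repo; the CUDA query only matters on GPU machines):

# Both imports should succeed after `pip3 install -v -e .`
import megengine as mge
import yolox

print("MegEngine:", mge.__version__)
print("CUDA available:", mge.is_cuda_available())
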
Demo

Step1. Download a pretrained model from the benchmark table.

Step2. Use either -n or -f to specify your detector's config. For example:

python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

or

python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]

Demo for video:

python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
Reproduce our results on COCO

Step1. Prepare COCO dataset

cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO

Step2. Reproduce our results on COCO by specifying -n:

python tools/train.py -n yolox-tiny -d 8 -b 128
  • -d: number of GPU devices
  • -b: total batch size; the recommended value is num-gpu * 8

When using -f, the above commands are equivalent to:

python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
Evaluation

We support batch testing for fast evaluation:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
  • --fuse: fuse conv and BN into a single convolution for faster inference (see the sketch after this list)
  • -d: number of GPUs used for evaluation. Default: all available GPUs are used.
  • -b: total batch size across all GPUs
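
Fusing conv and BN folds the BatchNorm statistics into the convolution's weights and bias, so inference runs one op instead of two. A minimal NumPy sketch of the arithmetic (a generic illustration, not the exact code path --fuse runs):

import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    # W: (out_ch, in_ch, kh, kw) conv weight, b: (out_ch,) conv bias
    # gamma/beta/mean/var: per-channel BatchNorm parameters and running stats
    scale = gamma / np.sqrt(var + eps)
    W_fused = W * scale.reshape(-1, 1, 1, 1)   # rescale each output channel
    b_fused = (b - mean) * scale + beta        # fold the shift into the bias
    return W_fused, b_fused
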

To reproduce the speed test, use the following command:

python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
Tutorials

MegEngine Deployment

MegEngine in C++

Dump mge file

NOTE: the resulting model is dumped with optimize_for_inference and enable_fuse_conv_bias_nonlinearity enabled.

python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
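
Under the hood, exporting relies on MegEngine's trace-and-dump workflow. The sketch below shows that general pattern (build_model and the 416x416 input are placeholders, not the exact contents of tools/export_mge.py):

import numpy as np
import megengine as mge
from megengine.jit import trace

model = build_model()  # placeholder for however the MegEngine YOLOX model is built/loaded
model.eval()

@trace(symbolic=True, capture_as_const=True)
def infer(data):
    return model(data)

# Run once on a dummy input so the static graph is captured, then dump it
# with the same inference optimizations noted above.
data = mge.Tensor(np.random.random((1, 3, 416, 416)).astype(np.float32))
infer(data)
infer.dump(
    "yolox_tiny.mge",
    arg_names=["data"],
    optimize_for_inference=True,
    enable_fuse_conv_bias_nonlinearity=True,
)
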

Benchmark

  • Model Info: yolox-s @ input(1,3,640,640)

  • Testing Devices

    • x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    • AArch64 -- Xiaomi Mi 9 phone
    • CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
[email protected] +fastrun +weight_preprocess (msec) | 1 thread | 2 thread | 4 thread | 8 thread
x86_64(fp32)        | 516.245 | 318.29  | 253.273 | 222.534
x86_64(fp32+chw88)  | 362.020 | NONE    | NONE    | NONE
aarch64(fp32+chw44) | 555.877 | 351.371 | 242.044 | NONE
aarch64(fp16+chw)   | 439.606 | 327.356 | 255.531 | NONE

CUDA @ CUDA (msec)  | 1 batch | 2 batch | 4 batch | 8 batch | 16 batch | 32 batch | 64 batch
megengine(fp32+chw) | 8.137   | 13.2893 | 23.6633 | 44.470  | 86.491   | 168.95   | 334.248

Third-party resources

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

 @article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
Comments
  • Why can't yolox_tiny load the pretrained model correctly?

    When I used this repo on MegStudio and tried to train yolox_tiny with the pretrained model, an error occurred. The detailed log is as follows.

    2021-09-15 13:11:11 | INFO | yolox.core.trainer:247 - loading checkpoint for fine tuning
    2021-09-15 13:11:11 | ERROR | __main__:93 - An error has been caught in function '<module>', process 'MainProcess' (359), thread 'MainThread' (139974572922688):
    Traceback (most recent call last):
      File "tools/train.py", line 93, in <module>
        main(exp, args)
      File "tools/train.py", line 73, in main
        trainer.train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 46, in train
        self.before_train()
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 107, in before_train
        model = self.resume_train(model)
      File "/home/megstudio/workspace/YOLOX/yolox/core/trainer.py", line 249, in resume_train
        ckpt = mge.load(ckpt_file, map_location="cpu")["model"]
    KeyError: 'model'
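
    The KeyError means the loaded file has no "model" entry. A hypothetical way to inspect the checkpoint and, if it is a bare state dict, re-wrap it into the layout resume_train() expects (a guessed workaround, not a confirmed fix from this thread):

    import megengine as mge

    ckpt = mge.load("yolox_tiny.pkl", map_location="cpu")
    print(type(ckpt), list(ckpt)[:5] if isinstance(ckpt, dict) else ckpt)
    # If the file stores a raw state dict instead of {"model": ...},
    # wrap it so mge.load(...)["model"] in resume_train() succeeds.
    if isinstance(ckpt, dict) and "model" not in ckpt:
        mge.save({"model": ckpt}, "yolox_tiny_wrapped.pkl")
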

    opened by qunyuanchen 4
  • AssertionError: Torch not compiled with CUDA enabled


     python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device gpu
    2021-09-07 18:45:49.600 | INFO     | __main__:main:250 - Args: Namespace(camid=0, ckpt='/path/to/your/yolox_tiny.pkl', conf=0.25, demo='image', device='gpu', exp_file=None, experiment_name='yolox_tiny', fp16=False, fuse=False, legacy=False, name='yolox-tiny', nms=0.45, path='assets/dog.jpg', save_result=True, trt=False, tsize=416)
    E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  ..\c10/core/TensorImpl.h:1156.)
      return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
    2021-09-07 18:45:49.791 | INFO     | __main__:main:260 - Model Summary: Params: 5.06M, Gflops: 6.45
    Traceback (most recent call last):
      File "tools/demo.py", line 306, in <module>
        main(exp, args)
      File "tools/demo.py", line 263, in main
        model.cuda()
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 530, in _apply
        module._apply(fn)
      [Previous line repeated 2 more times]
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 552, in _apply
        param_applied = fn(param)
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\nn\modules\module.py", line 637, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "E:\anaconda3\envs\YOLOX\lib\site-packages\torch\cuda\__init__.py", line 166, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    
    

    Environment: CUDA Version 11.2, which is fine.

    I followed the official tutorial and got this error.

    opened by monkeycc 4
  • Shouldn't it be Xiaomi instead of "xiamo" in the Benchmark -- Testing Devices section?

    Testing Devices

    x86_64 -- Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
    AArch64 -- xiamo phone mi9
    CUDA -- 1080TI @ cuda-10.1-cudnn-v7.6.3-TensorRT-6.0.1.5.sh @ Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

    Shouldn't it be Xiaomi phone mi9?

    opened by Matt-Kou 2
  • fix bugs

    1. img_info for the VOC dataset is wrong.
    2. The grid for yolo_head is wrong (similar to https://github.com/MegEngine/YOLOX/issues/9). If the image has the same height and width it is fine, but when height != width it breaks; see the sketch below.
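
    A minimal NumPy sketch of the height/width issue (a hypothetical illustration, not the PR's code): the grid's x offsets must span the feature-map width and the y offsets its height, so mixing them up is only harmless when hsize == wsize.

    import numpy as np

    def make_grid(hsize, wsize):
        # y varies over rows (hsize), x over columns (wsize)
        ys, xs = np.meshgrid(np.arange(hsize), np.arange(wsize), indexing="ij")
        return np.stack((xs, ys), axis=-1).reshape(1, hsize * wsize, 2)
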
    opened by LZHgrla 2
  • RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()'

    The error occurs when the input width and height differ; it happens during training at a seemingly random point:

    yolo_head.py", line 351, in get_assignments
        bboxes_preds_per_image = bboxes_preds_per_image[fg_mask]
    RuntimeError: assertion `dtype == dst.dtype && dst.is_contiguous()' failed at ../../../../../../dnn/src/common/elemwise/opr_impl.cpp:281: void megdnn::ElemwiseForward::check_layout_and_broadcast(const TensorLayoutPtrArray&, const megdnn::TensorLayout&)

    opened by amazingzby 1
Releases (0.0.1)
Owner
旷视天元 MegEngine