CV backbones including GhostNet, TinyNet and TNT, developed by Huawei Noah's Ark Lab.

Overview

CV Backbones

This repository contains CV backbones including GhostNet, TinyNet, and TNT (Transformer in Transformer), developed by Huawei Noah's Ark Lab.

News

2022/01/05 PyramidTNT: An improved TNT baseline is released.

2021/09/28 The paper of TNT (Transformer in Transformer) is accepted by NeurIPS 2021.

2021/09/18 The extended version of Versatile Filters is accepted by T-PAMI.

2021/08/30 The GhostNet paper is selected as one of the Most Influential CVPR 2020 Papers.

2021/08/26 The code of LegoNet and Versatile Filters has been merged into this repo.

2021/06/15 The code of TNT (Transformer in Transformer) has been released in this repo.

2020/10/31 GhostNet+TinyNet achieves better performance. See details in our NeurIPS 2020 paper: arXiv.

2020/06/10 GhostNet is included in PyTorch Hub.


GhostNet Code

This repo provides GhostNet pretrained models and inference code for TensorFlow and PyTorch:
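
For quick inference, GhostNet can also be loaded from PyTorch Hub (see the news above). A minimal sketch, assuming the hub entry point huawei-noah/ghostnet and the model name ghostnet_1x (verify both on the PyTorch Hub page):

    # Minimal GhostNet inference sketch via PyTorch Hub.
    # Assumed names: hub entry 'huawei-noah/ghostnet', model 'ghostnet_1x'.
    import torch

    model = torch.hub.load('huawei-noah/ghostnet', 'ghostnet_1x', pretrained=True)
    model.eval()

    x = torch.randn(1, 3, 224, 224)  # stand-in for a normalized 224x224 image
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([1, 1000]) -> ImageNet classes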

For training, please refer to tinynet or timm.
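
For reference, the core building block is the Ghost module from the CVPR 2020 paper: a primary convolution produces a few intrinsic feature maps, a cheap depthwise operation generates the remaining "ghost" maps, and the two sets are concatenated. A simplified PyTorch sketch following the paper's description (defaults are illustrative, not necessarily the repo's exact code):

    import math
    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        """Simplified Ghost module: primary conv + cheap depthwise conv,
        concatenated and trimmed to the requested output width."""
        def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1):
            super().__init__()
            self.oup = oup
            init_channels = math.ceil(oup / ratio)      # intrinsic feature maps
            new_channels = init_channels * (ratio - 1)  # "ghost" feature maps
            self.primary_conv = nn.Sequential(
                nn.Conv2d(inp, init_channels, kernel_size, stride,
                          kernel_size // 2, bias=False),
                nn.BatchNorm2d(init_channels),
                nn.ReLU(inplace=True),
            )
            # Depthwise conv: the cheap operation that generates ghost features.
            self.cheap_operation = nn.Sequential(
                nn.Conv2d(init_channels, new_channels, dw_size, 1,
                          dw_size // 2, groups=init_channels, bias=False),
                nn.BatchNorm2d(new_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            x1 = self.primary_conv(x)
            x2 = self.cheap_operation(x1)
            out = torch.cat([x1, x2], dim=1)
            return out[:, :self.oup, :, :]  # trim in case ratio doesn't divide oup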

TinyNet Code

This repo provides TinyNet pretrained models and inference code for PyTorch:
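
TinyNet variants are also registered in recent timm releases. A minimal inference sketch, assuming the model names tinynet_a through tinynet_e (check timm.list_models('tinynet*') for your installed version):

    # Minimal TinyNet inference sketch via timm.
    # Assumed model name: 'tinynet_a' (timm also registers tinynet_b..e).
    import timm
    import torch

    model = timm.create_model('tinynet_a', pretrained=True)
    model.eval()

    # TinyNet twists resolution along with depth and width, so read the
    # expected input size from the pretrained config instead of hard-coding it.
    _, h, w = model.default_cfg['input_size']
    x = torch.randn(1, 3, h, w)
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([1, 1000])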

TNT Code

This repo provides training code and pretrained models of TNT (Transformer in Transformer) for PyTorch:
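
TNT is likewise registered in timm for inference without cloning this repo; a minimal sketch, assuming the model name tnt_s_patch16_224 (check timm.list_models('tnt*')):

    # Minimal TNT inference sketch via timm.
    # Assumed model name: 'tnt_s_patch16_224' (TNT-S at 224x224 input).
    import timm
    import torch

    model = timm.create_model('tnt_s_patch16_224', pretrained=True)
    model.eval()
    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([1, 1000])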

The code of PyramidTNT is also released:

LegoNet Code

This repo provides the implementation of the paper LegoNet: Efficient Convolutional Neural Networks with Lego Filters (ICML 2019).

Versatile Filters Code

This repo provides the implementation of the paper Learning Versatile Filters for Efficient Convolutional Neural Networks (NeurIPS 2018).

Citation

@inproceedings{ghostnet,
  title={GhostNet: More Features from Cheap Operations},
  author={Han, Kai and Wang, Yunhe and Tian, Qi and Guo, Jianyuan and Xu, Chunjing and Xu, Chang},
  booktitle={CVPR},
  year={2020}
}
@inproceedings{tinynet,
  title={Model Rubik's Cube: Twisting Resolution, Depth and Width for TinyNets},
  author={Han, Kai and Wang, Yunhe and Zhang, Qiulin and Zhang, Wei and Xu, Chunjing and Zhang, Tong},
  booktitle={NeurIPS},
  year={2020}
}
@inproceedings{tnt,
  title={Transformer in transformer},
  author={Han, Kai and Xiao, An and Wu, Enhua and Guo, Jianyuan and Xu, Chunjing and Wang, Yunhe},
  booktitle={NeurIPS},
  year={2021}
}
@inproceedings{legonet,
  title={LegoNet: Efficient Convolutional Neural Networks with Lego Filters},
  author={Yang, Zhaohui and Wang, Yunhe and Liu, Chuanjian and Chen, Hanting and Xu, Chunjing and Shi, Boxin and Xu, Chao and Xu, Chang},
  booktitle={ICML},
  year={2019}
}
@inproceedings{wang2018learning,
  title={Learning versatile filters for efficient convolutional neural networks},
  author={Wang, Yunhe and Xu, Chang and Xu, Chunjing and Xu, Chao and Tao, Dacheng},
  booktitle={NeurIPS},
  year={2018}
}

Other versions of GhostNet

This repo provides the TensorFlow/PyTorch code of GhostNet. Other versions and applications can be found in the following:

  1. timm: code with pretrained model
  2. Darknet: cfg file, and description
  3. Gluon/Keras/Chainer: code
  4. Paddle: code
  5. Bolt inference framework: benchmark
  6. Human pose estimation: code
  7. YOLO with GhostNet backbone: code
  8. Face recognition: cavaface, FaceX-Zoo, TFace

Comments
  • TypeError: __init__() got an unexpected keyword argument 'bn_tf'

    Hello, what causes the following error when running train.py? Thank you. "TypeError: __init__() got an unexpected keyword argument 'bn_tf'"

    opened by ModeSky 16
  • Counting ReLU vs HardSwish FLOPs

    Thank you very much for sharing the source code. I have a question about FLOPs counting for ReLU and HardSwish: I saw in the paper that the FLOPs are reported as the same for ReLU and HardSwish. Can you explain this?

    opened by jahongir7174 10
  • kernel size in primary convolution of Ghost module

    Hi, it is said in your paper that the primary convolution in the Ghost module can have a customized kernel size, which is a major difference from existing efficient convolution schemes. However, it seems that in this code all the kernel sizes of the primary convolution in the Ghost module are set to [1, 1], and the kernels set in _CONV_DEFS_0 are only used in blocks of stride=2. Is this intentional?

    opened by YUHAN666 9
  • Replacing Conv2d with GhostModule: the loss decreases very slowly?

    I directly replaced the Conv2d in EfficientNet's MBConvBlock with a GhostModule: Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False) was replaced with GhostModule(inp, oup), with all other parameters unchanged. Why does the loss now converge more slowly than before and never come down? Do I need to change any other parameters?

    opened by yc-cui 8
  • Training hyperparams on ImageNet

    Hi, thanks for sharing such wonderful work. I'd like to reproduce your results on ImageNet; could you please specify training parameters such as the initial learning rate, its decay schedule, batch size, etc.? It would be even better if you could share tricks used to train GhostNet, such as label smoothing and data augmentation. Thanks!

    good first issue 
    opened by sean-zhuh 8
  • Why did you exclude EfficientNetB0 from Accuracy-Latency chart?

    @iamhankai Hi,

    Great work!

    1. Why did you exclude EfficientNetB0 (0.390 BFLOPs, 76.3% top-1) from the Accuracy-Latency chart?

    2. Also, what mini_batch_size did you use for training GhostNet?

    opened by AlexeyAB 8
  • VIG pretrained weights

    @huawei-noah-admin Can you please share the ViG pretrained model on Google Drive or OneDrive, as Baidu is not accessible from our end?

    Thanks in advance

    opened by abhigoku10 7
  • The implementation of Isotropic architecture

    Hi, thanks for sharing this impressive work. The paper mentions two architectures, an isotropic one and a pyramid one. I noticed that in the code there is a reduce_ratios list, which is used by an avg_pooling operation before building the graph. I am wondering whether all I need to do is set reduce_ratios to [1, 1, 1, 1] to implement the isotropic architecture. Thanks.

    self.n_blocks = sum(blocks)
    channels = opt.channels
    reduce_ratios = [4, 2, 1, 1]
    dpr = [x.item() for x in torch.linspace(0, drop_path, self.n_blocks)]
    num_knn = [int(x.item()) for x in torch.linspace(k, k, self.n_blocks)]

    opened by buptxiaofeng 6
  • Gradient overflow occurs while training tnt-ti model

    Train: 41 [   0/625 (  0%)]  Loss: 4.564162 (4.5642)  Time: 96.744s, 21.17/s (96.744s, 21.17/s)  LR: 8.284e-04  Data: 94.025 (94.025)
    Train: 41 [  50/625 (  8%)]  Loss: 4.395192 (4.4797)  Time: 2.742s, 746.96/s (7.383s, 277.38/s)  LR: 8.284e-04  Data: 0.057 (4.683)
    Train: 41 [ 100/625 ( 16%)]  Loss: 4.424296 (4.4612)  Time: 2.741s, 747.15/s (6.529s, 313.66/s)  LR: 8.284e-04  Data: 0.056 (3.831)
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
    Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0

    And the top-1 accuracy is only 0.2 after 40 epochs.

    Any tips available here, @iamhankai @yitongh?

    opened by jimmyflycv 6
  • Bloated model

    Hi, I am using the GhostNet backbone for training a YOLOv3 model in TensorFlow, but I am getting a bloated model. The checkpoint data size is approx. 68 MB, while the checkpoint given here is approx. 20 MB: https://github.com/huawei-noah/ghostnet/blob/master/tensorflow/models/ghostnet_checkpoint.data-00000-of-00001

    I am also training an EfficientNet model with YOLOv3, and that seems to work fine, without any bloated size.

    Could anyone, or the author, please confirm whether this is the correct architecture or whether anything seems off? I have attached the GhostNet architecture file generated by the code.

    Thanks. ghostnet_model_arch.txt

    opened by ghost 6
  • Replace Conv2d in my network, however it becomes slower, why?

    Above all, thanks for your great work! It really inspires me a lot! But now I have a question.

    I replaced all the Conv2d operations in my network except the final ones, and the number of model parameters indeed became much smaller. However, when testing, I found that the average forward speed drops considerably with the replacement (from 428 FPS down to 354 FPS). Is this a normal phenomenon, or is it because of the concat operation?

    opened by FunkyKoki 6
  • VIG for segmenation

    @iamhankai Thanks for open-sourcing the code base. Can you please let me know how to use PViG for segmentation-related tasks? That would be really helpful.

    Thanks in advance

    opened by abhigoku10 0
  • higher performance of ViG

    I trained ViG-S on ImageNet and got 80.54% top-1 accuracy, which is higher than the 80.4% in the paper. I wonder whether 80.4% is an average over multiple training runs; if so, how many runs did you use?

    opened by tdzdog 9