Model Zoo for AI Model Efficiency Toolkit

Overview

We provide a collection of popular neural network models and compare their floating point and quantized performance. Results demonstrate that quantized models can provide good accuracy, comparable to floating point models. Together with results, we also provide recipes for users to quantize floating-point models using the AI Model Efficiency ToolKit (AIMET).

Table of Contents

  • Introduction
  • TensorFlow Models
  • PyTorch Models
  • Examples
  • Team
  • License

Introduction

Quantized inference is significantly faster than floating-point inference, and enables models to run in a power-efficient manner on mobile and edge devices. We use AIMET, a library that includes state-of-the-art techniques for quantization, to quantize various models available in TensorFlow and PyTorch frameworks. The list of models is provided in the sections below.

Each original FP32 source model is quantized using either post-training quantization (PTQ) or quantization-aware training (QAT), both available in AIMET. Example evaluation scripts are provided for each model. Where PTQ is needed, the evaluation script performs PTQ before evaluation. Where QAT is used, the fine-tuned model checkpoint is also provided.
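
As a rough illustration of the PTQ flow the evaluation scripts follow, the sketch below simulates INT8 quantization with AIMET's PyTorch `QuantizationSimModel` and computes activation encodings before evaluation. The torchvision MobileNetV2, the `calibrate` callback, the 224x224 input size, and the number of calibration batches are illustrative assumptions, not the model zoo's actual scripts.

```python
# Minimal PTQ sketch (not the exact model-zoo script): simulate INT8
# quantization of a torchvision MobileNetV2 with aimet_torch, compute
# encodings from a calibration callback, then evaluate sim.model as usual.
import torch
from torchvision import models
from aimet_torch.quantsim import QuantizationSimModel

model = models.mobilenet_v2(pretrained=True).eval()

def calibrate(model, _):
    # A real script would run a representative subset of the dataset here;
    # random tensors are only a placeholder for this sketch.
    with torch.no_grad():
        for _ in range(10):
            model(torch.rand(1, 3, 224, 224))

sim = QuantizationSimModel(model,
                           dummy_input=torch.rand(1, 3, 224, 224),
                           default_param_bw=8,    # INT8 weights
                           default_output_bw=8)   # INT8 activations
sim.compute_encodings(forward_pass_callback=calibrate,
                      forward_pass_callback_args=None)

# sim.model now behaves like the quantized model and can be passed to the
# same evaluation loop used for the FP32 checkpoint.
```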

TensorFlow Models

Model Zoo

| Network | Model Source [1] | Floating Pt (FP32) Model [2] | Quantized Model [3] | Results [4] | Documentation |
|---|---|---|---|---|---|
| ResNet-50 (v1) | GitHub Repo | Pretrained Model | See Documentation | (ImageNet) Top-1 Accuracy: FP32 75.21%, INT8 74.96% | ResNet50.md |
| MobileNet-v2-1.4 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy: FP32 75%, INT8 74.21% | MobileNetV2.md |
| EfficientNet Lite | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy: FP32 74.93%, INT8 74.99% | EfficientNetLite.md |
| SSD MobileNet-v2 | GitHub Repo | Pretrained Model | See Example | (COCO) Mean Avg. Precision (mAP): FP32 0.2469, INT8 0.2456 | SSDMobileNetV2.md |
| RetinaNet | GitHub Repo | Pretrained Model | See Example | (COCO) mAP: FP32 0.35, INT8 0.349 (see Detailed Results) | RetinaNet.md |
| Pose Estimation | Based on Ref. | Based on Ref. | Quantized Model | (COCO) mAP: FP32 0.383, INT8 0.379; Mean Avg. Recall (mAR): FP32 0.452, INT8 0.446 | PoseEstimation.md |
| SRGAN | GitHub Repo | Pretrained Model | See Example | (BSD100) PSNR/SSIM: FP32 25.45/0.668, INT8 24.78/0.628, INT8W/INT16Act. 25.41/0.666 (see Detailed Results) | SRGAN.md |

[1] Original FP32 model source
[2] FP32 model checkpoint
[3] Quantized Model: For models quantized with post-training techniques, this refers to the FP32 model, which can then be quantized using AIMET. For models optimized with QAT, this refers to the model checkpoint with fine-tuned weights. 8-bit weights and activations are typically used. For some models, 8-bit weights and 16-bit activations (INT8W/INT16Act.) are used to further improve the performance of post-training quantization.
[4] Results comparing float and quantized performance
[5] Script for quantized evaluation using the model referenced in “Quantized Model” column

Detailed Results

RetinaNet

(COCO dataset)

| Metric | IoU | Area | maxDets | FP32 | INT8 |
|---|---|---|---|---|---|
| Average Precision | 0.50:0.95 | all | 100 | 0.350 | 0.349 |
| Average Precision | 0.50 | all | 100 | 0.537 | 0.536 |
| Average Precision | 0.75 | all | 100 | 0.374 | 0.372 |
| Average Precision | 0.50:0.95 | small | 100 | 0.191 | 0.187 |
| Average Precision | 0.50:0.95 | medium | 100 | 0.383 | 0.381 |
| Average Precision | 0.50:0.95 | large | 100 | 0.472 | 0.472 |
| Average Recall | 0.50:0.95 | all | 1 | 0.306 | 0.305 |
| Average Recall | 0.50:0.95 | all | 10 | 0.491 | 0.490 |
| Average Recall | 0.50:0.95 | all | 100 | 0.533 | 0.532 |
| Average Recall | 0.50:0.95 | small | 100 | 0.345 | 0.341 |
| Average Recall | 0.50:0.95 | medium | 100 | 0.577 | 0.577 |
| Average Recall | 0.50:0.95 | large | 100 | 0.681 | 0.679 |

SRGAN

| Model | Dataset | PSNR | SSIM |
|---|---|---|---|
| FP32 | Set5 / Set14 / BSD100 | 29.17 / 26.17 / 25.45 | 0.853 / 0.719 / 0.668 |
| INT8/ACT8 | Set5 / Set14 / BSD100 | 28.31 / 25.55 / 24.78 | 0.821 / 0.684 / 0.628 |
| INT8/ACT16 | Set5 / Set14 / BSD100 | 29.12 / 26.15 / 25.41 | 0.851 / 0.719 / 0.666 |

PyTorch Models

Model Zoo

| Network | Model Source [1] | Floating Pt (FP32) Model [2] | Quantized Model [3] | Results [4] | Documentation |
|---|---|---|---|---|---|
| MobileNetV2 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy: FP32 71.67%, INT8 71.14% | MobileNetV2.md |
| EfficientNet-lite0 | GitHub Repo | Pretrained Model | Quantized Model | (ImageNet) Top-1 Accuracy: FP32 75.42%, INT8 74.44% | EfficientNet-lite0.md |
| DeepLabV3+ | GitHub Repo | Pretrained Model | Quantized Model | (PascalVOC) mIOU: FP32 72.62%, INT8 72.22% | DeepLabV3.md |
| MobileNetV2-SSD-Lite | GitHub Repo | Pretrained Model | Quantized Model | (PascalVOC) mAP: FP32 68.7%, INT8 68.6% | MobileNetV2-SSD-lite.md |
| Pose Estimation | Based on Ref. | Based on Ref. | Quantized Model | (COCO) mAP: FP32 0.364, INT8 0.359; mAR: FP32 0.436, INT8 0.432 | PoseEstimation.md |
| SRGAN | GitHub Repo | Pretrained Model (older version from here) | See Example | (BSD100) PSNR/SSIM: FP32 25.51/0.653, INT8 25.5/0.648 (see Detailed Results) | SRGAN.md |
| DeepSpeech2 | GitHub Repo | Pretrained Model | See Example | (Librispeech Test Clean) WER: FP32 9.92%, INT8 10.22% | DeepSpeech2.md |

[1] Original FP32 model source
[2] FP32 model checkpoint
[3] Quantized Model: For models quantized with post-training techniques, this refers to the FP32 model, which can then be quantized using AIMET. For models optimized with QAT, this refers to the model checkpoint with fine-tuned weights. 8-bit weights and activations are typically used. For some models, 8-bit weights and 16-bit activations are used to further improve the performance of post-training quantization.
[4] Results comparing float and quantized performance
[5] Script for quantized evaluation using the model referenced in “Quantized Model” column
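
Footnote [3] mentions configurations that keep 8-bit weights but widen activations to 16 bits. As a rough sketch (shown with the aimet_torch API; the bitwidth values simply map to arguments of `QuantizationSimModel`, and `model` is assumed to be defined as in the earlier sketch), that configuration would look like:

```python
# Sketch only: same QuantizationSimModel call as in the earlier example,
# but with 16-bit activation (output) encodings and 8-bit weight encodings.
import torch
from aimet_torch.quantsim import QuantizationSimModel

sim_w8a16 = QuantizationSimModel(model,                  # `model` as defined earlier
                                 dummy_input=torch.rand(1, 3, 224, 224),
                                 default_param_bw=8,     # 8-bit weights
                                 default_output_bw=16)   # 16-bit activations
```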

Detailed Results

SRGAN Pytorch

| Model | Dataset | PSNR | SSIM |
|---|---|---|---|
| FP32 | Set5 / Set14 / BSD100 | 29.93 / 26.58 / 25.51 | 0.851 / 0.709 / 0.653 |
| INT8 | Set5 / Set14 / BSD100 | 29.86 / 26.59 / 25.55 | 0.845 / 0.705 / 0.648 |

Examples

Install AIMET

Before you can run the example script for a specific model, you need to install the AI Model Efficiency ToolKit (AIMET) software. Please see this Getting Started page for an overview. Then install AIMET and its dependencies using these Installation instructions.

NOTE: To obtain the exact version of AIMET software that was used to test this model zoo, please install release 1.13.0 when following the above instructions.

Running the scripts

Download the datasets and code required to run the example for the model of interest. The examples run quantized evaluation and, if necessary, apply AIMET techniques to improve quantized model performance. They generate the final accuracy results noted in the tables above. Refer to the Docs folder for TensorFlow or PyTorch to access the documentation and procedures for a specific model.
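
The example scripts are ordinary Python entry points that take a model checkpoint and a dataset location, in the spirit of the `pose_estimation_quanteval.py pe_weights.pth ./data/` invocation shown in the comments below. A hypothetical skeleton of such a script might look like the following; the argument names and the commented steps are illustrative, not the actual zoo scripts.

```python
# Hypothetical skeleton of a quantized-evaluation entry point; the argument
# layout mirrors `pose_estimation_quanteval.py <weights.pth> <dataset-dir>`
# but the names here are illustrative only.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Quantized evaluation example")
    parser.add_argument("checkpoint", help="Path to the FP32 or fine-tuned model weights")
    parser.add_argument("dataset_dir", help="Directory containing the evaluation dataset")
    parser.add_argument("--quant-scheme", default="tf_enhanced",
                        help="AIMET quantization scheme to use")
    args = parser.parse_args()

    # 1. Build the model and load `args.checkpoint`.
    # 2. Wrap it in QuantizationSimModel and compute encodings (see the sketch above).
    # 3. Run the task-specific evaluation over `args.dataset_dir` and print the metrics.

if __name__ == "__main__":
    main()
```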

Team

AIMET Model Zoo is a project maintained by Qualcomm Innovation Center, Inc.

License

Please see the LICENSE file for details.

Comments
  • Added PyTorch FFNet model, added INT4 to several models

    Added the following new model: PyTorch FFNet. Added INT4 quantization support to the following models:

    • PyTorch Classification (regnet_x_3_2gf, resnet18, resnet50)
    • PyTorch HRNet Posenet
    • PyTorch HRNet
    • PyTorch EfficientNet Lite0
    • PyTorch DeeplabV3-MobileNetV2

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr 0
  • Added TensorFlow MobileDet-EdgeTPU and PyTorch InverseForm models

    Added two new models: TensorFlow MobileDet-EdgeTPU and PyTorch InverseForm. Fixed the TF version for two models in the README file. Minor updates to the TensorFlow EfficientNet Lite-0 doc and the PyTorch ssd_mobilenetv2 script.

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr 0
  • Updated pose estimation evaluation code and documentation for updated…

    Updated pose estimation evaluation code and documentation for the updated model .pth file with weights state-dict. Fixed a model loading problem by including the model definition in pose_estimation_quanteval.py. Added Quantizer Op Assumptions to the Pose Estimation document.

    Signed-off-by: Bharath Ramaswamy [email protected]

    opened by quic-bharathr 0
  • error when running the pose estimation example

    $ python3.6 pose_estimation_quanteval.py pe_weights.pth ./data/

    2022-05-24 22:37:22,500 - root - INFO - AIMET defining network with shared weights
    Traceback (most recent call last):
      File "pose_estimation_quanteval.py", line 700, in <module>
        pose_estimation_quanteval(args)
      File "pose_estimation_quanteval.py", line 687, in pose_estimation_quanteval
        sim = quantsim.QuantizationSimModel(model, dummy_input=(1, 3, 128, 128), quant_scheme=args.quant_scheme)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/quantsim.py", line 157, in __init__
        self.connected_graph = ConnectedGraph(self.model, dummy_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 132, in __init__
        self._construct_graph(model, model_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 254, in _construct_graph
        module_tensor_shapes_map = ConnectedGraph._generate_module_tensor_shapes_lookup_table(model, model_input)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/meta/connectedgraph.py", line 244, in _generate_module_tensor_shapes_lookup_table
        run_hook_for_layers_with_given_input(model, model_input, forward_hook, leaf_node_only=False)
      File "/home/jlchen/.local/lib/python3.6/site-packages/aimet_torch/utils.py", line 277, in run_hook_for_layers_with_given_input
        _ = model(*input_tensor)
      File "/home/jlchen/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1071, in _call_impl
        result = forward_call(*input, **kwargs)
    TypeError: forward() takes 2 positional arguments but 5 were given

    opened by sundyCoder 0
  • I tried to quantize the DeepSpeech demo, but an error happened

    ImportError: /home/mi/anaconda3/envs/aimet/lib/python3.7/site-packages/aimet_common/x86_64-linux-gnu/aimet_tensor_quantizer-0.0.0-py3.7-linux-x86_64.egg/AimetTensorQuantizer.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK2at6Tensor8data_ptrIfEEPT_v

    Platform: Ubuntu 18.04; GPU: NVIDIA 2070; CUDA: 11.1; PyTorch; Python: 3.7

    opened by fmbao 0
  • Request for the MobileNet-V1-1.0 quantized (INT8) model

    Thank you for sharing these valuable models. I'd like to evaluate and look into the 'MobileNet-v1-1.0' model quantized by the DFQ. I'd appreciate it if you could provide the quantized MobileNet-v1-1.0 model either in TF or in PyTorch.

    opened by yschoi-dev 0
  • What's the runtime and AI framework for DeepSpeech2?

    For DeepSpeech2, may I know what the runtime is for its quantized (INT8) model: Hexagon DSP, NPU, or others? And what is the AI framework: SNPE, Hexagon NN, or others? Thanks~

    opened by sunfangxun 0
  • Unable to replicate DeepLabV3 PyTorch tutorial numbers

    I've been working through the DeepLabV3 PyTorch tutorial, which can be found here: https://github.com/quic/aimet-model-zoo/blob/develop/zoo_torch/Docs/DeepLabV3.md.

    However, when running the evaluation script with the optimized checkpoint, I am unable to replicate the mIOU result listed in the table. The number I got was 0.67, while the number reported by Qualcomm was 0.72. I was wondering if anyone has had this issue before and how to resolve it?

    opened by LLNLanLeN 3