Overview

GMPQ: Generalizable Mixed-Precision Quantization via Attribution Rank Preservation

This is the PyTorch implementation of the paper Generalizable Mixed-Precision Quantization via Attribution Rank Preservation, accepted to ICCV 2021. This repo covers searching the quantization policy via attribution preservation on small datasets (CIFAR-10, Cars, Flowers, Aircraft, Pets and Food) and finetuning on a large-scale dataset such as ImageNet using our proposed GMPQ.

Quick Start

Prerequisites

  • python>=3.5
  • pytorch>=1.1.0
  • torchvision>=0.3.0
  • other packages such as numpy and scikit-learn
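
If these packages are not installed yet, a minimal pip setup might look like the following sketch (the version pins simply mirror the minimums above; pick the torch build that matches your CUDA version):

# install the core dependencies (sklearn is distributed as scikit-learn)
pip install "torch>=1.1.0" "torchvision>=0.3.0" numpy scikit-learn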

Dataset

If you already have the ImageNet dataset prepared for PyTorch, you can create a symlink to it in the data folder and use it:

# prepare dataset, change the path to your own
ln -s /path/to/imagenet/ data/

If your copy of ImageNet does not yet have the validation images sorted into class folders, you can use the following script to organize them: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
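
For example, assuming the validation JPEGs are already extracted under data/val, the script is typically run from inside that directory:

# sort the ImageNet validation images into per-class folders
cd data/val
wget https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
bash valprep.sh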

For the small datasets on which we search the quantization policy, please follow their official instructions to download and prepare the data.

Searching the mixed-precision quantization policy

For a given small dataset, you should first pretrain a full-precision model to provide supervision for attribution rank consistency preservation, and save it as pretrain_model.pth.tar.
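
Whatever training code you use for this step, make sure the resulting file loads cleanly with torch.load before starting the search. A quick sanity check (assuming the common convention of storing a state_dict inside the checkpoint) is:

# inspect the pretrained full-precision checkpoint
python -c "import torch; ckpt = torch.load('pretrain_model.pth.tar', map_location='cpu'); keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []; print(type(ckpt), keys[:5])"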

After that, you can start searching the quantization policy. Take ResNet18 and CIFAR-10 for example:

CUDA_VISIBLE_DEVICES=0,1 python search_attention.py \
  -a mixres18_w2346a2346 -fa qresnet18_cifar --epochs 25 --pretrained pretrain_model.pth.tar --aw 40 \
  --dataname cifar10 --expname cifar10_resnet18 --cd 0.0003 --step-epoch 10 \
  --batch-size 256 --lr 0.1 --lra 0.01 -j 16 \
  path/to/cifar10

It also supports other network architectures such as ResNet50, as well as the other small datasets (Cars, Flowers, Aircraft, Pets and Food); see the example below.
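
For instance, a search for ResNet50 on Flowers can be launched in the same way. The identifiers below (mixres50_w2346a2346, qresnet50_flowers, flowers and path/to/flowers) follow the naming pattern of the ResNet18/CIFAR-10 example and are assumptions; check the model and dataset definitions in this repo for the exact names.

CUDA_VISIBLE_DEVICES=0,1 python search_attention.py \
  -a mixres50_w2346a2346 -fa qresnet50_flowers --epochs 25 --pretrained pretrain_model.pth.tar --aw 40 \
  --dataname flowers --expname flowers_resnet50 --cd 0.0003 --step-epoch 10 \
  --batch-size 256 --lr 0.1 --lra 0.01 -j 16 \
  path/to/flowers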

Finetuning on ImageNet

After searching, you obtain the optimal quantization policy together with the checkpoint arch_checkpoint.pth.tar. You can run the following command to finetune and evaluate the quantized model on the ImageNet dataset.


CUDA_VISIBLE_DEVICES=0,1 python main.py \
  -a qresnet18 \
  --ac arch_checkpoint.pth.tar \
  -c checkpoints/train_resnet18 \
  --data_name imagenet \
  --data path/to/imagenet \
  --epochs 100 \
  --pretrained pretrained.pth.tar \
  --lr 0.01 \
  --gpu_id 1,2,3 \
  --train_batch_per_gpu 192 \
  --wd 4e-5 \
  --workers 32

Owner

IVG Lab, Department of Automation, Tsinghua University