Network Compression via Central Filter

Overview

PyTorch implementation of network pruning via Central Filter. The repository provides code to prune VGG-16, ResNet-56, DenseNet-40, and GoogLeNet on CIFAR-10, and ResNet-50 on ImageNet.

Environments

The code has been tested in the following environments:

  • Python 3.8
  • PyTorch 1.8.1
  • CUDA 10.2
  • torchsummary, torchvision, thop

Both Windows and Linux are supported.
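
The dependencies can typically be installed with pip. A minimal sketch (torch is pinned to the tested version above; the remaining package names are assumed to be the standard PyPI ones):

pip install torch==1.8.1 torchvision torchsummary thop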

Pre-trained Models

CIFAR-10:

VGG-16 | ResNet-56 | DenseNet-40 | GoogLeNet

ImageNet:

ResNet50

Running Code

The experiment is divided into two steps: similarity-matrix generation and model training. We provide the pre-computed similarity matrices, so the first step can be skipped.

Similarity Matrix Generation

@echo off
@rem for Windows
start cmd /c ^
"cd /D [code dir]  ^
& [python.exe dir]\python.exe rank.py ^
--arch [model arch name] ^
--resume [pre-trained model dir] ^
--num_workers [worker numbers] ^
--image_num [batch numbers] ^
--batch_size [batch size] ^
--dataset [CIFAR10 or ImageNet] ^
--data_dir [data dir] ^
--calc_dis_mtx True ^
& pause"
# for Linux
python rank.py \
--arch [model arch name] \
--resume [pre-trained model dir] \
--num_workers [worker numbers] \
--image_num [batch numbers] \
--batch_size [batch size] \
--dataset [CIFAR10 or ImageNet] \
--data_dir [data dir] \
--calc_dis_mtx True
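
For reference, a filled-in Linux invocation might look like the following; the paths, worker count, and batch settings here are illustrative, not taken from the repository:

python rank.py \
--arch vgg_16_bn \
--resume ./pretrained/vgg_16_bn.pt \
--num_workers 2 \
--image_num 5 \
--batch_size 128 \
--dataset CIFAR10 \
--data_dir ./data \
--calc_dis_mtx True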

Model Training

The experimental results and the corresponding configurations reported in the paper are as follows.

1. VGGNet

| Architecture | Compress Rate | Params | FLOPs | Accuracy |
| --- | --- | --- | --- | --- |
| VGG-16 (Baseline) | | 14.98M (0.0%) | 313.73M (0.0%) | 93.96% |
| VGG-16 | [0.3]+[0.2]*4+[0.3]*2+[0.4]+[0.85]*4 | 2.45M (83.6%) | 124.10M (60.4%) | 93.67% |
| VGG-16 | [0.3]*5+[0.5]*3+[0.8]*4 | 2.18M (85.4%) | 91.54M (70.8%) | 93.06% |
| VGG-16 | [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 | 1.51M (89.9%) | 65.92M (79.0%) | 92.49% |
python main_win.py \
--arch vgg_16_bn \
--resume [pre-trained model dir] \
--compress_rate [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 
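
The --compress_rate argument is written as a Python-style list expression: each bracketed term lists pruning ratios, and *n repeats them for n layers, so the string expands to one ratio per pruned layer. A minimal sketch of that expansion (the helper below is hypothetical; the repository's own parser may differ):

import re

# Hypothetical helper: expand a compress-rate string such as
# "[0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4" into one pruning ratio per layer.
def expand_compress_rate(spec):
    rates = []
    for term in spec.split('+'):
        # Each term is "[r1,r2,...]" optionally followed by "*n".
        m = re.fullmatch(r'\[([^\]]+)\](?:\*(\d+))?', term.strip())
        values = [float(v) for v in m.group(1).split(',')]
        rates.extend(values * (int(m.group(2)) if m.group(2) else 1))
    return rates

print(expand_compress_rate('[0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4'))
# -> [0.3, 0.3, 0.45, 0.45, 0.45, 0.6, 0.6, 0.6, 0.85, 0.85, 0.85, 0.85]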

2. ResNet-56

| Architecture | Compress Rate | Params | FLOPs | Accuracy |
| --- | --- | --- | --- | --- |
| ResNet-56 (Baseline) | | 0.85M (0.0%) | 125.49M (0.0%) | 93.26% |
| ResNet-56 | [0.]+[0.2,0.]*9+[0.3,0.]*9+[0.4,0.]*9 | 0.53M (37.6%) | 86.11M (31.4%) | 93.64% |
| ResNet-56 | [0.]+[0.3,0.]*9+[0.4,0.]*9+[0.5,0.]*9 | 0.45M (47.1%) | 75.7M (39.7%) | 93.59% |
| ResNet-56 | [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 | 0.19M (77.6%) | 40.0M (68.1%) | 92.19% |
python main_win.py \
--arch resnet_56 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 

3. DenseNet-40

| Architecture | Compress Rate | Params | FLOPs | Accuracy |
| --- | --- | --- | --- | --- |
| DenseNet-40 (Baseline) | | 1.04M (0.0%) | 282.00M (0.0%) | 94.81% |
| DenseNet-40 | [0.]+[0.3]*12+[0.1]+[0.3]*12+[0.1]+[0.3]*8+[0.]*4 | 0.67M (35.6%) | 165.38M (41.4%) | 94.33% |
| DenseNet-40 | [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 | 0.46M (55.8%) | 109.40M (61.3%) | 93.71% |
python main_win.py \
--arch densenet_40 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 

4. GoogLeNet

| Architecture | Compress Rate | Params | FLOPs | Accuracy |
| --- | --- | --- | --- | --- |
| GoogLeNet (Baseline) | | 6.15M (0.0%) | 1.52B (0.0%) | 95.05% |
| GoogLeNet | [0.2]+[0.7]*15+[0.8]*9+[0.,0.4,0.] | 2.73M (55.6%) | 0.56B (63.2%) | 94.70% |
| GoogLeNet | [0.2]+[0.9]*24+[0.,0.4,0.] | 2.17M (64.7%) | 0.37B (75.7%) | 94.13% |
python main_win.py \
--arch googlenet \
--resume [pre-trained model dir] \
--compress_rate [0.2]+[0.9]*24+[0.,0.4,0.] \
--num_workers [worker numbers] \
--epochs 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--data_dir [dataset dir] \
--dataset CIFAR10

After pruning completes, fine-tune the saved pruned model:

python main_win.py \
--arch googlenet \
--from_scratch True \
--resume finally_pruned_model/googlenet_1.pt \
--num_workers 2 \
--epochs 30 \
--lr 0.01 \
--lr_decay_step 5,15 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10

5. ResNet-50

| Architecture | Compress Rate | Params | FLOPs | Top-1 Accuracy | Top-5 Accuracy |
| --- | --- | --- | --- | --- | --- |
| ResNet-50 (Baseline) | | 25.55M (0.0%) | 4.11B (0.0%) | 76.15% | 92.87% |
| ResNet-50 | [0.]+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*2+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*3+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*5+[0.1,0.1,0.1]+[0.2,0.2,0.1]*2 | 16.08M (36.9%) | 2.13B (47.9%) | 75.08% | 92.30% |
| ResNet-50 | [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 | 13.73M (46.2%) | 1.50B (63.5%) | 73.43% | 91.57% |
| ResNet-50 | [0.]+[0.2,0.2,0.65]*1+[0.75,0.75,0.65]*2+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*3+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*5+[0.15,0.15,0.35]+[0.5,0.5,0.35]*2 | 8.10M (68.2%) | 0.98B (76.2%) | 70.26% | 89.82% |
python main_win.py \
--arch resnet_50 \
--resume [pre-trained model dir] \
--data_dir [dataset dir] \
--dataset ImageNet \
--compress_rate [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 \
--num_workers [worker numbers] \
--batch_size 64 \
--epochs 2 \
--lr_decay_step 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--input_size 224 \
--start_cov 0

After pruning completes, fine-tune the saved pruned model:

python main_win.py \
--arch resnet_50 \
--from_scratch True \
--resume finally_pruned_model/resnet_50_1.pt \
--num_workers 8 \
--epochs 40 \
--lr 0.001 \
--lr_decay_step 5,20 \
--save_id 2 \
--batch_size 64 \
--weight_decay 0.0005 \
--input_size 224 \
--data_dir [dataset dir] \
--dataset ImageNet 
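
The Params and FLOPs columns above can be reproduced with thop (listed under Environments). A minimal sketch, assuming an unpruned torchvision ResNet-50 and a 224x224 input; note that thop reports multiply-accumulate counts, so the FLOPs figure below follows the MACs convention:

import torch
from thop import profile
from torchvision.models import resnet50

# Profile a standard ResNet-50 on an ImageNet-sized input.
model = resnet50()
dummy = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(dummy,))
print('Params: %.2fM  FLOPs: %.2fB' % (params / 1e6, macs / 1e9))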