CondenseNet: Lightweight CNN for mobile devices

Overview

CondenseNets

This repository contains the code (in PyTorch) for the paper "CondenseNet: An Efficient DenseNet using Learned Group Convolutions" by Gao Huang*, Shichen Liu*, Laurens van der Maaten, and Kilian Weinberger (* authors contributed equally).

Citation

If you find our project useful in your research, please consider citing:

@inproceedings{huang2018condensenet,
  title={Condensenet: An efficient densenet using learned group convolutions},
  author={Huang, Gao and Liu, Shichen and Van der Maaten, Laurens and Weinberger, Kilian Q},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2752--2761},
  year={2018}
}

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Contact

Introduction

CondenseNet is a novel, computationally efficient convolutional network architecture. It combines dense connectivity between layers with a mechanism to remove unused connections. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions remove connections between layers for which this feature re-use is superfluous. At test time, our model can be implemented using standard group convolutions, allowing for efficient computation in practice. Our experiments demonstrate that CondenseNets are much more efficient than other compact convolutional networks such as MobileNets and ShuffleNets.

Figure 1: Learned Group Convolution with G=C=3.

Figure 2: CondenseNets with Fully Dense Connectivity and Increasing Growth Rate.
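
At test time, the learned group convolution in Figure 1 reduces to a channel gather followed by a standard group convolution: training decides which input channels each group keeps, and the converted layer only has to select them. Below is a minimal PyTorch sketch of this idea; the module name and the precomputed index buffer are illustrative assumptions, not the repository's exact code.

import torch
import torch.nn as nn

class ConvertedLearnedGroupConv(nn.Module):
    # Illustrative sketch of a converted learned group convolution.
    # `index` is a hypothetical LongTensor listing the input channels that
    # survived condensation; in_channels must equal len(index).
    def __init__(self, in_channels, out_channels, groups, index):
        super().__init__()
        self.register_buffer("index", index)
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                              groups=groups, bias=False)

    def forward(self, x):
        # Gather the kept channels so each group's inputs are contiguous,
        # then apply a standard grouped 1x1 convolution.
        x = torch.index_select(x, dim=1, index=self.index)
        return self.conv(x)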

Usage

Dependencies

The code in this repository is implemented in PyTorch; install PyTorch before running the commands below.

Train

As an example, use the following command to train a CondenseNet on ImageNet:

python main.py --model condensenet -b 256 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0,1,2,3,4,5,6,7 --resume

As another example, use the following command to train a CondenseNet on CIFAR-10 (this configuration corresponds to CondenseNet-86 in the results below):

python main.py --model condensenet -b 64 -j 12 cifar10 \
--stages 14-14-14 --growth 8-16-32 --gpu 0 --resume
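
The --stages and --growth arguments take dash-separated per-block values: the number of layers in each dense block and the growth rate used within that block, respectively. A minimal sketch of how such strings map to a configuration, assuming conventional parsing rather than quoting main.py:

# Hypothetical parsing sketch; values taken from the CIFAR-10 command above.
stages = list(map(int, "14-14-14".split("-")))  # layers per dense block
growth = list(map(int, "8-16-32".split("-")))   # growth rate per block
assert len(stages) == len(growth)               # one growth rate per block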

Evaluation

We take the ImageNet model trained above as an example.

To evaluate the trained model, use the --evaluate option to evaluate from the default checkpoint directory:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate

or use --evaluate-from to evaluate a checkpoint at an arbitrary path:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate-from /PATH/TO/BEST/MODEL

Note that the models above are still the full (unconverted) models. To convert a model to the group-convolution version described in the paper, use the --convert-from option:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--convert-from /PATH/TO/BEST/MODEL

Finally, to directly load a converted model (that is, a CondenseNet), use a converted model file in combination with the --evaluate-from option:

python main.py --model condensenet_converted -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate-from /PATH/TO/CONVERTED/MODEL

Other Options

We also include a DenseNet implementation in this repository.
For more examples of usage, please refer to script.sh.
For detailed options, run python main.py --help.

Results

Results on ImageNet

| Model | FLOPs | Params | Top-1 Err. | Top-5 Err. | PyTorch Model |
|-------|-------|--------|------------|------------|---------------|
| CondenseNet-74 (C=G=4) | 529M | 4.8M | 26.2 | 8.3 | Download (18.69M) |
| CondenseNet-74 (C=G=8) | 274M | 2.9M | 29.0 | 10.0 | Download (11.68M) |
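
The Top-1/Top-5 columns report the standard top-k error on the ImageNet validation set. For reference, a minimal sketch of the conventional computation (not copied from main.py):

import torch

def topk_error(output, target, k):
    # output: (N, num_classes) logits; target: (N,) ground-truth labels.
    _, pred = output.topk(k, dim=1)                    # top-k predictions
    correct = pred.eq(target.view(-1, 1)).any(dim=1)   # hit within top k?
    return 100.0 * (1.0 - correct.float().mean().item())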

Results on CIFAR

| Model | FLOPs | Params | CIFAR-10 Err. | CIFAR-100 Err. |
|-------|-------|--------|---------------|----------------|
| CondenseNet-50 | 28.6M | 0.22M | 6.22 | - |
| CondenseNet-74 | 51.9M | 0.41M | 5.28 | - |
| CondenseNet-86 | 65.8M | 0.52M | 5.06 | 23.64 |
| CondenseNet-98 | 81.3M | 0.65M | 4.83 | - |
| CondenseNet-110 | 98.2M | 0.79M | 4.63 | - |
| CondenseNet-122 | 116.7M | 0.95M | 4.48 | - |
| CondenseNet-182* | 513M | 4.2M | 3.76 | 18.47 |

(* trained for 600 epochs)

Inference time on ARM platform

| Model | FLOPs | Top-1 Err. | Time (s) |
|-------|-------|------------|----------|
| VGG-16 | 15,300M | 28.5 | 354 |
| ResNet-18 | 1,818M | 30.2 | 8.14 |
| 1.0 MobileNet-224 | 569M | 29.4 | 1.96 |
| CondenseNet-74 (C=G=4) | 529M | 26.2 | 1.89 |
| CondenseNet-74 (C=G=8) | 274M | 29.0 | 0.99 |
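
The timings above were measured by the authors on an ARM platform. As an illustration of the general methodology (not the benchmark script used for the table), single-image CPU inference time can be measured along these lines:

import time
import torch

def time_inference(model, runs=20, size=224):
    # Average single-image forward-pass time in seconds (hypothetical
    # helper; one warm-up run is excluded from the timing).
    model.eval()
    x = torch.randn(1, 3, size, size)
    with torch.no_grad():
        model(x)  # warm-up
        start = time.time()
        for _ in range(runs):
            model(x)
    return (time.time() - start) / runs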

Contact

[email protected]
[email protected]

We are working on implementations for other frameworks.
Any discussion or concerns are welcome!
