Densely Connected Convolutional Networks, CVPR 2017 (Best Paper Award).

Overview

Densely Connected Convolutional Networks (DenseNets)

This repository contains the code for DenseNet introduced in the following paper

Densely Connected Convolutional Networks (CVPR 2017, Best Paper Award)

Gao Huang*, Zhuang Liu*, Laurens van der Maaten and Kilian Weinberger (* Authors contributed equally).

and its journal version

Convolutional Networks with Dense Connectivity (TPAMI 2019)

Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten and Kilian Weinberger.

Now with a memory-efficient implementation! Please check the technical report and code for more information.

The code is built on fb.resnet.torch.

Citation

If you find DenseNet useful in your research, please consider citing:

@article{huang2019convolutional,
  title={Convolutional Networks with Dense Connectivity},
  author={Huang, Gao and Liu, Zhuang and Pleiss, Geoff and Van Der Maaten, Laurens and Weinberger, Kilian},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019}
}

@inproceedings{huang2017densely,
  title={Densely Connected Convolutional Networks},
  author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q.},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2017}
}

Other Implementations

Our [Caffe], Our memory-efficient [Caffe], Our memory-efficient [PyTorch], [PyTorch] by Andreas Veit, [PyTorch] by Brandon Amos, [PyTorch] by Federico Baldassarre, [MXNet] by Nicatio, [MXNet] by Xiong Lin, [MXNet] by miraclewkf, [Tensorflow] by Yixuan Li, [Tensorflow] by Laurent Mazare, [Tensorflow] by Illarion Khlestov, [Lasagne] by Jan Schlüter, [Keras] by tdeboissiere,
[Keras] by Roberto de Moura Estevão Filho, [Keras] by Somshubra Majumdar, [Chainer] by Toshinori Hanya, [Chainer] by Yasunori Kudo, [Torch 3D-DenseNet] by Barry Kui, [Keras] by Christopher Masch, [Tensorflow2] by Gaston Rios and Ulises Jeremias Cornejo Fandos.

Note that we only listed some early implementations here. If you would like to add yours, please submit a pull request.

Some Follow-up Projects

  1. Multi-Scale Dense Convolutional Networks for Efficient Prediction
  2. DSOD: Learning Deeply Supervised Object Detectors from Scratch
  3. CondenseNet: An Efficient DenseNet using Learned Group Convolutions
  4. Fully Convolutional DenseNets for Semantic Segmentation
  5. Pelee: A Real-Time Object Detection System on Mobile Devices

Contents

  1. Introduction
  2. Usage
  3. Results on CIFAR
  4. Results on ImageNet and Pretrained Models
  5. Updates

Introduction

DenseNet is a network architecture in which each layer is directly connected to every other layer in a feed-forward fashion (within each dense block). For each layer, the feature maps of all preceding layers are treated as separate inputs, while its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracy on CIFAR-10/100 (with or without data augmentation) and SVHN. On the large-scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves accuracy similar to ResNet, while using less than half the parameters and roughly half the FLOPs.
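To make the connectivity pattern concrete, below is a minimal PyTorch-style sketch of a single dense block. This is only an illustration, not the repository's Torch/Lua implementation; the channel counts and growth rate are placeholders chosen to match Figure 1.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One BN-ReLU-Conv layer producing `growth_rate` new feature maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.bn(x)))

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate everything produced so far and feed it to the next layer.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# A 5-layer dense block with growth rate 4, as in Figure 1.
block = DenseBlock(num_layers=5, in_channels=16, growth_rate=4)
out = block(torch.randn(1, 16, 32, 32))
print(out.shape)  # torch.Size([1, 36, 32, 32]): 16 input channels + 5 * 4 new ones
```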

Figure 1: A dense block with 5 layers and growth rate 4.

Figure 2: A deep DenseNet with three dense blocks.

Usage

  1. Install Torch and required dependencies like cuDNN. See the instructions here for a step-by-step guide.
  2. Clone this repo: git clone https://github.com/liuzhuang13/DenseNet.git

As an example, the following command trains a DenseNet-BC with depth L=100 and growth rate k=12 on CIFAR-10:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 100 -growthRate 12

As another example, the following command trains a DenseNet-BC with depth L=121 and growth rate k=32 on ImageNet:

th main.lua -netType densenet -dataset imagenet -data [dataFolder] -batchSize 256 -nEpochs 90 -depth 121 -growthRate 32 -nGPU 4 -nThreads 16 -optMemory 3

Please refer to fb.resnet.torch for data preparation.

DenseNet and DenseNet-BC

By default, the code runs with the DenseNet-BC architecture, which has 1x1 convolutional bottleneck layers and halves the number of channels at each transition layer (compression factor 0.5). To run the original DenseNet instead, simply use the options -bottleneck false and -reduction 1.
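For example, a command along the lines of the CIFAR-10 example above (the only change is the two added options) trains the original DenseNet (L=100, k=12) without bottleneck layers or compression:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 100 -growthRate 12 -bottleneck false -reduction 1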

Memory-efficient implementation (newly added feature on June 6, 2017)

The option -optMemory is very useful for reducing the GPU memory footprint when training a DenseNet. By default, it is set to 2, which activates the shareGradInput function (with small modifications from here). There are two extremely memory-efficient modes (-optMemory 3 or -optMemory 4) which use a customized densely connected layer. With -optMemory 4, the largest 190-layer DenseNet-BC on CIFAR can be trained on a single NVIDIA TITAN X GPU (using 8.3G of its 12G memory), instead of fully occupying four GPUs with the standard (recursive concatenation) implementation.
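For instance, assuming the same batch size and training schedule as the CIFAR-10 example above, a single-GPU run of the 190-layer DenseNet-BC (k=40) in the most memory-efficient mode would look like:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 190 -growthRate 40 -optMemory 4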

More details about the memory efficient implementation are discussed here.

Results on CIFAR

The table below shows the results of DenseNets on the CIFAR datasets. The "+" mark at the end denotes standard data augmentation (random crop after zero-padding, and horizontal flip). For a DenseNet model, L denotes its depth and k denotes its growth rate. On CIFAR-10 and CIFAR-100 without data augmentation, a Dropout layer with drop rate 0.2 is introduced after each convolutional layer except the very first one.

| Model | Parameters | CIFAR-10 | CIFAR-10+ | CIFAR-100 | CIFAR-100+ |
|---|---|---|---|---|---|
| DenseNet (L=40, k=12) | 1.0M | 7.00 | 5.24 | 27.55 | 24.42 |
| DenseNet (L=100, k=12) | 7.0M | 5.77 | 4.10 | 23.79 | 20.20 |
| DenseNet (L=100, k=24) | 27.2M | 5.83 | 3.74 | 23.42 | 19.25 |
| DenseNet-BC (L=100, k=12) | 0.8M | 5.92 | 4.51 | 24.15 | 22.27 |
| DenseNet-BC (L=250, k=24) | 15.3M | 5.19 | 3.62 | 19.64 | 17.60 |
| DenseNet-BC (L=190, k=40) | 25.6M | - | 3.46 | - | 17.18 |

Results on ImageNet and Pretrained Models

Torch

Models in the original paper

The Torch models are trained under the same setting as in fb.resnet.torch. The error rates shown are 224x224 1-crop test errors.

| Network | Top-1 error | Torch Model |
|---|---|---|
| DenseNet-121 (k=32) | 25.0 | Download (64.5MB) |
| DenseNet-169 (k=32) | 23.6 | Download (114.4MB) |
| DenseNet-201 (k=32) | 22.5 | Download (161.8MB) |
| DenseNet-161 (k=48) | 22.2 | Download (230.8MB) |

Models in the tech report

More accurate models trained with the memory-efficient implementation, as described in the technical report.

| Network | Top-1 error | Torch Model |
|---|---|---|
| DenseNet-264 (k=32) | 22.1 | Download (256MB) |
| DenseNet-232 (k=48) | 21.2 | Download (426MB) |
| DenseNet-cosine-264 (k=32) | 21.6 | Download (256MB) |
| DenseNet-cosine-264 (k=48) | 20.4 | Download (557MB) |

Caffe

https://github.com/shicai/DenseNet-Caffe.

PyTorch

See the PyTorch documentation on models. We would like to thank @gpleiss for the nice work on the PyTorch implementation.
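As a quick sketch of loading the torchvision DenseNet models in PyTorch (the exact argument for pretrained weights has changed across torchvision versions, so treat the call below as illustrative):

```python
import torch
from torchvision import models

# Load a DenseNet-121 with ImageNet-pretrained weights.
# Older torchvision versions use models.densenet121(pretrained=True) instead.
model = models.densenet121(weights="IMAGENET1K_V1")
model.eval()

# Run a dummy 224x224 image through the network.
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```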

Keras, Tensorflow and Theano

https://github.com/flyyufelix/DenseNet-Keras.

MXNet

https://github.com/miraclewkf/DenseNet.

Wide-DenseNet for better Time/Accuracy and Memory/Accuracy Tradeoff

If you use DenseNet as a model in your learning task, we recommend using a wide and shallow DenseNet to reduce memory and time consumption, following the strategy of Wide Residual Networks. To obtain a wide DenseNet, set the depth to a smaller value (e.g., L=40) and the growthRate to a larger value (e.g., k=48).
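For example, the Wide-DenseNet-BC (L=40, k=48) reported in the table below can be trained with a command analogous to the CIFAR-10 example above:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 40 -growthRate 48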

We tested a set of Wide-DenseNet-BCs and compared their memory and time consumption with the DenseNet-BC (L=100, k=12) shown above. The statistics were obtained on a single TITAN X card, with batch size 64, and without any memory optimization.

| Model | Parameters | CIFAR-10+ | CIFAR-100+ | Time per Iteration | Memory |
|---|---|---|---|---|---|
| DenseNet-BC (L=100, k=12) | 0.8M | 4.51 | 22.27 | 0.156s | 5452MB |
| Wide-DenseNet-BC (L=40, k=36) | 1.5M | 4.58 | 22.30 | 0.130s | 4008MB |
| Wide-DenseNet-BC (L=40, k=48) | 2.7M | 3.99 | 20.29 | 0.165s | 5245MB |
| Wide-DenseNet-BC (L=40, k=60) | 4.3M | 4.01 | 19.99 | 0.223s | 6508MB |

Observations:

  1. Wide-DenseNet-BC (L=40, k=36) uses less memory/time while achieving about the same accuracy as DenseNet-BC (L=100, k=12).
  2. Wide-DenseNet-BC (L=40, k=48) uses about the same memory/time as DenseNet-BC (L=100, k=12), while being much more accurate.

Thus, for practical use, we suggest picking one model from those Wide-DenseNet-BCs.

Updates

12/10/2019:

  1. Journal version (accepted by IEEE TPAMI) released.

08/23/2017:

  1. Add supporting code, so one can simply git clone and run.

06/06/2017:

  1. Support ultra-memory-efficient training of DenseNet with a customized densely connected layer.

  2. Support memory-efficient training of DenseNet with the standard densely connected layer (recursive concatenation) by fixing the shareGradInput function.

05/17/2017:

  1. Add Wide-DenseNet.
  2. Add Keras, TensorFlow and Theano links for pretrained models.

04/20/2017:

  1. Add usage of models in PyTorch.

03/29/2017:

  1. Add the code for ImageNet training.

12/03/2016:

  1. Add ImageNet results and pretrained models.
  2. Add DenseNet-BC structures.

Contact

liuzhuangthu at gmail.com
gaohuang at tsinghua.edu.cn
Any discussions, suggestions and questions are welcome!
