Dynamic Slimmable Network (CVPR 2021, Oral)

Overview

Dynamic Slimmable Network (DS-Net)

This repository contains the PyTorch code for our paper Dynamic Slimmable Network (CVPR 2021, Oral).

Figure: Architecture of DS-Net. The width of each supernet stage is adjusted adaptively by the slimming ratio ρ predicted by the gate.

Figure: Accuracy vs. complexity on ImageNet.
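As a rough mental model of the slimming mechanism in the architecture figure, here is a minimal sketch, under stated assumptions and not the repository's implementation: a convolution stores weights for its widest configuration and, per input, runs only the first ρ-fraction of output filters chosen by the gate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of a slimmable conv: keep weights for the widest
# width and slice out the first rho-fraction of filters at run time.
class SlimmableConv2d(nn.Conv2d):
    def forward(self, x, rho=1.0):
        out_c = max(1, int(self.out_channels * rho))  # active output channels
        in_c = x.size(1)                              # match incoming width
        weight = self.weight[:out_c, :in_c]
        bias = self.bias[:out_c] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding,
                        self.dilation, self.groups)

conv = SlimmableConv2d(32, 64, kernel_size=3, padding=1)
y = conv(torch.randn(2, 32, 56, 56), rho=0.5)  # only 32 of 64 filters run
```

The subtle part in practice is normalization: the BatchNorm recalibration that appears in the training logs below exists because each width needs BN statistics that match it.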

Usage

1. Requirements
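No environment file is pinned in this snapshot. Judging from the discussions below, users have run the code with Python 3.8, PyTorch 1.7.1, torchvision, and CUDA 10.2; treat these versions as a starting point rather than an official specification.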

2. Stage I: Supernet Training

For example, to train the dynamic slimmable MobileNet supernet with 8 GPUs (takes about 2 days):

python -m torch.distributed.launch --nproc_per_node=8 train.py /PATH/TO/ImageNet -c ./configs/mobilenetv1_bn_uniform.yml

3. Stage II: Gate Training

  • Will be available soon

Citation

If you use our code for your paper, please cite:

@inproceedings{li2021dynamic,
  author = {Changlin Li and
            Guangrun Wang and
            Bing Wang and
            Xiaodan Liang and
            Zhihui Li and
            Xiaojun Chang},
  title = {Dynamic Slimmable Network},
  booktitle = {CVPR},
  year = {2021}
}
Comments
  • The usage of gumbel softmax in DS-Net

    Thank you for your very nice work. I want to understand the effect of the Gumbel softmax, because I think the network can be trained without it. Is the Gumbel softmax just meant to increase the randomness of the channel choice?

    discussion
    opened by LinyeLi60 7
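    For readers unfamiliar with the trick under discussion: Gumbel softmax draws a stochastic yet differentiable (approximately one-hot) sample over the candidate widths, so a gate can explore choices during training instead of collapsing to a deterministic argmax. A minimal sketch of the general technique with hypothetical shapes, not the repository's exact code:

    ```python
    import torch
    import torch.nn.functional as F

    # Hypothetical gate output: one logit per candidate width.
    batch, num_choices = 4, 14
    logits = torch.randn(batch, num_choices)

    # Gumbel noise randomizes the choice; hard=True returns a one-hot sample
    # in the forward pass while gradients flow through the soft distribution.
    choice = F.gumbel_softmax(logits, tau=1.0, hard=True)
    width_index = choice.argmax(dim=1)  # sampled slimming-ratio index per image
    ```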
  • UserWarning: Argument interpolation should be of type InterpolationMode instead of int

    Why do I get the warning /home/chauncey/.local/lib/python3.8/site-packages/torchvision/transforms/functional.py:364: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum. when I run python3 -m torch.distributed.launch --nproc_per_node=1 train.py ./imagenet -c ./configs/mobilenetv1_bn_uniform.yml?

    opened by Chauncey-Wang 3
  • Question about calculating MAdds of the dynamic network in the paper

    Thank you for your great work, and I have a question about how the MAdds in your paper are calculated. The dynamic network has a different width and MAdds for each instance, yet you report a single MAdds figure per network. Are these the average MAdds over the whole dataset?

    discussion
    opened by sseung0703 3
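    For context, dynamic-inference papers commonly summarize a network by its mean MAdds over the evaluation set, since each input may take a different width; a trivial sketch of that bookkeeping with stand-in values:

    ```python
    # Hypothetical per-image MAdds collected during evaluation; the network
    # is then summarized by the mean over all evaluated images.
    madds_per_image = [133e6, 175e6, 153e6]  # stand-in values, one per image
    avg_madds = sum(madds_per_image) / len(madds_per_image)
    print(f"average MAdds: {avg_madds / 1e6:.0f}M")
    ```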
  • Why not set ensemble_ib to True?

    Hi,

    I found that ensemble_ib is set to False for both slim training and gate training in the configs, but according to the paper it should boost performance when set to True.

    Any idea?

    opened by twmht 2
  • MAdds of Pretrained Supernet

    Hi Changlin, your work is excellent. I have a question about the calculation of MAdds: in README.md the MAdds of Subnetwork 13 is 565M, but from my experiments it should be 821M, because Subnetwork 13 has more channels than the original MobileNetV1, and the original MobileNetV1 1.0's MAdds are 565M. Looking forward to your reply.

    opened by LinyeLi60 2
  • Error after changing num_choice in mobilenetv1_bn_uniform_reset_bn.yml

    I followed your suggestion to set num_choice in mobilenetv1_bn_uniform_reset_bn.yml to 14, but got an unexpected error when running python -m torch.distributed.launch --nproc_per_node=8 train.py /PATH/TO/ImageNet -c ./configs/mobilenetv1_bn_uniform_reset_bn.yml.

    08/25 10:15:57 AM Recalibrating BatchNorm statistics...
    08/25 10:16:10 AM Finish recalibrating BatchNorm statistics.
    08/25 10:16:19 AM Finish recalibrating BatchNorm statistics.
    08/25 10:16:21 AM Test: [ 0/0] Mode: 0 Time: 0.344 (0.344) Loss: 6.9204 (6.9204) Acc@1: 0.0000 (0.0000) Acc@5: 0.0000 (0.0000) Flops: 132890408 (132890408)
    08/25 10:16:22 AM Test: [ 0/0] Mode: 1 Time: 0.406 (0.406) Loss: 6.9189 (6.9189) Acc@1: 0.0000 (0.0000) Acc@5: 0.0000 (0.0000) Flops: 152917440 (152917440)
    08/25 10:16:22 AM Test: [ 0/0] Mode: 2 Time: 0.381 (0.381) Loss: 6.9187 (6.9187) Acc@1: 0.0000 (0.0000) Acc@5: 0.0000 (0.0000) Flops: 175152224 (175152224)
    08/25 10:16:23 AM Test: [ 0/0] Mode: 3 Time: 0.389 (0.389) Loss: 6.9134 (6.9134) Acc@1: 0.0000 (0.0000) Acc@5: 0.0000 (0.0000) Flops: 199594752 (199594752)
    Traceback (most recent call last):
      File "train.py", line 658, in <module>
        main()
      File "train.py", line 635, in main
        eval_metrics.append(validate_slim(model,
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/apis/train_slim.py", line 215, in validate_slim
        output = model(input)
      File "/home/chauncey/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/models/dyn_slim_net.py", line 191, in forward
        x = self.forward_features(x)
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/models/dyn_slim_net.py", line 178, in forward_features
        x = stage(x)
      File "/home/chauncey/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/models/dyn_slim_stages.py", line 48, in forward
        x = self.first_block(x)
      File "/home/chauncey/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/models/dyn_slim_blocks.py", line 240, in forward
        x = self.conv_pw(x)
      File "/home/chauncey/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
        result = self.forward(*input, **kwargs)
      File "/home/chauncey/PycharmProjects/DS-Net-main/dyn_slim/models/dyn_slim_ops.py", line 94, in forward
        self.running_outc = self.out_channels_list[self.channel_choice]
    IndexError: list index out of range

    It looks like some adjustments are needed in other .py files as well.

    opened by chaunceywx 2
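    The traceback above pins the failure to indexing: channel_choice indexes a per-layer out_channels_list, so every slimmable layer must expose at least num_choice channel options. A hedged illustration of the invariant, with hypothetical numbers rather than the repository's code:

    ```python
    # Each slimmable layer holds one channel count per slimming ratio.
    # If the config's num_choice exceeds the length of this list anywhere,
    # out_channels_list[channel_choice] raises IndexError as seen above.
    num_choice = 14
    out_channels_list = [int(64 * (0.35 + 0.05 * i)) for i in range(num_choice)]
    assert len(out_channels_list) >= num_choice
    ```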
  • Why is num_choice different in the different yml files?

    Why did you set num_choice in mobilenetv1_bn_uniform_reset_bn.yml to 4, but set this parameter to 14 in the other two yml files?

    Bro, if you happen to be Chinese too, let's just talk in Chinese; my English is rather poor...

    opened by chaunceywx 2
  • Runtime problem

    Could anyone tell me why the following problem occurs? Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

    /root/anaconda3/envs/0108/lib/python3.6/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /root/anaconda3/envs/0108/lib/python3.6/site-packages/torchvision/image.so: undefined symbol: _ZNK3c106IValue23reportToTensorTypeErrorEv warn(f"Failed to load image Python extension: {e}")
    /root/anaconda3/envs/0108/lib/python3.6/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /root/anaconda3/envs/0108/lib/python3.6/site-packages/torchvision/image.so: undefined symbol: _ZNK3c106IValue23reportToTensorTypeErrorEv warn(f"Failed to load image Python extension: {e}")
    01/21 05:42:18 AM Added key: store_based_barrier_key:1 to store for rank: 1
    01/21 05:42:18 AM Added key: store_based_barrier_key:1 to store for rank: 0
    01/21 05:42:18 AM Training in distributed mode with multiple processes, 1 GPU per process. Process 0, total 2.
    01/21 05:42:18 AM Training in distributed mode with multiple processes, 1 GPU per process. Process 1, total 2.
    01/21 05:42:20 AM Model slimmable_mbnet_v1_bn_uniform created, param count: 7676204
    01/21 05:42:20 AM Data processing configuration for current model + dataset:
    01/21 05:42:20 AM input_size: (3, 224, 224)
    01/21 05:42:20 AM interpolation: bicubic
    01/21 05:42:20 AM mean: (0.485, 0.456, 0.406)
    01/21 05:42:20 AM std: (0.229, 0.224, 0.225)
    01/21 05:42:20 AM crop_pct: 0.875
    01/21 05:42:20 AM NVIDIA APEX not installed. AMP off.
    01/21 05:42:21 AM Using torch DistributedDataParallel. Install NVIDIA Apex for Apex DDP.
    01/21 05:42:21 AM Scheduled epochs: 40
    01/21 05:42:21 AM Training folder does not exist at: images/train
    01/21 05:42:21 AM Training folder does not exist at: images/train
    Killing subprocess 239
    Killing subprocess 240
    Traceback (most recent call last):
      File "/root/anaconda3/envs/0108/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/root/anaconda3/envs/0108/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/root/anaconda3/envs/0108/lib/python3.6/site-packages/torch/distributed/launch.py", line 340, in <module>
        main()
      File "/root/anaconda3/envs/0108/lib/python3.6/site-packages/torch/distributed/launch.py", line 326, in main
        sigkill_handler(signal.SIGTERM, None)  # not coming back
      File "/root/anaconda3/envs/0108/lib/python3.6/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
        raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
    subprocess.CalledProcessError: Command '['/root/anaconda3/envs/0108/bin/python', '-u', 'train.py', '--local_rank=1', 'images', '-c', './configs/mobilenetv1_bn_uniform_reset_bn.yml']' returned non-zero exit status 1.

    opened by 6imust 1
  • Project environment

    Hi, could you provide the environment for the project? I tried to train the network with python=3.8, pytorch=1.7.1, cuda=10.2. Shortly after starting training, a RuntimeError: CUDA error: device-side assert triggered occurred, and some other environments also led to this error. I'm not sure whether the problem is caused by a difference in environment.

    opened by singularity97 1
  • Softmax twice for SGS loss?

    Dear authors, thanks for this nice work.

    I wonder why the calculation of the SGS loss uses the softmaxed outputs rather than the logits, considering that PyTorch's CrossEntropyLoss already contains a softmax inside.

    https://github.com/changlin31/DS-Net/blob/15cd3036970ec27d2c306014344fd50d9e9b888b/dyn_slim/apis/train_slim_gate.py#L98
    https://github.com/changlin31/DS-Net/blob/15cd3036970ec27d2c306014344fd50d9e9b888b/dyn_slim/models/dyn_slim_blocks.py#L324-L355

    opened by Yu-Zhewen 0
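    To make the concern concrete: torch.nn.functional.cross_entropy (and nn.CrossEntropyLoss) applies log_softmax internally, so feeding it probabilities instead of logits effectively applies softmax twice, which flattens the distribution and weakens the gradient signal. A small self-contained illustration with hypothetical tensors, not the repository's code:

    ```python
    import torch
    import torch.nn.functional as F

    logits = torch.randn(8, 14)               # hypothetical classifier/gate output
    target = torch.randint(0, 14, (8,))

    loss_from_logits = F.cross_entropy(logits, target)                # intended usage
    loss_from_probs = F.cross_entropy(logits.softmax(dim=1), target)  # softmax twice
    print(loss_from_logits.item(), loss_from_probs.item())            # values differ
    ```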
  • Can we further improve autoslim without the gate?

    It is not easy to deploy the gate operator on some other backends, like TensorRT.

    So my question is: can we further improve autoslim without the dynamic gate at inference time? Is there any ongoing work doing this?

    opened by twmht 3
  • DS-Net for object detection

    Hello. Thanks for your work. I noticed that you also conducted some experiments on object detection. I wonder whether or when you will release the code.

    opened by NoLookDefense 8
  • Dynamic path for DS-MobileNet

    Hi. Thanks for your work. I am reading your paper and trying to reimplement it, and I am confused about some details. You mention in the paper that the slimming ratio ρ ∈ [0.35 : 0.05 : 1.25], which gives 18 paths. However, in your code there are only 14 paths, ρ ∈ [0.35 : 0.05 : 1], as set in https://github.com/changlin31/DS-Net/blob/15cd3036970ec27d2c306014344fd50d9e9b888b/dyn_slim/models/dyn_slim_net.py#L36. Also, when conducting gate training, the gate function has only a 4-dimensional output, meaning there are only 4 paths and the slimming ratio is restricted to ρ ∈ [0.35 : 0.05 : 0.5]: https://github.com/changlin31/DS-Net/blob/15cd3036970ec27d2c306014344fd50d9e9b888b/dyn_slim/models/dyn_slim_blocks.py#L204 Why is the dynamic path for the larger networks not used?

    opened by NoLookDefense 1
Releases (v0.0.1)
  • v0.0.1 (Nov 30, 2021)

Pretrained weights of the DS-MBNet supernet. Detailed accuracy of each sub-network:

    | Subnetwork | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
    | ---------- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
    | MAdds | 133M | 153M | 175M | 200M | 226M | 255M | 286M | 319M | 355M | 393M | 433M | 475M | 519M | 565M |
    | Top-1 (%) | 70.1 | 70.4 | 70.8 | 71.2 | 71.6 | 72.0 | 72.4 | 72.7 | 73.0 | 73.3 | 73.6 | 73.9 | 74.1 | 74.6 |
    | Top-5 (%) | 89.4 | 89.6 | 89.9 | 90.2 | 90.3 | 90.6 | 90.9 | 91.0 | 91.2 | 91.4 | 91.5 | 91.7 | 91.8 | 92.0 |

    Source code(tar.gz)
    Source code(zip)
    DS_MBNet-70_1.pth.tar(60.93 MB)
    log-DS_MBNet-70_1.txt(6.12 KB)
Owner
Changlin Li