Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755)

Overview

News

SRU++, a new SRU variant, is released. [tech report] [blog]

The experimental code and SRU++ implementation are available on the dev branch which will be merged into master later.

About

SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, without loss of accuracy, as tested on many tasks.


Figure: Average processing time of LSTM, conv2d and SRU, tested on a GTX 1070

For example, the figure above presents the processing time of a single mini-batch of 32 samples. SRU achieves 10 to 16 times speed-up compared to LSTM, and operates as fast as (or faster than) word-level convolution using conv2d.

Reference:

Simple Recurrent Units for Highly Parallelizable Recurrence [paper]

@inproceedings{lei2018sru,
  title={Simple Recurrent Units for Highly Parallelizable Recurrence},
  author={Tao Lei and Yu Zhang and Sida I. Wang and Hui Dai and Yoav Artzi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2018}
}

When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute [paper]

@article{lei2021srupp,
  title={When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute},
  author={Tao Lei},
  journal={arXiv preprint arXiv:2102.12459},
  year={2021}
}

Requirements

Install requirements via pip install -r requirements.txt.


Installation

From source:

SRU can be installed as a regular package via python setup.py install or pip install . (run from the repository root).

From PyPI:

pip install sru

Directly use the source without installation:

Make sure this repo and the CUDA library can be found by the system, e.g.

export PYTHONPATH=path_to_repo/sru
export LD_LIBRARY_PATH=/usr/local/cuda/lib64

Examples

The usage of SRU is similar to nn.LSTM. SRU likely requires more stacked layers than LSTM. We recommend starting with 2 layers and using more if necessary (see our report for more experimental details).

import torch
from sru import SRU, SRUCell

# input has length 20, batch size 32 and dimension 128
x = torch.FloatTensor(20, 32, 128).cuda()

input_size, hidden_size = 128, 128

rnn = SRU(input_size, hidden_size,
    num_layers = 2,          # number of stacking RNN layers
    dropout = 0.0,           # dropout applied between RNN layers
    bidirectional = False,   # bidirectional RNN
    layer_norm = False,      # apply layer normalization on the output of each layer
    highway_bias = -2,        # initial bias of highway gate (<= 0)
)
rnn.cuda()

output_states, c_states = rnn(x)      # forward pass

# output_states is (length, batch size, number of directions * hidden size)
# c_states is (layers, batch size, number of directions * hidden size)
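
The forward call also accepts an optional initial state, and the module can be made bidirectional. Below is a minimal sketch; it assumes c0 follows the same (layers, batch size, number of directions * hidden size) convention as the returned c_states.

import torch
from sru import SRU

# bidirectional SRU with an explicit initial state (sketch)
x = torch.randn(20, 32, 128).cuda()

birnn = SRU(128, 128,
    num_layers = 2,
    bidirectional = True,
)
birnn.cuda()

# assumed shape: (layers, batch size, number of directions * hidden size)
c0 = torch.zeros(2, 32, 2 * 128).cuda()

output_states, c_states = birnn(x, c0)
# output_states is (20, 32, 256); c_states is (2, 32, 256)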

Contributing

Please read and follow the guidelines.

Other Implementations

@musyoku had a very nice SRU implementation in Chainer.

@adrianbg implemented the first CPU version.


Comments
  • Enable both Pytorch native AMP and Nvidia APEX AMP for SRU

    Hi!

    I was happily using SRUs with PyTorch native AMP; however, I started experimenting with training using Microsoft DeepSpeed and bumped into an issue.

    Basically, the issue is that FP16 training using DeepSpeed doesn't work for either GRUs or SRUs. However, when using Nvidia APEX AMP, DeepSpeed training with GRUs does work.

    So, based on the tips in one of your issues, I started looking into how I could enable both PyTorch native AMP and Nvidia APEX AMP for SRUs, so that I could train SRU-based models using DeepSpeed.

    That is why I created this pull request. Basically, I found that by simplifying the code, I can make SRUs work with both methods of AMP.

    Now amp_recurrence_fp16 can be used for both types of AMP. When amp_recurrence_fp16=True, the tensors are cast to float16; otherwise nothing special happens. I also removed the torch.cuda.amp.autocast(enabled=False) region; I might be wrong, but it seems that we don't need it.
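
    For reference, this is roughly how I exercise the flag in my own training code. It is only a sketch; it assumes amp_recurrence_fp16 stays a constructor argument of SRU, as in this PR.

    import torch
    from sru import SRU

    # sketch: amp_recurrence_fp16=True asks the recurrence inputs to be cast
    # to float16, which is what both AMP backends expect
    rnn = SRU(128, 128, num_layers=2, amp_recurrence_fp16=True).cuda()
    x = torch.randn(20, 32, 128).cuda()

    # PyTorch native AMP: the recurrence now runs in fp16 inside the autocast region
    with torch.cuda.amp.autocast():
        output, c = rnn(x)
        loss = output.float().mean()
    loss.backward()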

    I did some tests with my own code and it works in the different scenarios of interest:

    • Using PyTorch native AMP, not using DeepSpeed
    • Not using PyTorch native AMP, not using DeepSpeed
    • Using Nvidia APEX AMP, using DeepSpeed
    • Not using Nvidia APEX AMP, using DeepSpeed

    It would be beneficial if we could test this with an official SRU repo test, maybe by repurposing language_model/train_lm.py?

    opened by visionscaper 13
  • float16 handling

    When I convert my model, which uses this SRU unit, into a float16-enabled one, it fails. Is SRU not implemented for use in a float16 environment, or would it be hard to fix?
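
    For concreteness, this is roughly the conversion I mean (a sketch of my setup, not the exact code):

    import torch
    from sru import SRU

    model = SRU(128, 128, num_layers=2).cuda().half()   # weights converted to float16
    x = torch.randn(20, 32, 128).cuda().half()          # float16 inputs

    output, c = model(x)   # this call is where it fails for me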

    bug 
    opened by ywatanabe1989 11
  • support GPU inference in torchscript

    This is on the 3.0.0-dev branch for now

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-python modules; this is needed for torchscript compilation
    • Refactored CUDA APIs as functions that return output as tensors, instead of procedures that modify some passed-in tensors.
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs

    The refactored code has passed the forward() & backward() tests. I also checked that the outputs are the same for the non-torchscript and torchscript versions of the same model.
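
    As a usage sketch (not part of this PR's test suite), scripting an SRU module looks like standard TorchScript usage once the kernels are registered:

    import torch
    from sru import SRU

    rnn = SRU(128, 128, num_layers=2).cuda().eval()
    scripted = torch.jit.script(rnn)        # compile to TorchScript

    x = torch.randn(20, 32, 128).cuda()
    out_eager, _ = rnn(x)
    out_scripted, _ = scripted(x)
    print(torch.allclose(out_eager, out_scripted, atol=1e-5))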

    opened by taoleicn 8
  • Error unpacking PackedSequence on latest version

    Hello @taolei87, after updating to the latest version, my code broke. It works great on the previous 2.3.5 version and with nn.LSTM.

    File "C:\xxx\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
      result = self.forward(*input, **kwargs)
    File "C:\xxx\lib\site-packages\sru\modules.py", line 576, in forward
      mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    RuntimeError: shape '[393, 1]' is invalid for input of size 384
    

    I can see that in the previous version the unpacking code in forward was different:

            input_packed = isinstance(input, nn.utils.rnn.PackedSequence)
            if input_packed:
                input, lengths = nn.utils.rnn.pad_packed_sequence(input)
                max_length = lengths.max().item()
                mask_pad = torch.ByteTensor([[0] * l + [1] * (max_length - l) for l in lengths.tolist()])
                mask_pad = mask_pad.to(input.device).transpose(0, 1).contiguous()
    

    Now it is:

    
            orig_input = input
            if isinstance(orig_input, PackedSequence):
                input, batch_sizes, sorted_indices, unsorted_indices = input
                length = input.size(0)
                batch_size = input.size(1)
                mask_pad = torch.arange(batch_size,
                                        device=batch_sizes.device).expand(length, batch_size)
                mask_pad = (mask_pad >= batch_sizes.view(length, 1)).contiguous()
    
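    As a workaround on my side, I currently unpack the sequence myself before calling SRU, essentially reproducing the old masking logic. This is only a sketch; it assumes SRU.forward still accepts a (length, batch) mask via mask_pad.

    import torch
    import torch.nn as nn

    def unpack_for_sru(packed):
        # pad back to (length, batch, features) and rebuild the padding mask
        # the way the 2.3.x code did
        padded, lengths = nn.utils.rnn.pad_packed_sequence(packed)
        max_length = lengths.max().item()
        mask_pad = torch.tensor(
            [[0] * l + [1] * (max_length - l) for l in lengths.tolist()],
            dtype=torch.bool,
        )
        mask_pad = mask_pad.to(padded.device).transpose(0, 1).contiguous()
        return padded, mask_pad

    # padded, mask_pad = unpack_for_sru(packed_input)
    # output, c = sru_layer(padded, mask_pad=mask_pad)
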
    bug 
    opened by bratao 8
  • Increasing GPU Usage each epoch

    I'm trying to implement a model that includes an SRUCell. These are my specs:

    Tesla M60 GPU, torch.version: 0.4.1.post2, torch.cuda.version: 9.0.176

    Although it's training, every epoch the GPU memory usage increases until the GPU is full. I made a toy example where this error occurs:

    import torch
    from torch.autograd import Variable
    from sru import SRUCell
    
    
    batch_size = 5
    seq_len = 60
    epochs = 1000
    cuda = torch.cuda.is_available()
    
    model = SRUCell(100, 100)
    
    if cuda:
        model.cuda(0)
    
    optimizer = torch.optim.Adam([
            {'params':model.parameters()}], lr=1e-3)
    
    loss_function = torch.nn.MSELoss()
        
    seq = Variable(torch.rand(batch_size,seq_len,100))
    y = Variable(torch.rand(batch_size,100))
    
    
    if cuda:
        seq = seq.cuda(0)
        y = y.cuda(0)
    
    
    model.train()
    
    for e in range(epochs):
        model.zero_grad()
        
        h = Variable(torch.zeros(batch_size, 100))
        c = Variable(torch.zeros(batch_size, 100))
        
        if cuda:
            h = h.cuda(0)
            c = c.cuda(0)
        
        for i in range(seq_len):
            x = seq[:,i,:]
            h, c = model(x, c)
        loss = loss_function(h, y)
        loss.backward()
        optimizer.step()
        print('Epoch: {} - Loss: {}'.format(e, loss))
    
    opened by santiag0m 8
  • Can i put hidden states in sru cell forward like in vanilla pytorch?

    In vanilla PyTorch it works like this:

    rnn = nn.LSTMCell(10, 20)
    input = torch.randn(6, 3, 10)
    hx = torch.randn(3, 20)
    cx = torch.randn(3, 20)
    output = []
    for i in range(6):
        hx, cx = rnn(input[i], (hx, cx))
        output.append(hx)
    

    How can I do the same for an SRU cell?
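
    For reference, this is what I would try. It is only a sketch; it assumes SRUCell accepts a single-timestep 2-D input together with the previous c state (as in the toy example in the "Increasing GPU Usage each epoch" issue) and returns the new (h, c).

    import torch
    from sru import SRUCell

    cell = SRUCell(10, 20)
    inputs = torch.randn(6, 3, 10)      # (time, batch, features)
    c = torch.zeros(3, 20)              # initial internal state

    output = []
    for t in range(6):
        h, c = cell(inputs[t], c)       # one time step; only c is carried over
        output.append(h)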

    opened by hadaev8 7
  • AttributeError when preprocessing data for DrQA

    First I ran download.sh, and it successfully downloaded GloVe and the train/dev JSONs for SQuAD. However, python prepro.py gave me this:

    Traceback (most recent call last):
      File "prepro.py", line 243, in <module>
        vocab_tag = list(nlp.tagger.tag_names)
    AttributeError: 'Tagger' object has no attribute 'tag_names'
    

    My spaCy version is 2.0.3, and it seems like something broke in the update from the 1.x version listed in requirements; I didn't succeed in fixing it myself. Any suggestions?

    opened by mojesty 7
  • Calculating Backwards For SRU Results in CUDA error.

    I'm not sure how, but I'm seeing this error when I try to compute the backward pass. Have you come across this during your debugging?

    Traceback (most recent call last):
      File "gan_language.py", line 341, in <module>
        G.backward(one)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
      File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
        variables, grad_variables, retain_graph)
      File "/home/nick/wgan-gp/sru/cuda_functional.py", line 417, in backward
        stream=SRU_STREAM
      File "cupy/cuda/function.pyx", line 129, in cupy.cuda.function.Function.__call__ (cupy/cuda/function.cpp:4010)  File "cupy/cuda/function.pyx", line 111, in cupy.cuda.function._launch (cupy/cuda/function.cpp:3647)
      File "cupy/cuda/driver.pyx", line 127, in cupy.cuda.driver.launchKernel (cupy/cuda/driver.cpp:2541)
      File "cupy/cuda/driver.pyx", line 62, in cupy.cuda.driver.check_status (cupy/cuda/driver.cpp:1446)
    cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_HANDLE: invalid resource handle
    
    opened by NickShahML 7
  • Speed up data loading / batching for ONE BILLION WORD experiment

    The data loading was inefficient and was found to be the bottleneck of BILLION WORD training. This PR rewrites the sharding (which data goes to which GPU / training process) and improves training speed significantly.

    The figure compares a previous run and a new test run. We see a 40% reduction in training time.

    This means our reported training efficiency improves from 59 GPU days to 36 GPU days, which is 4x more efficient than the FairSeq Transformer results.

    opened by taoleicn 6
  • Different input dimension compared to output dimension

    Hi, I'm trying to implement a naive version of this paper in Keras, and was wondering how the case n_in != n_out is handled.

    I went through the code a few times, and couldn't understand the element-wise multiplication of (1 - r_t) with x_t when x_t has a different shape than r_t.
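
    If it helps to make the question concrete, the only way I can see to reconcile the shapes is to project x_t to the output size before the highway combination, as in the sketch below. The projection is my assumption, not something I have confirmed in the reference code.

    import torch
    import torch.nn as nn

    class HighwaySketch(nn.Module):
        # sketch: h_t = r_t * c_t + (1 - r_t) * proj(x_t),
        # where proj is only needed when n_in != n_out
        def __init__(self, n_in, n_out):
            super().__init__()
            self.proj = nn.Linear(n_in, n_out, bias=False) if n_in != n_out else nn.Identity()

        def forward(self, x_t, c_t, r_t):
            return r_t * c_t + (1.0 - r_t) * self.proj(x_t)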

    question 
    opened by titu1994 6
  • support GPU inference in torchscript model for v2.5 / v2.6

    This PR works for the master branch and the v2.5 and v2.6 releases.

    A non-trivial PR to support GPU inference in torchscript

    • Load CUDA kernels as non-python modules; this is needed for torchscript compilation
    • Refactored CUDA APIs as functions that return output as tensors, instead of procedures that modify some passed-in tensors.
    • Added a workaround in case TS tries to locate and compile CUDA methods on machines that don't have CUDA / GPUs
    • The refactored code has passed the forward() & backward() test.
    • I also checked the outputs are the same for the non-torchscript and torchscript versions of the same model.
    opened by taoleicn 5
  • Mixed Precision Training

    Hi,

    First of all, I want to thank you for your great work. I'm using SRUs for speech enhancement; they do very well at a reasonable computational cost.

    I would like to know whether there is a way to train SRUs in mixed-precision mode. I tried to enable it by setting precision=16 in the PyTorch Lightning trainer, but that didn't do the trick.
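
    For completeness, this is roughly what I imagine the manual (non-Lightning) equivalent would look like. It is only a sketch and assumes the amp_recurrence_fp16 constructor flag mentioned in the AMP pull request above is the intended switch.

    import torch
    from sru import SRU

    rnn = SRU(256, 256, num_layers=4, amp_recurrence_fp16=True).cuda()
    x = torch.randn(100, 8, 256).cuda()

    opt = torch.optim.Adam(rnn.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()

    with torch.cuda.amp.autocast():
        output, c = rnn(x)
        loss = output.float().pow(2).mean()   # dummy loss for the sketch

    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()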

    Kind regards, Zadagu

    opened by Zadagu 1
  • Any documentation on using SRU++ ?

    Hello, I've read and really appreciate your team's wonderful work on SRU++. I want to use this architecture in other tasks, but I'm having trouble finding documentation on SRU++, i.e. how I can use SRU++ the same way as SRU (calling it directly from the sru library after installing via pip install sru). I have looked into the dev-3.0.0 branch, which seems to be the latest updated branch, but I still have no clue how to call and integrate SRU++ modules into my custom-defined PyTorch modules. Could you help me?

    opened by thangld201 1
  • FAILED: sru_cuda_kernel.cuda.o

    When I run the example, I hit this issue: FAILED: sru_cuda_kernel.cuda.o, and in the end it reports ninja: build stopped: subcommand failed. What should I do to solve this problem?

    opened by xianyu-123 0
  • Avoid unintended eager cuda initialization

    We noticed that the package initialization for sru eagerly triggers CUDA initialization because of the following chain of module imports: sru.modules -> sru.ops -> cuda_functional, where this last module executes the load function of torch.utils.cpp_extension.

    This was detected because of issues when running with the server framework in SUBPROCESS_MODE, which forks a new process to run the model. We got an error complaining that CUDA had already been initialized in the parent process, even though that was unnecessary because the parent is not meant to run inference on the model.

    This PR makes this loading lazier: concretely, we changed the code in sru.modules to avoid the eager import of sru.ops and instead postpone it until the first SRUCell is instantiated.
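
    Schematically, the change follows the usual lazy-import pattern. The snippet below is a simplified sketch of that pattern, not the literal diff.

    _sru_ops = None

    def _load_sru_ops():
        # import (and hence build/load the CUDA extension) only on first use,
        # so that merely importing the package does not initialize CUDA
        global _sru_ops
        if _sru_ops is None:
            from sru import ops as _ops   # the heavy import happens here, lazily
            _sru_ops = _ops
        return _sru_ops

    class LazyCellSketch:
        """Hypothetical stand-in for SRUCell showing where the deferred import goes."""
        def __init__(self):
            self.ops = _load_sru_ops()    # deferred until a cell is instantiated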

    The changes in this PR have been tested by checking out this branch on an AWS instance with a GPU and running pytest -sv test, which resulted in 141 passed, 161 warnings, and no failures. So we understand this is working as expected for both CPU and GPU settings.

    opened by dkasapp 0
  • Unknown builtin op: sru_cuda::sru_bi_forward_simple

    When using a bidirectional SRU, regular usage seems to be fine, and compilation to torchscript proceeds without error, but upon trying to infer with the compiled torchscript I get:

    Unknown builtin op: sru_cuda::sru_bi_forward_simple.

    Using PyTorch 1.10, sru 2.6.0, CUDA 11.3

    opened by ctlaltdefeat 2