PyTorch misc

Collection of code snippets I've written for the PyTorch discussion board.

All scripts were tested using the PyTorch 1.0 preview and torchvision 0.2.1.

Additional libraries, e.g. numpy or pandas, are used in a few scripts.

Some scripts might be a good starting point for a tutorial.

Overview

  • accumulate_gradients - Comparison of accumulated gradients/losses to a vanilla batch update (sketched after this list).
  • adaptive_batchnorm - Adaptive BN implementation using two additional parameters: out = a * x + b * bn(x) (see the module sketch below).
  • adaptive_pooling_torchvision - Example of using adaptive pooling layers in pretrained models to use different spatial input shapes.
  • batch_norm_manual - Comparison of PyTorch BatchNorm layers and a manual calculation (example below).
  • change_crop_in_dataset - Change the image crop size on the fly using a Dataset.
  • channel_to_patches - Permute image data so that channel values of each pixel are flattened to an image patch around the pixel.
  • conv_rnn - Combines a 3DCNN with an RNN; uses windowed frames as inputs.
  • csv_chunk_read - Provide data chunks from a continuous .csv file.
  • densenet_forwardhook - Use forward hooks to get intermediate activations from densenet121. Uses separate modules to process these activations further (a minimal hook example follows the list).
  • edge_weighting_segmentation - Apply weighting to edges for a segmentation task.
  • image_rotation_with_matrix - Rotate an image by a given angle using (1) a nested loop and (2) a rotation matrix and mesh grid.
  • LocallyConnected2d - Implementation of a locally connected 2d layer.
  • mnist_autoencoder - Simple autoencoder for MNIST data. Includes visualizations of output images, intermediate activations and conv kernels.
  • mnist_permuted - MNIST training using permuted pixel locations.
  • model_sharding_data_parallel - Model sharding with DataParallel using 2 pairs of 2 GPUs.
  • momentum_update_nograd - Script showing how parameters are updated when an optimizer with momentum/running estimates is used, even if gradients are zero (demonstrated below).
  • pytorch_redis - Script to demonstrate loading data from Redis using a PyTorch Dataset and DataLoader.
  • shared_array - Script to demonstrate the usage of shared arrays using multiple workers.
  • shared_dict - Script to demonstrate the usage of shared dicts using multiple workers.
  • unet_demo - Simple UNet demo.
  • weighted_sampling - Usage of WeightedRandomSampler on a dataset with a 99-to-1 class imbalance (see the last sketch below).
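
A few of the snippets above can be summarized in a handful of lines. First, a minimal sketch of the accumulate_gradients comparison: for a mean-reduced loss, accumulating scaled chunk losses produces the same gradient as a single pass over the full batch. The model and batch sizes here are made up for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 10)
y = torch.randn(64, 1)

# Vanilla update: one forward/backward pass on the full batch.
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
grad_full = model.weight.grad.clone()

# Accumulated update: 4 chunks of 16 samples each.
optimizer.zero_grad()
for xc, yc in zip(x.chunk(4), y.chunk(4)):
    # Scale each chunk loss so the accumulated gradient matches
    # the mean-reduced full-batch gradient.
    loss = criterion(model(xc), yc) / 4
    loss.backward()

print(torch.allclose(grad_full, model.weight.grad, atol=1e-6))  # expect True
```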
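
The adaptive_batchnorm formula, out = a * x + b * bn(x), as a small module. Treating a and b as single scalars, initialized so the layer starts as plain BN, is an assumption about the exact parameterization.

```python
import torch
import torch.nn as nn

class AdaptiveBatchNorm2d(nn.Module):
    """BN with two learnable mixing parameters: out = a * x + b * bn(x)."""
    def __init__(self, num_features, **kwargs):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, **kwargs)
        # Scalar mixing weights (an assumption); a=0, b=1 starts as plain BN.
        self.a = nn.Parameter(torch.zeros(1))
        self.b = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.a * x + self.b * self.bn(x)

x = torch.randn(8, 3, 32, 32)
abn = AdaptiveBatchNorm2d(3)
print(abn(x).shape)  # torch.Size([8, 3, 32, 32])
```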
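
A sketch of the batch_norm_manual comparison in training mode: compute per-channel batch statistics with the biased variance and apply the affine parameters. Running-stat updates and eval mode are omitted here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm2d(3)
bn.train()
x = torch.randn(4, 3, 8, 8)

out_ref = bn(x)

# Manual calculation: BatchNorm normalizes with the biased batch variance
# (unbiased=False), then applies the affine parameters.
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_hat = (x - mean) / torch.sqrt(var + bn.eps)
out_manual = bn.weight.view(1, -1, 1, 1) * x_hat + bn.bias.view(1, -1, 1, 1)

print(torch.allclose(out_ref, out_manual, atol=1e-6))  # expect True
```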
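
The forward-hook pattern behind densenet_forwardhook, reduced to a tiny stand-in model so it runs without torchvision; with densenet121 you would register the hook on the desired named submodule instead.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)

activations = {}

def get_hook(name):
    # The hook receives (module, input, output) and stashes a detached copy.
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

# Register on the first conv layer; for densenet121 you could register on
# e.g. a dense block or any other named submodule instead.
model[0].register_forward_hook(get_hook('conv1'))

x = torch.randn(1, 3, 32, 32)
_ = model(x)
print(activations['conv1'].shape)  # torch.Size([1, 8, 32, 32])
```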
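
The momentum_update_nograd effect in its smallest form: after one step with a real gradient, SGD with momentum keeps moving the parameter on later steps even when the gradient is zero, because the momentum buffer is still applied.

```python
import torch

p = torch.nn.Parameter(torch.ones(1))
optimizer = torch.optim.SGD([p], lr=0.1, momentum=0.9)

# First step with a real gradient fills the momentum buffer.
p.grad = torch.ones(1)
optimizer.step()
print(p.item())  # 0.9: moved by -0.1

# Second step with a zero gradient: the parameter still moves,
# driven purely by the momentum buffer.
p.grad = torch.zeros(1)
optimizer.step()
print(p.item())  # 0.81: moved again, by 0.9 * the previous update
```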
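
Finally, a weighted_sampling sketch for the 99-to-1 imbalance: weight each sample by the inverse frequency of its class so WeightedRandomSampler draws both classes roughly equally often.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Imbalanced toy dataset: 990 samples of class 0, 10 of class 1.
targets = torch.cat([torch.zeros(990), torch.ones(10)]).long()
data = torch.randn(1000, 4)
dataset = TensorDataset(data, targets)

# Per-sample weight = 1 / count of that sample's class.
class_counts = torch.bincount(targets).float()
weights = 1.0 / class_counts[targets]

sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
loader = DataLoader(dataset, batch_size=100, sampler=sampler)

x, y = next(iter(loader))
print(y.float().mean())  # roughly 0.5: classes are drawn about equally often
```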

Feedback is very welcome!
