A pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

Overview

Compact Bilinear Pooling for PyTorch.

This repository contains a pure Python implementation of Compact Bilinear Pooling and Count Sketch for PyTorch.

This version relies on the FFT implementation provided with PyTorch 0.4.0 onward. For older versions of PyTorch, use the tag v0.3.0.

Installation

Run setup.py, for instance:

python setup.py install

Usage

class compact_bilinear_pooling.CompactBilinearPooling(input1_size, input2_size, output_size, h1=None, s1=None, h2=None, s2=None)

The optional arguments h1, s1, h2, s2 fix the hash indices and signs of the two count sketches; if omitted, they are generated randomly at construction.

Basic usage:

import torch
from compact_bilinear_pooling import CountSketch, CompactBilinearPooling

input_size = 2048
output_size = 16000
mcb = CompactBilinearPooling(input_size, input_size, output_size).cuda()
x = torch.rand(4, input_size).cuda()
y = torch.rand(4, input_size).cuda()

z = mcb(x, y)
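
CountSketch can also be applied on its own; a minimal sketch, assuming its constructor mirrors the (input_size, output_size, h, s) pattern of the class signature above:

cs = CountSketch(input_size, output_size).cuda()
v = torch.rand(4, input_size).cuda()

s = cs(v)  # count-sketch projection of v into an output_size-dimensional space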

Test

A couple of tests of the Compact Bilinear Pooling implementation and its gradient can be run using:

python test.py

Comments
  • The value in ComplexMultiply_backward function

    Hi @gdlg, thanks for this nice work. I'm confused about the backward procedure of complex multiplication, and I hope you can help me figure it out.

    In the forward pass,

    Z = XY = (Rx + i * Ix)(Ry + i * Iy) = (RxRy - IxIy) + i * (IxRy + RxIy) = Rz + i * Iz
    

    In the backward pass, according to the chain rule, we have

    grad_X = grad_Z * (dZ/dX)
           = grad_Z * Y
           = (R_gz + i * I_gz)(Ry + i * Iy)
           = (R_gzRy - I_gzIy) + i * (I_gzRy + R_gzIy)
    

    So why is this line implemented using value = 1 for the real part and value = -1 for the imaginary part?

    Is there something wrong in my thoughts? Thanks.
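
    One way to sanity-check this is the conjugate that appears in the gradient of a complex product for a real-valued loss. A minimal numeric check with modern PyTorch complex tensors (an illustrative sketch, not this repository's code):

    import torch

    # For a real-valued loss L and Z = X * Y, autograd yields
    # grad_X = grad_Z * conj(Y); the conjugate is where the extra
    # value = -1 on the imaginary part comes from.
    X = torch.randn(4, dtype=torch.complex64, requires_grad=True)
    Y = torch.randn(4, dtype=torch.complex64)
    Z = X * Y
    L = Z.real.sum() + Z.imag.sum()   # simple real-valued loss
    L.backward()
    grad_Z = torch.full((4,), 1 + 1j, dtype=torch.complex64)  # dL/dRz = dL/dIz = 1
    print(torch.allclose(X.grad, grad_Z * Y.conj()))          # prints True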

    opened by KaiyuYue 8
  • The miss of Rfft

    When I run the test module, it reports that pytorch_fft.fft.autograd has no attribute Rfft. Which version of pytorch_fft should I install to make this code work?
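
    As noted in the overview, only the v0.3.0 tag depends on the external pytorch_fft package; later revisions use the FFT ops built into PyTorch 0.4.0 onward. A quick environment check (a sketch, assuming the current master branch):

    import torch

    # On the post-v0.3.0 code no pytorch_fft install is needed at all.
    print(torch.__version__)        # expect >= 0.4.0
    print(hasattr(torch, 'rfft'))   # built-in real FFT (PyTorch 0.4 through 1.7)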

    opened by PeiqinZhuang 8
  • Save the model - TypeError: can't pickle Rfft objects

    How do you save and load the model? I'm using torch.save, which causes the following error:

    File "x/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 135, in save
        return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    File "x/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 117, in _with_file_like
        return body(f)
    File "x/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 135, in <lambda>
        return _with_file_like(f, "wb", lambda f: _save(obj, f, pickle_module, pickle_protocol))
    File "x/anaconda3/lib/python3.6/site-packages/torch/serialization.py", line 198, in _save
        pickler.dump(obj)
    TypeError: can't pickle Rfft objects
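
    A common workaround (a general PyTorch sketch, not specific to this repository) is to save the model's state_dict rather than pickling the whole module, which sidesteps unpicklable attributes such as FFT plan objects:

    import torch

    # Save only parameters and buffers instead of pickling the module itself.
    torch.save(model.state_dict(), 'model.pth')

    # To restore, rebuild the model first, then load the saved state.
    model = MyModel()  # hypothetical constructor standing in for your model class
    model.load_state_dict(torch.load('model.pth'))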
    
    
    opened by idansc 3
  • Multi GPU support

    I modified

    class CompactBilinearPooling(nn.Module):
        def forward(self, x, y):
            return CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y)
    

    to

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)  # NCHW to NHWC
        y = Variable(x.data.clone())
        out = CompactBilinearPoolingFn.apply(self.sketch1.h, self.sketch1.s, self.sketch2.h, self.sketch2.s, self.output_size, x, y).permute(0, 3, 1, 2)  # back to NCHW
        out = nn.functional.adaptive_avg_pool2d(out, 1)  # N,C,1,1
        # element-wise signed square root and instance-wise l2 normalization
        out = (torch.sqrt(nn.functional.relu(out)) - torch.sqrt(nn.functional.relu(-out))) / torch.norm(out, 2, 1, True)
        return out
    

    This makes the compact pooling layer easier to plug into PyTorch CNNs:

    model.avgpool = CompactBilinearPooling(input_C, input_C, bilinear['dim'])
    model.fc = nn.Linear(int(model.fc.in_features/input_C*bilinear['dim']), num_classes)

    However, when I run this on multiple GPUs, I get the following error:

    Traceback (most recent call last):
      File "train3_bilinear_pooling.py", line 400, in <module>
        run()
      File "train3_bilinear_pooling.py", line 219, in run
        train(train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 326, in train
        return _each_epoch('train', train_loader, model, criterion, optimizer, epoch)
      File "train3_bilinear_pooling.py", line 270, in _each_epoch
        output = model(input_var)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 319, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 67, in forward
        replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 72, in replicate
        return replicate(module, device_ids)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 19, in replicate
        buffer_copies = comm.broadcast_coalesced(buffers, devices)
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/cuda/comm.py", line 55, in broadcast_coalesced
        for chunk in _take_tensors(tensors, buffer_size):
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/_utils.py", line 232, in _take_tensors
        if tensor.is_sparse:
      File "/home/member/fuwang/opt/anaconda/lib/python3.6/site-packages/torch/autograd/variable.py", line 68, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Variable' object has no attribute 'is_sparse'

    Do you have any ideas?

    opened by YanWang2014 3
  • AssertionError: False is not true

    Hi, I am back again. When running test.py, I got the following error:

    File "test.py", line 69, in test_gradients
        self.assertTrue(torch.autograd.gradcheck(cbp, (x,y), eps=1))
    AssertionError: False is not true

    What does this mean?
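
    For what it's worth, gradcheck compares analytical gradients against finite differences and is only reliable with double-precision inputs and a small eps; a minimal sketch (sizes and tolerances assumed, not taken from test.py):

    import torch
    from torch.autograd import gradcheck
    from compact_bilinear_pooling import CompactBilinearPooling

    # Finite-difference gradient check: gradcheck expects float64 inputs;
    # with float32, numerical error alone can make it return False.
    cbp = CompactBilinearPooling(8, 8, 16).double().cuda()
    x = torch.randn(2, 8, dtype=torch.float64, device='cuda', requires_grad=True)
    y = torch.randn(2, 8, dtype=torch.float64, device='cuda', requires_grad=True)
    print(gradcheck(cbp, (x, y), eps=1e-6, atol=1e-4))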

    opened by YanWang2014 2
  • Support for Pytorch 1.11?

    Hi, torch.fft() and torch.irfft() are no longer functions; torch.fft is now a module, and there appear to be a lot of changes to the parameters. I am currently trying to combine two types of features with compact bilinear pooling. Do you know how to port this code to PyTorch 1.11?
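
    For reference, the old one-dimensional calls map onto the torch.fft module roughly as follows (a porting sketch under assumed call sites, not a drop-in patch for this repository):

    import torch

    def rfft_compat(x):
        # PyTorch <= 1.7: torch.rfft(x, 1) returned a real tensor whose last
        # dimension held (real, imag) pairs. torch.fft.rfft returns a complex
        # tensor instead, so convert back to the old layout with view_as_real.
        return torch.view_as_real(torch.fft.rfft(x, dim=-1))

    def irfft_compat(X, n):
        # PyTorch <= 1.7: torch.irfft(X, 1, signal_sizes=(n,)).
        # view_as_complex rebuilds the complex input, and n plays the role
        # that signal_sizes played before.
        return torch.fft.irfft(torch.view_as_complex(X), n, dim=-1)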

    opened by bhosalems 1
  • Training does not converge after joining compact bilinear layer

    Source code:

    x = self.features(x)  # [4,512,28,28]
    batch_size = x.size(0)
    x = (torch.bmm(x, torch.transpose(x, 1, 2)) / 28 ** 2).view(batch_size, -1)
    x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
    x = self.classifiers(x)
    return x

    My code:

    x = self.features(x)  # [4,512,28,28]
    x = x.view(x.shape[0], x.shape[1], -1)  # [4,512,784]
    x = x.permute(0, 2, 1)  # [4,784,512]
    x = self.mcb(x, x)  # [4,784,512]
    batch_size = x.size(0)
    x = x.sum(1)  # in 2D, dim=0 sums over columns and dim=1 over rows; this tensor is 3D, so dim=1 sums over the spatial positions
    x = torch.nn.functional.normalize(torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10))
    x = self.classifiers(x)
    return x

    The training does not converge after modification. Why? Is it a problem with my code?

    opened by roseif 3
Releases (v0.4.0)

Owner

Grégoire Payen de La Garanderie