Vector Quantization, in PyTorch

Overview

A vector quantization library, originally transcribed from DeepMind's TensorFlow implementation and conveniently packaged. It uses exponential moving averages to update the dictionary.
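
For intuition, the EMA update keeps running estimates of how often each code is used and of the sum of the vectors assigned to it, then sets each code to the (smoothed) running mean. Below is a minimal sketch of this standard update; the function and buffer names are illustrative, not the package's internal API.

import torch
import torch.nn.functional as F

def ema_codebook_update(codebook, cluster_size, embed_sum, x, indices, decay = 0.8, eps = 1e-5):
    # codebook: (codebook_size, dim), x: (num_vectors, dim), indices: (num_vectors,)
    onehot = F.one_hot(indices, codebook.shape[0]).type(x.dtype)

    # running estimates of per-code usage and of the sum of assigned vectors
    cluster_size.mul_(decay).add_(onehot.sum(0), alpha = 1 - decay)
    embed_sum.mul_(decay).add_(onehot.t() @ x, alpha = 1 - decay)

    # Laplace smoothing avoids dividing by zero for rarely used codes
    n = cluster_size.sum()
    smoothed = (cluster_size + eps) / (n + codebook.shape[0] * eps) * n

    # each code becomes the running mean of the vectors assigned to it
    codebook.copy_(embed_sum / smoothed.unsqueeze(1))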

VQ has been successfully used by DeepMind and OpenAI for high-quality generation of images (VQ-VAE-2) and music (Jukebox).

Install

$ pip install vector-quantize-pytorch

Usage

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,     # codebook size
    decay = 0.8,             # the exponential moving average decay, lower means the dictionary will change faster
    commitment = 1.          # the weight on the commitment loss
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x) # (1, 1024, 256), (1, 1024), (1)
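
The returned commitment loss is auxiliary: add it to your task loss and backprop once. A minimal end-to-end sketch follows, where the toy encoder, decoder, and loss weighting are placeholders, not part of the package (with the default EMA codebook, the codebook itself is not updated by the optimizer).

import torch
import torch.nn.functional as F
from vector_quantize_pytorch import VectorQuantize

# hypothetical toy autoencoder wrapped around the quantizer
encoder = torch.nn.Linear(32, 256)
decoder = torch.nn.Linear(256, 32)
vq = VectorQuantize(dim = 256, codebook_size = 512)

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr = 3e-4)

data = torch.randn(1, 1024, 32)

quantized, indices, commit_loss = vq(encoder(data))
recon = decoder(quantized)

# add the auxiliary commitment loss to the reconstruction loss and backprop once
loss = F.mse_loss(recon, data) + commit_loss
loss.backward()
opt.step()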

Variants

The SoundStream paper proposes using multiple vector quantizers to recursively quantize the residuals of the waveform. You can use this with the ResidualVQ class and one extra initialization parameter; a conceptual sketch of the residual scheme follows the example below.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    num_quantizers = 8,      # specify number of quantizers
    codebook_size = 1024,    # codebook size
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)

# (1, 1024, 256), (8, 1, 1024), (8, 1)
# (batch, seq, dim), (quantizer, batch, seq), (quantizer, batch)
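
Conceptually, each quantizer in the stack codes whatever the previous stages failed to capture, and the final quantized output is the sum over stages. A rough sketch of the idea (illustrative only, not the library's internals):

import torch
from vector_quantize_pytorch import VectorQuantize

# illustrative: independent quantizers applied to successive residuals
quantizers = [VectorQuantize(dim = 256, codebook_size = 1024) for _ in range(8)]

x = torch.randn(1, 1024, 256)

residual = x
quantized_out = torch.zeros_like(x)

for vq in quantizers:
    quantized, indices, commit_loss = vq(residual)
    residual = residual - quantized            # the next stage only sees what is left over
    quantized_out = quantized_out + quantized  # the final code is the sum over all stages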

Initialization

The SoundStream paper proposes that the codebook be initialized with the k-means centroids of the first batch. You can turn this feature on with the flag kmeans_init = True, for either the VectorQuantize or ResidualVQ class.

import torch
from vector_quantize_pytorch import ResidualVQ

residual_vq = ResidualVQ(
    dim = 256,
    codebook_size = 256,
    num_quantizers = 4,
    kmeans_init = True,   # set to True
    kmeans_iters = 10     # number of kmeans iterations to calculate the centroids for the codebook on init
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = residual_vq(x)
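
The k-means initialization itself amounts to a few iterations of Lloyd's algorithm over the first batch of encoder outputs, with the resulting centroids used as the initial codebook. A minimal sketch with illustrative names (not the package's internal code):

import torch

def kmeans_codebook_init(samples, num_codes, num_iters = 10):
    # samples: (num_vectors, dim) - e.g. the flattened first batch of encoder outputs
    means = samples[torch.randperm(samples.shape[0])[:num_codes]].clone()

    for _ in range(num_iters):
        dists = torch.cdist(samples, means)       # (num_vectors, num_codes)
        assignments = dists.argmin(dim = -1)

        for k in range(num_codes):
            assigned = samples[assignments == k]
            if assigned.numel() > 0:              # leave empty clusters untouched
                means[k] = assigned.mean(dim = 0)

    return means  # used as the initial codebook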

Increasing codebook usage

This repository contains a few techniques from various papers for combating "dead" codebook entries, a common problem when using vector quantizers.

Lower codebook dimension

The Improved VQGAN paper proposes keeping the codebook in a lower dimension. The encoder outputs are projected down to the codebook dimension before quantization and projected back up to the original dimension afterwards. You can set this with the codebook_dim hyperparameter.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    codebook_dim = 16      # paper proposes setting this to 32 or as low as 8 to increase codebook usage
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
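
Under the hood this amounts to a linear projection down to the codebook dimension before the nearest-neighbor lookup, and a projection back up afterwards. Roughly, with illustrative names:

import torch

dim, codebook_dim, codebook_size = 256, 16, 256

project_in  = torch.nn.Linear(dim, codebook_dim)   # encoder output -> codebook space
project_out = torch.nn.Linear(codebook_dim, dim)   # quantized code -> model dimension
codebook    = torch.randn(codebook_size, codebook_dim)

x = torch.randn(1, 1024, dim)
z = project_in(x).reshape(-1, codebook_dim)        # (1024, 16)

dists = torch.cdist(z, codebook)                   # (1024, 256) distances to each code
indices = dists.argmin(dim = -1)
quantized = project_out(codebook[indices]).reshape(x.shape)   # back to (1, 1024, 256)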

Cosine similarity

The Improved VQGAN paper also proposes l2-normalizing the codes and the encoded vectors, which boils down to using cosine similarity as the distance. They claim that constraining the vectors to a sphere improves code usage and downstream reconstruction. You can turn this on by setting use_cosine_sim = True.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 256,
    use_cosine_sim = True   # set this to True
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
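
With cosine similarity, both the encoder outputs and the codebook are l2-normalized before the lookup, so the nearest code is simply the one with the highest inner product. Roughly, with illustrative names:

import torch
import torch.nn.functional as F

codebook = torch.randn(256, 256)                   # (codebook_size, dim)
x = torch.randn(1, 1024, 256)

x_norm        = F.normalize(x, dim = -1)           # unit-length encoder outputs
codebook_norm = F.normalize(codebook, dim = -1)    # unit-length codes

sim = x_norm @ codebook_norm.t()                   # cosine similarities, (1, 1024, 256)
indices = sim.argmax(dim = -1)
quantized = codebook_norm[indices]                 # quantized vectors lie on the unit sphere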

Expiring stale codes

Finally, the SoundStream paper has a scheme where codes whose hit counts fall below a certain threshold are replaced with a randomly selected vector from the current batch. You can set this threshold with the threshold_ema_dead_code keyword.

import torch
from vector_quantize_pytorch import VectorQuantize

vq = VectorQuantize(
    dim = 256,
    codebook_size = 512,
    threshold_ema_dead_code = 2  # should actively replace any codes that have an exponential moving average cluster size less than 2
)

x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)
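
The expiration heuristic itself is straightforward: any code whose EMA cluster size drops below the threshold is overwritten with a randomly sampled vector from the current batch. A sketch, with illustrative names:

import torch

def expire_dead_codes(codebook, cluster_size, batch, threshold = 2):
    # batch: (num_vectors, dim) - the current batch of encoder outputs
    dead = cluster_size < threshold                # codes with too few recent hits

    if not dead.any():
        return

    # overwrite each dead code with a randomly chosen vector from the batch
    sampled = batch[torch.randint(0, batch.shape[0], (int(dead.sum()),))]
    codebook[dead] = sampled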

Citations

@misc{oord2018neural,
    title   = {Neural Discrete Representation Learning},
    author  = {Aaron van den Oord and Oriol Vinyals and Koray Kavukcuoglu},
    year    = {2018},
    eprint  = {1711.00937},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
@misc{zeghidour2021soundstream,
    title   = {SoundStream: An End-to-End Neural Audio Codec},
    author  = {Neil Zeghidour and Alejandro Luebs and Ahmed Omran and Jan Skoglund and Marco Tagliasacchi},
    year    = {2021},
    eprint  = {2107.03312},
    archivePrefix = {arXiv},
    primaryClass = {cs.SD}
}
@inproceedings{anonymous2022vectorquantized,
    title   = {Vector-quantized Image Modeling with Improved {VQGAN}},
    author  = {Anonymous},
    booktitle = {Submitted to The Tenth International Conference on Learning Representations },
    year    = {2022},
    url     = {https://openreview.net/forum?id=pfNyExj7z2},
    note    = {under review}
}
Comments
  • Quantizers are not DDP/AMP compliant

    Hi Lucidrains,

    Thanks for the amazing work you do by implementing all those papers!

    Is there a plan to make the Quantizer be compliant with:

    • DDP - They need an all gather before calculating anything so the updates are exactly the same across all ranks
    • AMP - In my experience, if AMP touches the quantizers it screws up the gradient magnitudes, making them NaN/overflow

    If you want I can have a go at it.

    opened by danieltudosiu 7
  • Commitment Loss Problems

    Hello,

    First of all, thank you so much for this powerful implementation.

    I have been training VQ-VAEs to generate faces from FFHQ 128x128, and I always hit the same problem: if I use the commitment loss (0.25) and the gamma (0.99) as in the original paper, the commitment loss seems to grow without bound. I know you said it is an auxiliary loss and not that important, but is this normal behavior? If not, how can I avoid it in case I want to use this loss?

    Thank you so much in advance!

    opened by pedrocg42 6
  • fix dimensions: the codebook must look at data by taking each time frame individually

    In the SoundStream article: "This vector quantizer learns a codebook of N vectors to encode each D-dimensional frame of enc(x)."

    opened by wesbz 5
  • kmeans and ddp hangs

    kmeans with DDP hangs for me. DDP is initialized by PyTorch Lightning in my case. I have several questions:

    In https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L98

    all_num_samples = all_gather_sizes(local_samples, dim = 0) should it be dim = 1 (as dim 0 is the codebook dimension)?

    Then in https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L93 it just hangs for me. I am not totally sure, but I believe distributed.broadcast in

    https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L90

    is called with incompatible shapes. See https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast

    tensor must have the same number of elements in all processes participating in the collective.

    opened by tasptz 4
  • Cannot Converge with L2 Loss

    I am trying to quantize the latent vector. To be specific, I use an encoder to get the latent representation z of the input, quantize z, and then send it into the decoder.

    However, during my experiments I found that the reconstruction loss does not decrease with the L2 loss, namely the EuclideanCodebook. The model does converge with cosine similarity. Any idea about this phenomenon?

    I think cosine similarity only considers the direction of a vector rather than its scale. I still want to use the EuclideanCodebook.

    opened by kingnobro 3
  • Error when using gloo as DDP backend

    Hello! Thank you for your great work implementing the VQ layer. When I use the VQ layer in DDP mode with gloo as the backend, as suggested in the README, I get the following error: terminate called after throwing an instance of 'gloo::EnforceNotMet' what(): [enforce fail at ../third_party/gloo/gloo/transport/tcp/pair.cc:510] op.preamble.length <= op.nbytes. 8773632 vs 8386560

    Do you have any ideas on how to solve this problem?
    I also tried using nccl as the backend, but the program just hangs forever...

    opened by Saltychtao 3
  • codebook initialization

    Hi, Thank you for this great work. It's quite useful!

    I have been having problems with index collapse and I'm not sure where it's coming from. Upon digging into the code, it seems that when k-means is not used to initialize the codebook vectors, randn (a normal distribution) is used to initialize them. The VQ-VAE paper specifically uses a uniform distribution for initialization, which allows the authors to ignore the KL divergence during training.

    This is from the vqvae paper: "Since we assume a uniform prior for z, the KL term that usually appears in the ELBO is constant w.r.t. the encoder parameters and can thus be ignored for training."

    Is there any reason why you changed to a normal distribution here?

    Thanks!

    opened by ramyamounir 3
  • possible papers (and code) of interest

    Have you had a look at bitsandbytes?

    https://github.com/TimDettmers/bitsandbytes

    https://arxiv.org/abs/2208.07339

    https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/

    Also this paper on tradeoffs for various 8 bit quantization formats,

    https://arxiv.org/pdf/2206.02915v1.pdf

    opened by Thomas-MMJ 2
  • RQ-VAE: How can I get a list of all learned codebook vectors (as indexed in the "indices")?

    Hi Lucid, I am working on quantizing CLIP image embeddings with your RQ-VAE. It works pretty well.

    Next I want to take all learned codebook vectors and add them to the vocab of a GPT (as frozen token embeddings).

    The idea is to train a GPT with CLIP image embeddings in between texts, e.g. IMAGE-CAPTION or TEXT-IMAGE-TEXT-IMAGE-... (Flamingo-style).

    If this works, then the GPT could maybe also learn to generate quantized CLIP image embeddings token by token, and then e.g. show images through a) retrieval or b) a DALL-E 2 decoder :)

    ... So my question is: once the RQ-VAE is trained and I can get the quantized reconstructions and indices, how can I get a list or tensor of the actual codebook (all possible vectors from the RQ vocab)? :)

    opened by christophschuhmann 2
  • Expire codes heuristic is replacing inputs

    Thanks for the implementation!

    One question, should this

    https://github.com/lucidrains/vector-quantize-pytorch/blob/ebce893fff695845f7fe0f04d1400d2c29b94f98/vector_quantize_pytorch/vector_quantize_pytorch.py#L177

    actually be self.expire_codes_(quantize)?

    opened by kashif 2
  • orthogonal regularization loss useless?

    Because the codebooks are not registered as trainable parameters, and the orthogonal loss is only a function of the codebooks, is the orthogonal loss entirely useless?

    opened by GallagherCommaJack 2
  • EMA update on CosineCodebook

    The original ViT-VQGAN paper does not seem to use an EMA update for codebook learning, since their codebook consists of unit-normalized vectors.

    In particular, to my understanding, the EMA update does not quite make sense when the encoder outputs and the codebook vectors are unit-normalized.

    What's your take on this? Should we NOT use EMA update with CosineCodebook?

    opened by le4m 3
  • Loss and Backprop Details

    Hi,

    During training, the VQ-VAE backprops on multiple losses. When feeding feature maps to the model we are given a loss; should I manually backpropagate and update weights (the good ol' loss.backward() and optimizer.step()), or is it handled implicitly?

    opened by Malik7115 3
  • Missing parameter of beta

    Hi, in the original VQ-VAE paper, the commit_loss is defined as

    (quantize.detach() - x) ** 2 + beta * (quantize - x.detach()) ** 2

    where beta is usually set to 0.25. But the commit_loss is defined as follows in your implementation:

    F.mse_loss(quantize.detach(), x)

    So I wonder whether the parameter beta is set to 1 by default, or whether the second term is missing? Thank you very much.

    opened by Corleone-Huang 1
  • No way of training the codebook

    Hi! Could you please explain how the codebook vectors are updated if the codebook vectors are not required to be orthogonal?

    1. embed tensors in both Euclidean and CosineSim codebooks are registered as buffers, so they can't be updated at all
    2. There is no loss on the codebook vectors that moves them closer to the input

    Am I missing something? It seems that right now there is no way of updating the codebook vectors without the orthogonal loss.

    opened by RafailFridman 5
  • Plugging vector-quantize-pytorch into taming-transformers

    Hi,

    I noticed your architecture could be plugged into the pipeline from https://github.com/CompVis/taming-transformers. I have proposed code here (https://github.com/tanouch/taming-transformers) doing that. It makes it possible to properly compare the different features proposed in your repo (lower codebook dimension, cosine similarity, orthogonal regularization loss, etc.) with the original formulation.

    The code from this repo can be seen in both files

    • taming-transformers/taming/models/vqgan.py
    • taming-transformers/taming/modules/vqvae/quantize.py

    As you can see, it is easy to launch a large-scale training run with your proposed architecture.

    I am not sure whether this issue belongs here or in the taming-transformers repo, but I thought you might be interested. Thanks again for your work and these open-sourced repositories!

    opened by tanouch 2