PyTorch implementation of normalizing flow models

Overview

Normalizing Flows

This is a PyTorch implementation of several normalizing flows, as well as a normalizing-flow variational autoencoder. It is used in the articles A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization and Resampling Base Distributions of Normalizing Flows.

Implemented Flows

The package comprises the most popular flow architectures, among them Real NVP, Glow, Masked Autoregressive Flows, Neural Spline Flows (including circular variants), and Residual Flows; see the release notes below for details.

Installation

The latest version of the package can be installed via pip:

pip install --upgrade git+https://github.com/VincentStimper/normalizing-flows.git

If you want to use a GPU, make sure that PyTorch is set up correctly by following the instructions at the PyTorch website.

To run the example notebooks, clone the repository first:

git clone https://github.com/VincentStimper/normalizing-flows.git

and then install the dependencies:

pip install -r requirements_examples.txt
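
As a quick check that the installation works, here is a minimal usage sketch (adapted from the issue snippet further down this page):

    import normflows as nf
    import torch

    # Build a tiny flow: a diagonal Gaussian base distribution followed by
    # two flow layers (mirrors the snippet from the comments section below)
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1),
        ],
    )

    with torch.no_grad():
        samples, _ = flow.sample(4)  # draw 4 samples from the flow

    print(samples)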
Comments
  • Replication of comparable glow with papers

    Hi, thanks for developing this package. I find it very neat and flexible and would like to use it for my research. I noticed that in the paper "Resampling Base Distributions of Normalizing Flows", the bits per dimension (bpd) of your Glow model reaches 3.2–3.3, which is comparable to the original paper.

    I was wondering whether you could share your training scripts and the details needed to train Glow on CIFAR-10 to the above bpd. The current example notebook is too sketchy and only reaches a bpd of 3.8. Thanks very much!

    opened by prclibo 1
  • Changed ActNorm flag to buffer to allow saving

    Without it being a buffer, when you load a model and sample from it, the flow thinks it is the first run through and overwrites all the trained ActNorm parameters.
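
    A minimal sketch of the mechanism (hypothetical class, not the package's exact code): registering the initialization flag as a buffer makes it part of the state_dict, so a loaded model knows the data-dependent initialization has already happened.

    import torch
    import torch.nn as nn

    class ActNorm(nn.Module):
        def __init__(self, num_channels):
            super().__init__()
            self.loc = nn.Parameter(torch.zeros(1, num_channels))
            self.log_scale = nn.Parameter(torch.zeros(1, num_channels))
            # a buffer, not a plain attribute, so it is saved and loaded with the model
            self.register_buffer("data_dep_init_done", torch.tensor(0.0))

        def forward(self, z):
            if self.data_dep_init_done == 0.0:
                # one-time data-dependent initialization on the first batch
                with torch.no_grad():
                    self.log_scale.data = -torch.log(z.std(dim=0, keepdim=True) + 1e-6)
                    self.loc.data = -z.mean(dim=0, keepdim=True)
                self.data_dep_init_done.fill_(1.0)
            z = (z + self.loc) * torch.exp(self.log_scale)
            log_det = self.log_scale.sum().expand(z.shape[0])
            return z, log_det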

    opened by arc82 1
  • Fix deprecation warning

    Fixes the deprecation warning documented in https://github.com/VincentStimper/normalizing-flows/issues/12.


    Sanity check: Running this before the change:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    gives:

    tensor([[0.4528],
            [0.6410],
            [0.5200],
            [0.5567]])
    

    After the change, the output stays the same.
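
    For reference, the replacement suggested by the warning itself, as a standalone sketch:

    import torch

    A = torch.triu(torch.rand(3, 3)) + torch.eye(3)  # upper-triangular matrix
    B = torch.rand(3, 2)

    # Deprecated (removed in later PyTorch releases):
    #   X = torch.triangular_solve(B, A).solution
    # Replacement suggested by the warning (note the reversed argument order):
    X = torch.linalg.solve_triangular(A, B, upper=True)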

    opened by timothygebhard 0
  • Sampling from flow raises deprecation warning

    Running the following minimal example:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    raises a UserWarning about an upcoming deprecation:

    /Users/timothy/Desktop/normalizing-flows/normflows/flows/mixing.py:437: UserWarning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangular and will be removed in a future PyTorch release.
    torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs.
    X = torch.triangular_solve(B, A).solution
    should be replaced with
    X = torch.linalg.solve_triangular(A, B). (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2189.)
      outputs, _ = torch.triangular_solve(
    

    I will submit a PR shortly that fixes the issue 🙂

    opened by timothygebhard 0
  • Vberenz/mkdocs

    Added the mkdocs structure and refactored the docstrings (and applied black).

    To install the dependencies for building the documentation:

    pip install -e ".[docs]"
    

    To view the docs:

    mkdocs serve
    

    This starts a live server. Modifications of the documentation are rendered live (excluding modifications to the docstrings).

    To build the docs:

    mkdocs build
    

    This will create the site folder (including index.html).

    To extend the docs:

    Markdown files can be added to the docs folder; then the "nav" section of the mkdocs.yml file has to be updated, e.g.

    nav:
      - about: index.md
      - API: references.md
      - my other page: mymarkdown.md
    
    • Good to know: Markdown can be used in the docstrings.
    • Apparently, deploying the built documentation to GitHub is as simple as calling mkdocs gh-deploy (I have not tried it yet).

    What I still need to do:

    • continuous build on GitHub (documentation is rebuilt and deployed on each merge into master)
    • a correction pass over the docstrings (I updated them, but did not check them one by one yet)
    • improving the layout (especially for the API), which is not so nice yet
    • looking into displaying Jupyter notebooks, which mkdocs apparently supports
    opened by vincentberenz 0
  • feat: add optional gradient clipping to HMC flow

    Adds the option to clip the gradient of the target log-probability within HMC. For some target distributions, the log-probability may have very large gradients, which can cause numerical instability; gradient clipping can help with this.
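
    A minimal sketch of the idea (hypothetical helper, not necessarily the PR's exact implementation): clamp the gradient of the target log-probability before it enters the leapfrog update.

    import torch

    def clipped_grad_log_prob(target, z, clip=1e4):
        # gradient of log p(z) w.r.t. z, clamped element-wise to keep the
        # leapfrog integrator numerically stable for ill-behaved targets
        z = z.detach().requires_grad_(True)
        log_prob = target.log_prob(z).sum()
        (grad,) = torch.autograd.grad(log_prob, z)
        return grad.clamp(-clip, clip)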

    opened by lollcat 0
  • Added minor fixes for bugs and warnings

    This commit made three changes to the original repo:

    1. Fixes the warning regarding the 'is' keyword:
    /home/donglin/Github/normalizing-flows/normflow/nets.py:45: SyntaxWarning: "is" with a literal. Did you mean "=="?
      if output_fn is "sigmoid":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:47: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "relu":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:49: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "tanh":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:51: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "clampexp":
    
    2. Fixes the warning regarding 'torch.qr':
    /home/donglin/Github/normalizing-flows/normflow/flows.py:616: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
    The boolean parameter 'some' has been replaced with a string parameter 'mode'.
    Q, R = torch.qr(A, some)
    should be replaced with
    Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at  /opt/conda/conda-bld/pytorch_1623448278899/work/aten/src/ATen/native/BatchLinearAlgebra.cpp:1940.)
      Q = torch.qr(torch.randn(self.num_channels, self.num_channels))[0]
    
    3. Eliminated the "nf.util.ToDevice" call during the data pre-processing in glow.ipynb
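
    Minimal sketches of the first two fixes, following the replacements suggested by the warnings themselves:

    import torch

    # Fix 1: 'is' tests object identity and only works for string literals by
    # accident of interning, so the comparison should use '=='
    output_fn = "sigmoid"
    if output_fn == "sigmoid":  # was: if output_fn is "sigmoid":
        act = torch.sigmoid

    # Fix 2: torch.qr is deprecated; the warning suggests torch.linalg.qr with
    # an explicit mode string instead of the boolean 'some' parameter
    A = torch.randn(4, 4)
    Q, R = torch.linalg.qr(A, mode="reduced")  # was: torch.qr(A, some=True)
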
    opened by Donglin-Wang2 0
  • RuntimeError: output with shape [32] doesn't match the broadcast shape [1, 32]

    Hi, I'd suggest adding some other tutorials, for example for the ClassCondFlow; while trying to write one on my own, I keep encountering this error.

    opened by maulberto3 0
  • Normalizing Flow vs Normalizing Flow VAE behavior

    I can't help but wonder why the NormalizingFlow class uses the flows' inverse method when computing forward_kld but, on the contrary, NormalizingFlowVAE uses the flows' forward method.

    This way, when trying to fit MNIST with NormalizingFlow and passing a batch of, say, (64, 784) images during training, I get the following error:

         34 for i in range(len(self.flows) - 1, -1, -1):
         35     z, log_det = self.flows[i].inverse(z)
    ---> 36     log_q += log_det
         37 log_q += self.q0.log_prob(z)
         38 return -torch.mean(log_q)
    
    RuntimeError: output with shape [64] doesn't match the broadcast shape [1, 64]
    

    Any help/suggestion?
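
    For context, the broadcasting mechanics behind the error can be reproduced in isolation (shapes assumed from the traceback):

    import torch

    log_q = torch.zeros(64)       # accumulated log prob, one entry per sample
    log_det = torch.zeros(1, 64)  # log determinant with a leftover leading dim

    try:
        log_q += log_det          # in-place add cannot grow log_q to [1, 64]
    except RuntimeError as e:
        print(e)  # output with shape [64] doesn't match the broadcast shape [1, 64]

    log_q += log_det.reshape(-1)  # flattening the extra dimension resolves it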

    opened by maulberto3 0
  • Inconsistency between log_q and log_p in Encoder and NormalizingFlowVAE

    In the NormalizingFlowVAE class in core.py, this line says that the encoder outputs log_q:

    z, log_q = self.q0(x, num_samples=num_samples)

    Suppose that, as in this example, the encoder Gaussian (q0) is parameterized by an MLP. Looking at the distributions/encoder.py source code, the forward method of the NNDiagGaussian class says that it outputs log_p:

    return z, log_p

    Inconsistency or not?

    opened by maulberto3 0
  • NormalizingFlow class in core.py does not provide context in forward_kld

    Thank you for a repo that makes it easy to work with a normalizing flow of one's choice!

    I would like to implement a normalizing flow that optimizes multiple target distributions at once, depending on the context I provide to it. Yet currently, as far as I can tell, no context can be provided to the .forward_kld method of the NormalizingFlow class.
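
    A sketch of what such a signature might look like (hypothetical, mirroring the loop from the traceback in the issue above; not the package's current API):

    import torch

    # Hypothetical extension: forward_kld accepting a context tensor that is
    # passed through to each flow layer (assumes the layers accept a context)
    def forward_kld(self, x, context=None):
        log_q = torch.zeros(len(x), device=x.device)
        z = x
        for flow in reversed(self.flows):
            z, log_det = flow.inverse(z, context=context)
            log_q += log_det
        log_q += self.q0.log_prob(z)
        return -torch.mean(log_q)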

    Would be great if that's added!

    Cheers,

    Yves

    opened by ybernaerts 3
  • Decoupled sampling and generation interfaces

    Hi, this PR adds some new interfaces for better control during sampling, e.g. to repeatedly generate from the same latent code while training the model. Please check whether it is useful to merge :)
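
    A hedged sketch of the idea, reusing the toy flow from the snippet further up (the split into drawing latent codes once and decoding them repeatedly is the point; the exact method names are assumptions):

    import normflows as nf
    import torch

    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1)],
    )

    with torch.no_grad():
        z, _ = flow.q0(16)    # draw latent codes from the base once
        x1 = flow.forward(z)  # decode them through the flow layers
        x2 = flow.forward(z)  # decode the same codes again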

    opened by prclibo 0
Releases (v1.5)
  • v1.5 (Dec 21, 2022)

    Rendered documentation was added to the repository; it is available at https://vincentstimper.github.io/normalizing-flows/.

    Tests were added for several flow modules; they can be run via pytest. These new tests uncovered several bugs, which were fixed. The current coverage is about 61%. More tests will be added in the future, as well as automated testing and coverage analysis on GitHub.

    Moreover, the code was adapted to the syntax of newer PyTorch versions.

  • v1.4 (Jul 26, 2022)

    The package is now available on PyPI, so from now on it can simply be installed with:

    pip install normflows
    

    The code was reformatted to conform to the black coding style.

    Moreover, the following fixes and additions are included:

    • The computation of the alpha-divergence objective was corrected.
    • A bug regarding sampling from the mixture of Gaussian base distribution was fixed.
    • A flow layer to warp periodic variables was added.
    • The dependency on the Residual Flow repository was removed.
  • v1.2 (Apr 5, 2022)

    The code was reorganized to be more hierarchical and readable. Also, all functionality required for Neural Spline Flows was added to the repository to remove the dependency on the original Neural Spline Flow repository.

    Furthermore, the following features were introduced:

    • Class to reverse a flow layer
    • Class to build a chain of flow layers
    • Affine Masked Autoregressive Flows (MAF)
    • Circular Neural Spline Flows
    • Neural Spline Flows with circular and non-circular coordinates
  • v1.1 (Feb 6, 2022)

  • v1.0 (Nov 25, 2021)

    Normalizing flow library comprising the most popular flow architectures, among them Real NVP, Glow, Neural Spline Flow, and Residual Flow.

Owner
Vincent Stimper
PhD student in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems