FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Copyright © Meta Platforms, Inc.

This source code is licensed under the MIT license found in the LICENSE.txt file in the root directory of this source tree.

Overview

FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Installing

An easy way to get started is to install from source:

git clone https://github.com/facebookincubator/flowtorch.git
cd flowtorch
pip install -e .
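
Once installed, a minimal example of defining a flow looks roughly like this (a sketch based on the sample from the website's landing page; the exact API may differ between versions):

import torch
import flowtorch.bijectors as bij
import flowtorch.distributions as dist

# A lazily specified bijector plus a standard-normal base distribution
bijector = bij.AffineAutoregressive()
base_dist = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1
)
flow = dist.Flow(base_dist, bijector)

x = flow.sample(torch.Size([100]))  # sampling from the flow
log_p = flow.log_prob(x)            # density evaluation (the training signal)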

Further Information

We refer you to the FlowTorch website for more information about installation, using the library, and becoming a contributor.

Comments
  • Ported Jacobian and inverse tests for Bijector from Pyro

    This PR ports the two most important (and most complex) tests for bijectors from Pyro: comparing the numerical Jacobian to the analytical one, and confirming that the Bijector.inverse method is correct for invertible bijectors.
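
    A hedged sketch of the two checks for an element-wise bijector (forward, inverse, and log_abs_det_jacobian follow the Bijector interface; the test logic here is illustrative rather than Pyro's exact code):

    import torch

    def check_jacobian_and_inverse(bijector, x, atol=1e-5):
        # For an element-wise bijector the Jacobian is diagonal, so
        # log|det(J)| = sum_i log|dy_i/dx_i|, computable via autograd.
        x = x.clone().requires_grad_(True)
        y = bijector.forward(x)
        (diag,) = torch.autograd.grad(y.sum(), x)
        numerical = diag.abs().log().sum()
        analytical = bijector.log_abs_det_jacobian(x, y).sum()
        assert torch.allclose(numerical, analytical, atol=atol)

        # Inverse check: x should round-trip through forward then inverse.
        assert torch.allclose(bijector.inverse(y.detach()), x.detach(), atol=atol)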

    enhancement CLA Signed Merged 
    opened by stefanwebb 20
  • Lazy parameters and bijectors with metaclasses

    Motivation

    Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in #57.

    Changes proposed

    The purpose of this PR is to showcase a prototype solution that uses metaclasses to express delayed instantiation. It works by intercepting .__call__ when a class is instantiated: if only partial arguments are given to .__init__, a lazy wrapper around the class and its bound arguments is returned; if all arguments are given, the actual object is initialized. Additional arguments can be bound to the lazy wrapper, which only becomes non-lazy once all arguments are filled in (or have defaults).
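
    A minimal, self-contained sketch of this metaclass pattern (the class names here are illustrative, not FlowTorch's actual implementation):

    import functools
    import inspect

    class LazyMeta(type):
        # Intercept instantiation: return a lazy partial when required
        # arguments are missing, otherwise construct the real object.
        def __call__(cls, *args, **kwargs):
            try:
                # None stands in for `self` when test-binding the arguments.
                inspect.signature(cls.__init__).bind(None, *args, **kwargs)
            except TypeError:
                return functools.partial(cls, *args, **kwargs)
            return super().__call__(*args, **kwargs)

    class Bijector(metaclass=LazyMeta):
        def __init__(self, param, shape):
            self.param = param
            self.shape = shape

    lazy = Bijector(param=1.0)   # `shape` missing: a lazy wrapper is returned
    concrete = lazy(shape=(2,))  # all arguments bound: the real object is built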

    enhancement CLA Signed refactor 
    opened by stefanwebb 16
  • Docusaurus v2/API docs integration + Meta rebranding

    Motivation

    The API docs are currently lacking content and use an inflexible system to specify which modules to include.

    Also, I am unable to make the repo public until I have rebranded FB as Meta Platforms.

    Changes proposed

    I have integrated the new API markdown autogen tool, with Docusaurus v2 styling, into the website. It uses a general configuration file with regular expressions to specify what to include/exclude, and displays a box/label for each symbol, plus its signature (if it has one) and its raw docstring.

    The remaining tasks are parsing/formatting the docstring, adding symbol lists to module pages, and some small cosmetic fixes.

    I also completed the Meta rebranding in the copyright notices etc.

    Test Plan

    cd website
    yarn build
    yarn serve
    
    CLA Signed 
    opened by stefanwebb 12
  • Autogenerating imports for `flowtorch.parameters` and `flowtorch.bijectors`

    Motivation

    It is tiresome to have to add new components to `__init__.py` for bijectors, distributions, and parameters. We should be able to generate it automatically!

    Changes proposed

    Autogen for distributions was completed in a previous PR. This one completes it for parameters and bijectors.

    I also uncovered and fixed a bug in how utils.list_bijectors(), utils.list_distributions(), and utils.list_parameters() were working.
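
    One possible autogen approach, as a hedged sketch (only the package names come from this repo; the scanning logic is illustrative):

    import importlib
    import inspect
    import pkgutil

    def autogen_exports(package_name):
        # Scan each submodule of a package and collect its public classes.
        package = importlib.import_module(package_name)
        exports = {}
        for info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(f"{package_name}.{info.name}")
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if obj.__module__ == module.__name__ and not name.startswith("_"):
                    exports[name] = obj
        return exports

    # e.g. in flowtorch/bijectors/__init__.py:
    # globals().update(autogen_exports(__name__))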

    CLA Signed Merged 
    opened by stefanwebb 10
  • First sample scripts

    Motivation

    We would like a number of example scripts to demonstrate how to use FlowTorch.

    Changes proposed

    I have created a new folder, /samples, and added the simple example from the landing page of the website. At the moment it is a Python script, although I think in the future they will be converted into Jupyter notebooks that are mirrored on Colab.

    Test Plan

    The sample plots figures that demonstrate learning is working.

    CLA Signed Merged 
    opened by stefanwebb 10
  • Parameterless bijectors

    This PR migrates code from pyro.distributions.transforms and torch.distributions.transforms for parameterless bijectors.

    These are easy, so I wanted to get them all over now!

    enhancement CLA Signed Merged 
    opened by stefanwebb 10
  • Empty params class: `flowtorch.params.Empty`

    This PR adds a flowtorch.params.Empty class that will be used for flowtorch.bijectors.FixedBijector bijectors like Sigmoid, Exp, etc. that don't have any learnable parameters.

    I have fixed a number of other things in order to get all the tests running!
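
    For illustration (assumed behaviour based on this PR's description): a fixed bijector has nothing to optimize, so its params slot is Empty.

    import flowtorch.bijectors as bijectors

    sigmoid = bijectors.Sigmoid()  # no learnable parameters: pairs with flowtorch.params.Empty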

    enhancement CLA Signed 
    opened by stefanwebb 10
  • Autoregressive Bijector type

    Motivation

    See #22 and #6.

    Changes proposed

    This PR implements a new bijectors.Autoregressive meta bijector. We then refactor bijectors.AffineAutoregressive as a class that inherits from bijectors.Affine and bijectors.Autoregressive.

    This change makes it easy to implement new autoregressive bijectors, like spline and neural autoregressive flows. All you have to do is implement the corresponding element-wise operator and inherit from the two base classes.
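
    A hedged sketch of the pattern this PR enables (class names follow the PR; whether the element-wise Spline op factors exactly this way is an assumption):

    import flowtorch.bijectors as bij

    class SplineAutoregressive(bij.Spline, bij.Autoregressive):
        # The element-wise spline operator comes from bij.Spline;
        # the autoregressive conditioning comes from bij.Autoregressive.
        pass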

    CLA Signed Merged 
    opened by stefanwebb 9
  • Test that type hints are present for all Bijector classes' methods

    Motivation

    mypy is excellent for checking types and preventing bugs; however, it is not applied when type hints aren't declared for a function or method. Enforcing this via a unit test should lead to better code!

    Changes proposed

    I've written a unit test that raises an exception when a method's arguments do not have type hints. I've also added stubs for additional tests on a Bijector/Params definition.
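
    A minimal sketch of such a test (class discovery is left out; the assertion logic is illustrative):

    import inspect

    def assert_methods_annotated(cls):
        # Raise if any method argument on `cls` is missing a type hint.
        for name, method in inspect.getmembers(cls, inspect.isfunction):
            for pname, param in inspect.signature(method).parameters.items():
                if pname in ("self", "cls"):
                    continue
                assert param.annotation is not inspect.Parameter.empty, (
                    f"{cls.__name__}.{name} is missing a type hint for '{pname}'"
                )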

    CLA Signed unit tests 
    opened by stefanwebb 9
  • Fixes pypi release, configured against test pypi

    Summary: Separates out a pypi release workflow based on GitHub releases (these create tags, so we don't get dev version numbers from setuptools_scm).

    Differential Revision: D28419348

    CLA Signed fb-exported Merged 
    opened by feynmanliang 9
  • CI installs stable PyTorch release

    Our CI was installing the nightly build of PyTorch, since 1.8.1 hadn't been released and we needed newly developed features in torch.distributions.constraints.

    Now that 1.8.1 is out, I have changed the config file to install the stable release.

    This PR contains the same changes as the flowtorch.params.Empty one so that it can pass tests. @feynmanliang, could you please merge the other one first? Then only the relevant changes should appear here.

    CLA Signed Merged 
    opened by stefanwebb 9
  • Multivariate Bijectors Tutorial issue

    Issue Description

    The Multivariate Bijectors tutorial notebook has an issue: someone hit a keyboard interrupt while running it, so it's not complete.

    Steps to Reproduce

    No steps to reproduce needed; the incomplete output is visible straight from the notebook on GitHub (https://github.com/facebookincubator/flowtorch/blob/main/tutorials/multivariate_bijections.ipynb).

    Expected Behavior

    Users should expect the tutorial to be complete.

    opened by maulberto3 1
  • Issue with log_prob values not exported to Cuda

    Issue Description

    Not able to get all of the data onto the CUDA device. The problem occurs at loss = -dist_y.log_prob(data).mean(); it looks like the data can't be transferred to the GPU. Do we need to register the data as a buffer and work around it?

    Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument mat1 in method wrapper_addmm)

    Steps to Reproduce

    dataset = torch.tensor(data_train, dtype=torch.float)
    trainloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    for steps in range(t_steps):
        step_loss = 0
        for i, data in enumerate(trainloader):
            data = data.to(device)
            if i == 0:
                print(data.shape)
            try:
                optimizer.zero_grad()
                loss = -dist_y.log_prob(data).mean()
                loss.backward()
                optimizer.step()
            except ValueError as e:
                print('Error:', e)
                print('Skipping that batch')
    

    Expected Behavior

    Matrices should be computed on the CUDA device, without a conflict between data living in two different places.
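
    A likely resolution, offered here as an assumption rather than part of the original report: the flow's parameters and base distribution are still on the CPU and must be moved to the data's device. Since Bijector objects are nn.Modules (see release 0.7 below), something like the following sketch should work (`bijector`, `device`, and `event_size` are assumed from the reporter's setup):

    import torch
    import flowtorch.distributions

    bijector = bijector.to(device)  # Bijectors are nn.Modules, so .to() moves their parameters
    base_dist = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(event_size, device=device),
                                   torch.ones(event_size, device=device)), 1)
    dist_y = flowtorch.distributions.Flow(base_dist, bijector)
    loss = -dist_y.log_prob(data).mean()  # every tensor now lives on `device`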

    System Info

    Please provide information about your setup

    • PyTorch Version (run print(torch.__version__))
    • Python version

    opened by bigmb 2
  • [WIP] Conv1x1

    [WIP] Conv1x1

    Motivation

    Proposes a 1x1 convolution bijector.

    Test Plan

    from flowtorch.bijectors import Conv1x1Bijector
    import torch


    def test(LU_decompose, zero_init):
        c = Conv1x1Bijector(LU_decompose=LU_decompose, zero_init=zero_init)
        c = c(shape=torch.Size([10, 20, 20]))
        for p in c.parameters():
            p.data += torch.randn_like(p) / 5

        x = torch.randn(1, 10, 20, 20)
        y = c.forward(x)
        yp = y.detach_from_flow()
        xp = c.inverse(yp)
        # The inverse should reconstruct the input to within tolerance
        assert (xp / x - 1).norm() < 1e-2


    for LU_decompose in (True, False):
        for zero_init in (True, False):
            test(LU_decompose, zero_init)
    
    

    Important

    This PR is branched out from the coupling layer. I'll update the branch once the review of the coupling layer is completed.

    CLA Signed 
    opened by vmoens 0
  • Split Bijector

    Motivation

    We introduce the Split Bijector, which splits a tensor in half, processes one half through a sequence of transformations, and normalizes the other.

    Changes proposed

    The new class first splits the tensor, then passes the outputs to _param_fn and then to the transform itself. Introducing the _forward_pre_ops and _inverse_pre_ops methods is necessary because, in the inverse case, we need to first pass the input through the transform's inverse and then through the convolutional layer that gives us the normalizing constants. This breaks the _param_fn(...) -> _inverse(...) logic, since we need to do something before _param_fn. As this might also be the case for the forward pass, we introduced a similar _forward_pre_ops method.
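
    A hedged sketch of that ordering (_param_fn, _forward_pre_ops, and _inverse_pre_ops mirror the PR text; _inverse_op and the import path are hypothetical stand-ins):

    from flowtorch.bijectors import Bijector

    class SplitBijector(Bijector):
        def _inverse(self, y, context=None):
            y = self._inverse_pre_ops(y)         # invert the wrapped transform first
            params = self._param_fn(y, context)  # conv layer yields the normalizing constants
            return self._inverse_op(y, params)   # then apply the element-wise inverse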

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    • [ ] The title of my pull request is a short description of the requested changes.
    CLA Signed 
    opened by vmoens 1
  • Split bijector

    A splitting bijector splits an input x into two equal parts, x1 and x2 (see, for instance, the Glow paper).

    Of those, only x1 is passed to the remaining part of the flow; x2, on the other hand, is "normalized" by a location and scale determined by x1. The transform usually looks like this:

    def _forward(self, x):
        x1, x2 = x.chunk(2, -1)
        loc, scale = some_parametric_fun(x1)
        x2 = (x2 - loc) / scale
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()  # since x2 will disappear, we can include its prior log-lik here
        return x1, log_abs_det_jacobian
    

    The _inverse is done like this

    def _inverse(self, y):
        x1 = y
        loc, scale = some_parametric_fun(x1)
        x2 = torch.randn_like(x1)  # since we fit x2 to a gaussian in forward
        log_abs_det_jacobian = scale.reciprocal().log().sum()
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()
        x2 = x2 * scale + loc
        return torch.cat([x1, x2], -1), log_abs_det_jacobian
    

    However, I personally find this coding very confusing. First and foremost, it messes with the logic y = flow(x) -> dist.log_prob(y). What if we don't want a normal? That seems orthogonal to the bijector's responsibility to me. Second, it includes in the LADJ a normal log-likelihood, which should come from the prior. Third, it makes the _inverse stochastic, which should not be the case. Finally, it has an input of, say, dimension d and an output of dimension d/2 (and conversely for _inverse).

    For some models (e.g. Glow), when generating data, we don't sample from a Gaussian with unit variance but from a Gaussian with some decreased temperature (e.g. an SD of 0.9 or something). With this logic, we'd have to tell every split layer in a flow to modify the self.normal scale!

    What I would suggest is this: we could use SplitBijector as a wrapper around another bijector. It would work like this:

    class SplitBijector(Bijector):
        def __init__(self, bijector):
             ...
             self.bijector = bijector
    
        def _forward(self, x):
            x1, x2 = x.chunk(2, -1)
            loc, scale = some_parametric_fun(x1)
            y2 = (x2 - loc) / scale
            log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
            y1 = self.bijector.forward(x1)
            log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
            y = torch.cat([y1, y2], -1)  # concatenate along the split dimension
            return y, log_abs_det_jacobian
    

    The _inverse would follow, as sketched below. Of course, the wrapped bijector must have the same input and output space! That way, we solve all of our problems: the input and output spaces match, nothing weird happens with a nested normal log-density, the prior density is only evaluated outside the bijector, and one can tweak it at will without caring about what happens inside the bijector.
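
    A hedged sketch of that corresponding _inverse (same placeholder some_parametric_fun; the log|det(J)| bookkeeping mirrors the forward pass above):

    def _inverse(self, y):
        y1, y2 = y.chunk(2, -1)
        x1 = self.bijector.inverse(y1)
        loc, scale = some_parametric_fun(x1)
        x2 = y2 * scale + loc                     # undo the normalization of x2
        log_abs_det_jacobian = scale.log().sum()  # Jacobian of de-normalizing x2
        log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
        return torch.cat([x1, x2], -1), log_abs_det_jacobian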

    enhancement 
    opened by vmoens 1
Releases (0.8)
  • 0.8 (Apr 27, 2022)

    • Fixed a bug in distributions.Flow.parameters() where it returned duplicate parameters
    • Several tutorials converted from .mdx to .ipynb format in anticipation of new tutorial system
    • Removed yarn.lock
  • 0.7 (Apr 25, 2022)

    This release adds two minor features.

    A new class, flowtorch.bijectors.Invert, can be used to swap the forward and inverse operators of a Bijector. This is useful for turning, for example, Inverse Autoregressive Flow (IAF) into Masked Autoregressive Flow (MAF).

    Bijector objects are now nn.Modules, which, amongst other benefits, allows easy saving and loading of state.
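
    A brief hedged sketch combining both features (the lazy instantiation pattern and the IAF orientation of AffineAutoregressive are assumptions):

    import torch
    import flowtorch.bijectors as bij

    iaf = bij.AffineAutoregressive()
    maf = bij.Invert(iaf)             # swap forward and inverse: IAF -> MAF
    maf = maf(shape=torch.Size([2]))  # bind a shape to instantiate

    torch.save(maf.state_dict(), "maf.pt")     # Bijectors are nn.Modules,
    maf.load_state_dict(torch.load("maf.pt"))  # so state saves and loads as usual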

  • 0.6 (Mar 3, 2022)

    This small release fixes a bug in bijectors.ops.Spline where the sign of log(det(J)) was inverted for the .inverse method. It also fixes the unit tests so that they pick up this error in the future.

  • 0.5 (Feb 3, 2022)

    In this release, we add caching of intermediate values for Bijectors.

    What this means is that you can often reduce computation by calculating log|det(J)| at the same time as y = f(x). It's also useful for performing variational inference with Bijectors that don't have an explicit inverse. The mechanism by which this is achieved is a subclass of torch.Tensor called BijectiveTensor that bundles together (x, y, context, bijector, log_det_J).
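
    A hedged sketch of the caching behaviour (the exact instantiation pattern and method names may differ between versions):

    import torch
    import flowtorch.bijectors as bij

    b = bij.AffineAutoregressive()(shape=torch.Size([2]))
    x = torch.randn(5, 2)
    y = b.forward(x)                    # y is a BijectiveTensor caching x and log|det(J)|
    ldj = b.log_abs_det_jacobian(x, y)  # served from the cache rather than recomputed
    x_back = b.inverse(y)               # x can likewise be recovered from the cache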

    Special shout out to @vmoens for coming up with this neat solution and taking the implementation lead! Looking forward to your future contributions 🥳

  • 0.4 (Nov 18, 2021)

    Implementations of Inverse Autoregressive Flow and Neural Spline Flow.

    Basic content for website.

    Some unit tests for bijectors and distributions.

Owner
Meta Incubator
We work hard to contribute our work back to the web, mobile, big data, & infrastructure communities. NB: members must have two-factor auth.