Generative Flow Networks

Overview

Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation

Implementation for our paper, submitted to NeurIPS 2021 (also check this high-level blog post).

This is a minimal working version of the code used for the paper, extracted from the internal repository of the Mila Molecule Discovery project. The original commit history is not preserved here, but credit for this code goes to @bengioe, @MJ10 and @MKorablyov (see paper).

Grid experiments

Requirements for base experiments:

  • torch numpy scipy tqdm

Additional requirements for active learning experiments:

  • botorch gpytorch

Molecule experiments

Additional requirements:

  • pandas rdkit torch_geometric h5py
  • a few biochemistry programs, see mols/Programs/README

We found rdkit in particular easier to install through (mini)conda, e.g. conda install -c conda-forge rdkit. torch_geometric has non-trivial installation instructions; follow its official guide for your torch and CUDA versions.

We compress the 300k-molecule dataset to save space. To uncompress it, run cd mols/data/; gunzip docked_mols.h5.gz.
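
If you prefer to stay in Python, a minimal equivalent sketch (same paths as the command above):

    import gzip
    import shutil

    # equivalent to `cd mols/data/; gunzip docked_mols.h5.gz`
    with gzip.open('mols/data/docked_mols.h5.gz', 'rb') as f_in, \
            open('mols/data/docked_mols.h5', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)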

We omit the docking routines, since they are part of a separate contribution still to be submitted. They are available on request; please do reach out to [email protected] or [email protected].

Comments
  • Error: Tensors used as indices must be long, byte or bool tensors

    Dear authors, thanks for sharing the code for this wonderful work!

    I am currently trying to run the basic gflownet training code in the molecular docking setting by running python gflownet.py under the mols directory. I have unzipped the dataset, installed all the requirements, and successfully run the model in the toy grid environment.

    However, I get the following error when running in the mols environment:

    Exception while sampling: tensors used as indices must be long, byte or bool tensors

    Looking into it further, the problem seems to occur around line 70 in model_block.py. I tried printing out stem_block_batch_idx, but it doesn't look like it can be converted to long type directly, which indexing requires:

    tensor([[-8.4156e-02, -4.2767e-02, -7.2483e-02,  ..., -5.6014e-02, 1.5187e-02, -6.5561e-02]], device='cuda:0', dtype=torch.float64, grad_fn=<...>)

    I wonder if I am running the code in the correct way. Is this index correct, and if so, do you know what is happening?
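
    For context, PyTorch raises this error whenever a floating-point tensor is used for indexing; a minimal reproduction with generic tensors (not the repo's actual variables):

        import torch

        emb = torch.randn(4, 8)
        idx = torch.tensor([0.0, 2.0])   # float-typed indices trigger the error
        # emb[idx] raises: tensors used as indices must be long, byte or bool tensors
        rows = emb[idx.long()]           # casting to long makes the indexing legal
        print(rows.shape)                # torch.Size([2, 8])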

    opened by wenhao-gao 3
  • About Reproducibility Issues

    Hi there,

    Thank you very much for sharing the source codes.

    For reproducibility, I modified the code as follows:

    https://github.com/GFNOrg/gflownet/blob/831a6989d1abd5c05123ec84654fb08629d9bc38/mols/gflownet.py#L84

    ---> self.train_rng = np.random.RandomState(142857)

    and added

    torch.manual_seed(142857)
    torch.cuda.manual_seed(142857)
    torch.cuda.manual_seed_all(142857)
    

    However, I encountered an issue: running it more than 3 times with the same random seed gives noticeably different results (although they are close). I didn't modify any other parts, except to address package compatibility issues.

    0 [1152.62, 112.939, 23.232] 100 [460.257, 44.253, 17.728] 200 [68.114, 6.007, 8.045]

    0 [1151.024, 112.603, 24.993] 100 [471.219, 45.525, 15.964] 200 [66.349, 6.174, 4.607]

    0 [1263.066, 124.094, 22.128] 100 [467.747, 44.899, 18.76] 200 [61.992, 5.715, 4.841]

    I am wondering whether you encountered such an issue before.
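
    For what it's worth, seeding the RNGs alone is often not enough for bitwise-identical GPU runs; a sketch of the additional PyTorch settings usually needed (standard PyTorch flags, not specific to this repo):

        import torch

        # stop cuDNN from benchmarking kernels, which can pick different ones per run
        torch.backends.cudnn.benchmark = False
        # request deterministic cuDNN kernels where they exist
        torch.backends.cudnn.deterministic = True

    Even with these set, some CUDA ops (e.g. the scatter/index_add reductions used by message-passing layers) remain non-deterministic, which could explain small run-to-run differences.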

    Best,

    Dong

    opened by dongqian0206 2
  • Reward signal for grid environment?

    Hello, I'm a bit confused about where this reward function comes from: https://github.com/GFNOrg/gflownet/blob/831a6989d1abd5c05123ec84654fb08629d9bc38/grid/toy_grid_dag.py#L97

    My understanding is that the reward should be as defined in the paper (https://i.samkg.dev/2233/firefox_xGnEaZVBlN.png). Are these two equivalent in some way?
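
    For readers who can't load the screenshot: the paper's grid reward is a sum of indicator-product terms that peaks near the corners of the grid. A hedged sketch of that family of functions (the constants and thresholds here are illustrative, not necessarily the paper's or the repo's):

        import numpy as np

        def corner_reward(x, r0=1e-2, r1=0.5, r2=2.0):
            # x: coordinates rescaled to [-1, 1]^ndim
            ax = np.abs(x)
            return (r0
                    + r1 * (ax > 0.5).prod(-1)                  # broad plateau near each corner
                    + r2 * ((ax > 0.6) & (ax < 0.8)).prod(-1))  # narrower peak inside the plateau

    Whether the linked line and the paper's formula agree then comes down to matching these thresholds and constants.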

    opened by SamKG 1
  • Potential bug with `FlowNetAgent.sample_many`

    Hi there!

    Thanks for sharing the code, and I just wanted to say I've enjoyed your paper. I was reading your code and noticed that there might be a subtle bug in the grid-env DAG script. I might also have read it wrong...

    https://github.com/bengioe/gflownet/blob/dddfbc522255faa5d6a76249633c94a54962cbcb/grid/toy_grid_dag.py#L316-L320

    On line 316, we zip two things: zip([e for d, e in zip(done, self.envs) if not d], acts)

    Here done is a vector of bools of length batch-size, self.envs is a list of GridEnv of length n-envs or buffer-size, and acts is a vector of ints of length (n-envs or buffer-size,).

    By default, all the lengths of the above objects should be 16.

    I was reading through the code and noticed that if any of the elements in done are True, then on line 316 we filter them out with if not d. If envs[0] were "done", we would have a list of 15 envs, basically self.envs[1:]. When you then zip the actions with this shorter envs list, the actions are aligned incorrectly: we end up with self.envs[1:] paired with actions act[:-1]. As a result, step is now length 15, and on the next line we again pair the wrong actions of length 16 with our step list of length 15.

    Perhaps we need to filter act based on the done vector, e.g. act = act[~done] after line 316?
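
    To illustrate the pairing shift with a toy example (hypothetical names, not the repo's):

        done = [True, False, False]      # the first env has already finished
        envs = ['env0', 'env1', 'env2']
        acts = ['a0', 'a1', 'a2']        # one action per env, in the same order

        # filtering envs but not acts shifts every pairing by one:
        pairs = list(zip([e for d, e in zip(done, envs) if not d], acts))
        print(pairs)  # [('env1', 'a0'), ('env2', 'a1')] -- env1 receives env0's action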

    Maybe I've got this wrong, so apologies for the noise if that's the case, but thought I'd leave a note in case what I'm suggesting is the case.

    All the best!

    opened by fedden 1
  • Clarification regarding the number of molecular building blocks. Why are they different from JT-VAE?

    Hello,

    First, I really enjoyed reading the paper. Amazing work!

    I have a question regarding the number of building blocks used for generating small molecules. Appendix A.3 of the paper states that there are a total of 105 unique building blocks (after accounting for different attachment points) and that they were obtained by the process suggested in the JT-VAE paper (Jin et al. (2018)). However, in the JT-VAE paper, the total vocabulary size obtained from the same ZINC dataset is $|\chi| = 780$. My understanding is that the two vocabularies should be the same. If that is correct, why is the number of building blocks different here? What am I missing? If they are not the same, can you please explain the difference?

    Thank you so much for your help

    opened by Srilok 1
Releases: paper_version
Owner: Emmanuel Bengio