GT4SD, an open-source library to accelerate hypothesis generation in the scientific discovery process.

Overview

GT4SD (Generative Toolkit for Scientific Discovery)


The GT4SD (Generative Toolkit for Scientific Discovery) is an open-source platform to accelerate hypothesis generation in the scientific discovery process. It provides a library for making state-of-the-art generative AI models easier to use.

Installation

pip

You can install gt4sd directly from GitHub:

pip install git+https://github.com/GT4SD/gt4sd-core

Development setup & installation

If you would like to contribute to the package, we recommend the following development setup. Clone the gt4sd-core repository and install it in editable mode:

git clone [email protected]:GT4SD/gt4sd-core.git
cd gt4sd-core
conda env create -f conda.yml
conda activate gt4sd
pip install -e .

Learn more in CONTRIBUTING.md

Supported packages

Beyond implementing various generative modeling inference and training pipelines, GT4SD is designed to provide a high-level API that implements a harmonized interface for several existing packages:

  • GuacaMol: inference pipelines for the baseline models.
  • MOSES: inference pipelines for the baseline models.
  • TAPE: encoder modules compatible with the protein language models.
  • PaccMann: inference pipelines for all algorithms of the PaccMann family as well as training pipelines for the generative VAEs.
  • transformers: training and inference pipelines for generative models from HuggingFace Models.

Using GT4SD

Running inference pipelines

Running an algorithm is as easy as typing:

from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRLProteinBasedGenerator, PaccMannRL
)
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
# algorithm configuration with default parameters
configuration = PaccMannRLProteinBasedGenerator()
# instantiate the algorithm for sampling
algorithm = PaccMannRL(configuration=configuration, target=target)
items = list(algorithm.sample(10))
print(items)

Or you can use the ApplicationsRegistry to run an algorithm instance using a serialized representation of the algorithm:

from gt4sd.algorithms.registry import ApplicationsRegistry
target = 'MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTT'
algorithm = ApplicationsRegistry.get_application_instance(
    target=target,
    algorithm_type='conditional_generation',
    domain='materials',
    algorithm_name='PaccMannRL',
    algorithm_application='PaccMannRLProteinBasedGenerator',
    generated_length=32,
    # include additional configuration parameters as **kwargs
)
items = list(algorithm.sample(10))
print(items)
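
Besides instantiating a specific application, the registry can also list everything that is registered. A minimal sketch, assuming the list_available() helper exposed by recent gt4sd versions (the exact content of the returned entries may vary across versions):

from gt4sd.algorithms.registry import ApplicationsRegistry

# each entry is a dictionary describing one registered algorithm application
for entry in ApplicationsRegistry.list_available():
    print(entry)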

Running training pipelines via the CLI command

GT4SD provides a trainer client based on the gt4sd-trainer CLI command. The trainer currently supports training pipelines for language modeling (language-modeling-trainer), PaccMann (paccmann-vae-trainer) and Granular (granular-trainer, multimodal compositional autoencoders).

$ gt4sd-trainer --help
usage: gt4sd-trainer [-h] --training_pipeline_name TRAINING_PIPELINE_NAME
                     [--configuration_file CONFIGURATION_FILE]

optional arguments:
  -h, --help            show this help message and exit
  --training_pipeline_name TRAINING_PIPELINE_NAME
                        Training type of the converted model, supported types:
                        granular-trainer, language-modeling-trainer, paccmann-
                        vae-trainer. (default: None)
  --configuration_file CONFIGURATION_FILE
                        Configuration file for the trainining. It can be used
                        to completely by-pass pipeline specific arguments.
                        (default: None)

To launch a training, you have two options.

You can either specify the training pipeline and the path of a configuration file that contains the needed training parameters:

gt4sd-trainer  --training_pipeline_name ${TRAINING_PIPELINE_NAME} --configuration_file ${CONFIGURATION_FILE}

Or you can provide the needed parameters directly as arguments:

gt4sd-trainer --training_pipeline_name language-modeling-trainer --type mlm --model_name_or_path mlm --training_file /path/to/train_file.jsonl --validation_file /path/to/valid_file.jsonl

To get more info on the arguments of a specific training pipeline, simply type:

gt4sd-trainer --training_pipeline_name ${TRAINING_PIPELINE_NAME} --help

References

If you use gt4sd in your projects, please consider citing the following:

@software{GT4SD,
  author = {GT4SD Team},
  month = {2},
  title = {{GT4SD (Generative Toolkit for Scientific Discovery)}},
  url = {https://github.com/GT4SD/gt4sd-core},
  version = {main},
  year = {2022}
}

License

The gt4sd codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

Comments
  • cli-upload

    Add upload functionality to the command line. It gives the user the possibility to upload specific artifacts to a server.

    Given a specific version for an algorithm:

    • Check if that version is already on the server, i.e. check if the folder bucket/algorithm_type/algorithm_name/algorithm_application/version/ exists.
    • If yes, tell the user and stop the upload.
    • If not, upload all the files in that version.

    cli-upload relies on minio and has been tested locally using docker-compose. It can be used to upload to a cloud or a local server.
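
    A minimal sketch of the described check-then-upload logic using the minio Python client; the bucket name, endpoint, credentials and local path are placeholders, not the actual gt4sd implementation:

    from pathlib import Path
    from minio import Minio

    client = Minio("localhost:9000", access_key="minio", secret_key="minio123", secure=False)
    prefix = "algorithm_type/algorithm_name/algorithm_application/version/"

    # check whether the version folder already exists on the server
    existing = list(client.list_objects("gt4sd-artifacts", prefix=prefix, recursive=True))
    if existing:
        print(f"version already on the server ({len(existing)} files), skipping upload")
    else:
        # otherwise upload all files of that version
        for path in Path("/tmp/test_cli_upload").rglob("*"):
            if path.is_file():
                client.fput_object("gt4sd-artifacts", prefix + path.name, str(path))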


    How to use cli-upload

    Following the example in the README (in the Saving a trained algorithm for inference via the CLI command section) and assuming a trained model in /tmp/test_cli_upload, run:

    gt4sd-upload --training_pipeline_name paccmann-vae-trainer --model_path /tmp/test_cli_upload --training_name fast-example --target_version fast-example-v0 --algorithm_application PaccMannGPGenerator

    opened by georgosgeorgos 15
  • MOSES VAE from Guacamol training reconstruction is "incorrect"

    Describe the bug: The VAE in GT4SD uses the wrapper of the MOSES VAE from GuacaMol. Unfortunately, the decoding training step of the MOSES VAE is bugged.

    More detail: The problem arises from the definition of the forward_decoder method:

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]
    
        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)
    
        z_0 = z.unsqueeze(1).repeat(1, x_emb.size(1), 1)
        x_input = torch.cat([x_emb, z_0], dim=-1)  # <--- PROBLEM 1
        x_input = nn.utils.rnn.pack_padded_sequence(x_input, lengths, batch_first=True)
    
        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)
    
        output, _ = self.decoder_rnn(x_input, h_0)
    
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)
    
        recon_loss = F.cross_entropy(  # <--- PROBLEM 2
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad
        )
    
        return recon_loss
    

    Namely, the reconstruction step is wrong in two spots:

    1. construction of the true input: x_input = torch.cat([x_emb, z_0], dim=-1). In the visual representation of a typical RNN, the true token feeds in from the "bottom" of the cell and the previous hidden state from the "left". In this implementation, the reparameterized latent vector z is fed in both from the "left" (normal) and the "bottom" (atypical). Fix: this line should be removed.
    2. calculation of the reconstruction loss: recon_loss = F.cross_entropy(...). This reconstruction loss is calculated as the per-token loss of the input batch (i.e., the mean over a batch of tokens) because the default reduction in F.cross_entropy is "mean". In turn, this results in reconstruction losses that are very low for the VAE, causing the optimizer to ignore the decoder and focus on the encoder. When a VAE focuses too hard on the encoder, you get mode collapse, and that is what happens with the MOSES VAE. Fix: this line should be F.cross_entropy(..., reduction="sum") / len(x); a corrected sketch is shown below.
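
    A sketch of how forward_decoder could look with both fixes applied (note that dropping the concatenation also requires shrinking the decoder RNN input size to the embedding dimension):

    def forward_decoder(self, x, z):
        lengths = [len(i_x) for i_x in x]

        x = nn.utils.rnn.pad_sequence(x, batch_first=True, padding_value=self.pad)
        x_emb = self.x_emb(x)

        # Fix 1: feed only the token embeddings from the "bottom";
        # z enters solely through the initial hidden state below
        x_input = nn.utils.rnn.pack_padded_sequence(x_emb, lengths, batch_first=True)

        h_0 = self.decoder_lat(z)
        h_0 = h_0.unsqueeze(0).repeat(self.decoder_rnn.num_layers, 1, 1)

        output, _ = self.decoder_rnn(x_input, h_0)
        output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        y = self.decoder_fc(output)

        # Fix 2: sum the token-level losses and normalize per molecule,
        # so the reconstruction term is not averaged away per token
        recon_loss = F.cross_entropy(
            y[:, :-1].contiguous().view(-1, y.size(-1)),
            x[:, 1:].contiguous().view(-1),
            ignore_index=self.pad,
            reduction="sum",
        ) / len(x)

        return recon_loss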

    To reproduce

    1. Problem 1 is not a "problem" so much as it is highly atypical to structure a VAE like this. I can't say if it results in any actual problems, but it simply shouldn't be there
    2. Problem 2 can be observed with two experiments:
      1. Using PCA with two dimensions, plot the embeddings of a random batch z ~ q(z|x) and a sample from the standard normal distribution z ~ N(0, I). The embeddings from the encoder will look like a point at (0, 0) compared to the samples from the standard normal
      2. Measure the reconstruction accuracy x_r ~ p(x | z ~ q(z | x_0)). In a well-trained VAE, sum(x_r == x_0 for x_0 in xs) / len(xs) should be above 50%. This VAE is generally fairly low (in my experience).
    bug 
    opened by davidegraff 12
  • Improve CLA workflow

    Using actions to commit to other people's forks was not something super easy to do, so I'm settling for a bit more verbosity and automation.

    The issue will be closed with a comment pointing to the commit that added the contributor. There is a notice to merge this into a PR.

    Therefore, there is no assignment of the issue anymore.

    Looks like this: https://github.com/C-nit/gt4sd-core/issues/9 and can also be triggered in a different way: https://github.com/C-nit/gt4sd-core/issues/11

    opened by C-nit 11
  • feat: Support in RT Trainer for multiple entities.

    Solving #143 by expanding the Regression Transformer trainer to support multi-entity discriminations, i.e., support the multientity_cg collator from the RT repo.

    Signed-off-by: Nicolai Ree [email protected]

    opened by NicolaiRee 9
  • feat: property_predictors in scorer

    • Implement PropertyPredictorScorer in domains.materials.property_scorer (using domains.materials.scorer for the implementation caused a circular import).
    • We are simply using the PropertyPredictorRegistry to select a property and its parameters by name and PropertyPredictorScorer to compute a score on a sample w.r.t. a target value.
    • Tests mimic the logic in properties.
    cla-signed 
    opened by georgosgeorgos 8
  • Training pipeline Regression Transformer

    Adding new training pipeline for RT

    • allows finetuning existing models available in the toolkit
    • allows training models from scratch
    • patching LRSchedulers in torchdrug: they are needed for RT training and threw errors
    cla-signed 
    opened by jannisborn 6
  • Added toxicity and affinity to visum notebook

    Signed-off-by: Eduardo [email protected]

    Added toxicity (Tox21 model from https://github.com/PaccMann/paccmann_sarscov2) and affinity (PaccMann predictor) to the notebook.

    @drugilsberg, I am not sure about one specific step in the notebook and I would really appreciate it if you could help: when calling sample on PaccMannGP for the first time, the first line of the output is

    configuring optimization for target: {'qed': {'weight': 1.0}, 'sa': {'weight': 1.0}}

    However, on the second call to the same object (no reinitialization), in section "Sampling and Plotting Molecules with GT4SD", the first line reads:

    configuring optimization for target: {'qed': {}, 'sa': {}}

    Do you know if this has any influence on the molecules being generated? I attached a PDF file with the output for convenience.

    visum-2022-handson-generative-models.pdf

    @helenaMontenegro , the notebook now requires users to download a small model, but I don't think this is a problem.

    cla-signed 
    opened by edux300 5
  • Problem multiprocess in requirements

    The new multiprocess library version (0.70.13) causes problems when installing gt4sd-core in development mode. I had to pin multiprocess==0.70.12.2 to install the library.

    opened by georgosgeorgos 5
  • Torchdrug trainer pipeline

    Implemented torchdrug trainer pipeline. Models can be used via:

    gt4sd-trainer --training_pipeline_name torchdrug-gcpn-trainer -h
    gt4sd-trainer --training_pipeline_name torchdrug-graphaf-trainer -h
    

    Features:

    • [x] Support for the same two models that are available via inference: TorchDrugGCPN and TorchDrugGraphAF.
    • [x] Both models can be trained on all MoleculeDatasets from torchdrug.Datasets. Those are around 20 predefined datasets.
    • [x] Implemented a custom dataset where users can pass their own data
    • [x] In addition to the unit tests, I verified functionalities from the CLI via gt4sd-trainer.

    Problems:

    • [ ] Property optimization does not work, due to instabilities in TorchDrug. I opened an issue and a PR, but we have to wait until they merge it, release a new version, and then we can bump our dependency. The code I wrote here already supports property optimization, but I disabled the unit test for the moment because it would fail due to the TorchDrug issue. See details: https://github.com/DeepGraphLearning/torchdrug/issues/83
    • [x] gt4sd-saving: I ran a test via the CLI but the saving failed. Not sure how problematic this is; here's the error:
    INFO:gt4sd.cli.saving:Selected configuration: ConfigurationTuple(algorithm_type='generation', domain='materials', algorithm_name='TorchDrugGenerator', algorithm_application='TorchDrugGCPN')
    INFO:gt4sd.cli.saving:Saving model version "fast" with the following configuration: <class 'gt4sd.algorithms.generation.torchdrug.core.TorchDrugGCPN'>
    INFO:gt4sd.algorithms.core:TorchDrugGCPN can not save a version based on TorchDrugSavingArguments(model_path='/Users/jab/.gt4sd/runs/', training_name='gcpn_test')
    
    enhancement cla-signed 
    opened by jannisborn 5
  • RT sampling_wrapper to specify a substructure or series of tokens to keep unmasked

    I would like to propose an upgrade on the feature demonstrated in this notebook: https://github.com/GT4SD/gt4sd-core/blob/main/notebooks/regression-transformer-demo.ipynb (see cells 12-14)

    In addition to explicitly specifying tokens_to_mask, one could well imagine that a chemist might want to specify a substructure to mask or to "freeze" (keep unchanged, i.e. unmasked). It might be easier to specify tokens to freeze, as that would amount to simply selecting the part of the string to be kept unmasked. A prototype example is given below.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'tokens_to_freeze': ['N4CCOCC4']
        }
    

    If one could specify a substructure to freeze or to mask, that would potentially be even more advantageous, as it would remove ambiguities when a substructure can be expressed in more than one sequence.

        sampling_wrapper={
            'property_goal': {
                '<logp>': 6.123,
                '<scs>': 1.5
            },
            'fraction_to_mask': 0.6,
            # keep morpholino tail unchanged
            'substructure_to_freeze': ['N1CCOCC1'],
            # explicitly mask benzene ring moiety
            'substructure_to_mask':  ['C1=CC=CC=C1'],
        }
    

    One could use RDKit functionality to identify substructure tokens, as given here: https://www.rdkit.org/docs/Cookbook.html#substructure-matching
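
    For illustration, a minimal RDKit sketch that maps a substructure onto atom indices of a molecule (the example SMILES is hypothetical, and the mapping from matched atoms back to RT string tokens is left out):

        from rdkit import Chem

        # hypothetical molecule carrying a morpholino tail
        mol = Chem.MolFromSmiles("CCOC1=CC=CC=C1N2CCOCC2")
        # morpholine substructure to freeze
        pattern = Chem.MolFromSmarts("N1CCOCC1")
        # tuples of atom indices in mol matching the pattern
        print(mol.GetSubstructMatches(pattern))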

    Regarding the interpretation of 'fraction_to_mask', I would then imagine that it would best be applied to the remaining set of tokens (after tokens_to_freeze and explicit tokens_to_mask are excluded). I hope this makes sense; happy to clarify and exemplify further.

    enhancement 
    opened by OleinikovasV 4
  • Artifact storage for property predictors

    Closes #116

    Now we can also store artifacts for property predictors.

    • New property predictors are tested
    • One thing that remains to be done is to have functions under gt4sd.properties.molecules.functions. At the moment this is not yet supported since it would yield circular imports.
    cla-signed 
    opened by jannisborn 4
  • RT saving pipeline

    Closes #169

    • gt4sd-saving now also supports the RT training pipeline. I implemented the get_filepath_mappings_for_training_pipeline_arguments method. The inference.json is now created inside the RT trainer and also saved in the model folder such that it can later be copied by gt4sd-saving. The Property class was needed as a helper for this, to track some attributes of each property.
    • Expanded the RT example. It now describes the full process of training/finetuning a model, saving it with gt4sd-saving, running inference on it, and finally uploading it to the model hub.

    I tested everything with the example from the README

    Minors:

    • Adding a method filter_stubbed to the molecular RT that removes stub-like molecules ("invalid SELFIES").
    • Bumping paccmann_gp dependency
    enhancement cla-signed 
    opened by jannisborn 0
  • RegressionTransformer saving pipeline

    Is your feature request related to a problem? Please describe: gt4sd-saving does not fully support the RT.

    ToDo:

    • Implement get_filepath_mappings_for_training_pipeline_arguments
    • Save inference.json to model dir
    enhancement 
    opened by jannisborn 0
  • Disentangle properties from algorithms

    Is your feature request related to a problem? Please describe: Currently, the properties submodule imports from algorithms.core and thus also from that __init__. In that __init__, we register all the training pipelines, and thus one needs to have all those dependencies installed, including torchdrug, guacamol_baselines and other VCS requirements.

    Describe the solution you'd like Creating a submodule gt4sd.core that specifies base classes used by multiple submodules like gt4sd.algorithms or gt4sd.properties

    Describe alternatives you've considered Do the imports only when someone calls list_available_algorithms

    NOTE: When creating gt4sd.core we have to make sure that all the rest remains functional, including relative imports, Jupyter notebooks (should be fine since we barely import from algorithms.core directly) and in particular also the documentation.

    enhancement 
    opened by jannisborn 0
  • Add methods for artifact-based property predictors

    Is your feature request related to a problem? Please describe: Currently the artifact-based property predictors (like gt4sd.properties.molecules.core.Tox21) are not usable as functions via gt4sd.properties.molecules.tox_21, unlike all the non-artifact-based properties. Moving the functions there would yield circular import issues.

    Describe the solution you'd like A small refactor that goes around the circular imports

    enhancement 
    opened by jannisborn 0
  • Refactor AlgorithmConfiguration baseclass

    Inconsistent types between the AlgorithmConfiguration base class and the child ConfigurablePropertyAlgorithmConfiguration, concerning attributes like domain but also methods like ensure_artifacts_for_version (class methods in the base class but instance methods in the child class).

    A simple refactor into 3 instead of 2 classes should fix this.

    Originally posted by @jannisborn in https://github.com/GT4SD/gt4sd-core/pull/121#discussion_r943649339

    • So the ones in the constructor, for lines like self.domain = domain, say: error: Cannot assign to class variable "domain" via instance. That's because in the parent class (AlgorithmConfiguration) we set it as domain: ClassVar[str].
    • The ones in the signatures, like get_application_prefix which returns a str, are because in the parent class those are class methods, not instance methods. The error is: Signature of "get_application_prefix" incompatible with supertype "AlgorithmConfiguration".

    It might be fixable by a refactor, but I'm not sure it's worth it.
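
    For illustration, a minimal standalone example (class names hypothetical) of the ClassVar pattern that triggers this mypy error:

    from typing import ClassVar

    class BaseConfiguration:
        # declared as a class variable in the parent
        domain: ClassVar[str] = "materials"

    class ChildConfiguration(BaseConfiguration):
        def __init__(self, domain: str) -> None:
            # mypy: error: Cannot assign to class variable "domain" via instance
            self.domain = domain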

    refactoring 
    opened by jannisborn 0