Benchmark VAE - Library for Variational Autoencoder benchmarking

Overview

Documentation

This library implements some of the most common (Variational) Autoencoder models. In particular, it makes it possible to run benchmark experiments and comparisons by training the models with the same autoencoding neural-network architecture. The "make your own autoencoder" feature lets you train any of these models with your own data and your own encoder and decoder neural networks.

Installation

To install the latest version of this library, run the following using pip:

$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git

or, alternatively, you can clone the GitHub repo to access the tests, tutorials and scripts:

$ git clone https://github.com/clementchadebec/benchmark_VAE.git

and install the library:

$ cd benchmark_VAE
$ pip install -e .
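
You can then check that the library is importable (a quick sanity check, assuming the install succeeded):

$ python -c "import pythae"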

Available Models

Below is the list of the models currently implemented in the library.

Models                                        | Training example | Paper | Official Implementation
Autoencoder (AE)                              | Open In Colab    |       |
Variational Autoencoder (VAE)                 | Open In Colab    | link  |
Beta Variational Autoencoder (Beta_VAE)       | Open In Colab    | link  |
Importance Weighted Autoencoder (IWAE)        | Open In Colab    | link  | link
Wasserstein Autoencoder (WAE)                 | Open In Colab    | link  | link
Info Variational Autoencoder (INFOVAE_MMD)    | Open In Colab    | link  |
VAMP Autoencoder (VAMP)                       | Open In Colab    | link  | link
Hamiltonian VAE (HVAE)                        | Open In Colab    | link  | link
Regularized AE with L2 decoder param (RAE_L2) | Open In Colab    | link  | link
Regularized AE with gradient penalty (RAE_GP) | Open In Colab    | link  | link
Riemannian Hamiltonian VAE (RHVAE)            | Open In Colab    | link  |

See the results for all aforementioned models

Available Samplers

Below is the list of the samplers currently implemented in the library.

Samplers                                   | Models               | Paper | Official Implementation
Normal prior (NormalSampler)               | all models           | link  |
Gaussian mixture (GaussianMixtureSampler)  | all models           | link  | link
VAMP prior sampler (VAMPSampler)           | VAMP                 | link  | link
Manifold sampler (RHVAESampler)            | RHVAE                | link  |
Two stage VAE sampler (TwoStageVAESampler) | all VAE based models | link  | link

Launching a model training

To launch a model training, you only need to build and call a TrainingPipeline instance.

>>> from pythae.pipelines import TrainingPipeline
>>> from pythae.models import VAE, VAEConfig
>>> from pythae.trainers import BaseTrainingConfig

>>> # Set up the training configuration
>>> my_training_config = BaseTrainingConfig(
...	output_dir='my_model',
...	num_epochs=50,
...	learning_rate=1e-3,
...	batch_size=200,
...	steps_saving=None
... )
>>> # Set up the model configuration
>>> my_vae_config = VAEConfig(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
>>> # Build the model
>>> my_vae_model = VAE(
...	model_config=my_vae_config
... )
>>> # Build the Pipeline
>>> pipeline = TrainingPipeline(
... 	training_config=my_training_config,
... 	model=my_vae_model
...	)
>>> # Launch the Pipeline
>>> pipeline(
...	train_data=your_train_data, # must be torch.Tensor or np.array 
...	eval_data=your_eval_data # must be torch.Tensor or np.array
...	)

At the end of training, the best model weights, model configuration and training configuration are stored in a final_model folder available in my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss (with my_model being the output_dir argument of the BaseTrainingConfig). If you further set the steps_saving argument to a certain value, folders named checkpoint_epoch_k containing the best model weights, optimizer, scheduler, configuration and training configuration at epoch k will also appear in my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss.
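
To reload the best model afterwards, you can use the load_from_folder classmethod also shown later in this README. A minimal sketch, assuming a VAE was trained (the timestamped folder name below is illustrative):

>>> from pythae.models import VAE
>>> my_trained_vae = VAE.load_from_folder(
...	'my_model/VAE_training_YYYY-MM-DD_hh-mm-ss/final_model'
... )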

Launching a training on benchmark datasets

We also provide a training script example here that can be used to train the models on benchmark datasets (mnist, cifar10, celeba, ...). The script can be launched with the following command line:

python training.py --dataset mnist --model_name ae --model_config 'configs/ae_config.json' --training_config 'configs/base_training_config.json'

See the README.md for further details on this script.

Launching data generation

To launch the data generation process from a trained model, you only need to build your sampler. For instance, to generate new data with a NormalSampler, run the following.

>>> from pythae.models import VAE
>>> from pythae.samplers import NormalSampler
>>> # Retrieve the trained model
>>> my_trained_vae = VAE.load_from_folder(
...	'path/to/your/trained/model'
...	)
>>> # Define your sampler
>>> my_sampler = NormalSampler(
...	model=my_trained_vae
...	)
>>> # Generate samples
>>> gen_data = my_sampler.sample(
...	num_samples=50,
...	batch_size=10,
...	output_dir=None,
...	return_gen=True
...	)

If you set output_dir to a specific path, the generated images will be saved as .png files named 00000000.png, 00000001.png, ... The samplers can be used with any model as long as the sampler suits it. For instance, a GaussianMixtureSampler instance can be used to generate from any model, but a VAMPSampler will only be usable with a VAMP model. Check here to see which samplers apply to your model.
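
Some samplers must also be fitted on training data before they can generate. As a minimal sketch (assuming a trained VAE; the n_components value below is illustrative), a GaussianMixtureSampler could be fitted and used as follows:

>>> from pythae.models import VAE
>>> from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig
>>> # Retrieve the trained model
>>> my_trained_vae = VAE.load_from_folder(
...	'path/to/your/trained/model'
...	)
>>> # Build the sampler and fit it on your training data
>>> gmm_sampler = GaussianMixtureSampler(
...	sampler_config=GaussianMixtureSamplerConfig(n_components=10), # illustrative value
...	model=my_trained_vae
...	)
>>> gmm_sampler.fit(your_train_data)
>>> # Generate samples
>>> gen_data = gmm_sampler.sample(
...	num_samples=50,
...	return_gen=True
...	)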

Define your own Autoencoder architecture

Pythae gives you the possibility to define your own neural networks within the VAE models. For instance, say you want to train a Wasserstein AE with a specific encoder and decoder, you can do the following:

>>> import torch
>>> from pythae.models.nn import BaseEncoder, BaseDecoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_Encoder(BaseEncoder):
...	def __init__(self, args=None): # args is a ModelConfig instance
...		BaseEncoder.__init__(self)
...		self.layers = my_nn_layers() # replace with your own nn.Module layers
...		
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		out = self.layers(x)
...		output = ModelOutput(
...			embedding=out # Set the output from the encoder in a ModelOutput instance
...		)
...		return output
...
>>> class My_Decoder(BaseDecoder):
...	def __init__(self, args=None): # args is a ModelConfig instance
...		BaseDecoder.__init__(self)
...		self.layers = my_nn_layers() # replace with your own nn.Module layers
...		
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		out = self.layers(x)
...		output = ModelOutput(
...			reconstruction=out # Set the output from the decoder in a ModelOutput instance
...		)
...		return output
...
>>> my_encoder = My_Encoder()
>>> my_decoder = My_Decoder()

And now build the model

>>> from pythae.models import WAE_MMD, WAE_MMD_Config
>>> # Set up the model configuration 
>>> my_wae_config = WAE_MMD_Config(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
...
>>> # Build the model
>>> my_wae_model = WAE_MMD(
...	model_config=my_wae_config,
...	encoder=my_encoder, # pass your encoder as argument when building the model
...	decoder=my_decoder # pass your decoder as argument when building the model
... )

Important note 1: For all AE-based models (AE, WAE, RAE_L2, RAE_GP), both the encoder and decoder must return a ModelOutput instance. For the encoder, the ModelOutput instance must contain the embeddings under the key embedding. For the decoder, the ModelOutput instance must contain the reconstructions under the key reconstruction.

Important note 2: For all VAE-based models (VAE, Beta_VAE, IWAE, HVAE, VAMP, RHVAE), both the encoder and decoder must return a ModelOutput instance. For the encoder, the ModelOutput instance must contain both the embeddings and the log-covariances (each of shape batch_size x latent_space_dim) under the keys embedding and log_covariance respectively. For the decoder, the ModelOutput instance must contain the reconstructions under the key reconstruction.
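
For illustration, here is a minimal sketch of a VAE-style encoder satisfying note 2. The layer names and sizes are hypothetical and assume (1, 28, 28) inputs with a 10-dimensional latent space:

>>> import torch
>>> import torch.nn as nn
>>> from pythae.models.nn import BaseEncoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_VAE_Encoder(BaseEncoder):
...	def __init__(self, args=None):
...		BaseEncoder.__init__(self)
...		self.layers = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
...		self.embedding_layer = nn.Linear(256, 10) # outputs the means
...		self.log_var_layer = nn.Linear(256, 10) # outputs the log-covariances
...
...	def forward(self, x: torch.Tensor) -> ModelOutput:
...		h = self.layers(x)
...		return ModelOutput(
...			embedding=self.embedding_layer(h), # shape (batch_size, latent_dim)
...			log_covariance=self.log_var_layer(h) # shape (batch_size, latent_dim)
...		)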

Using benchmark neural nets

You can also find predefined neural-network architectures for the most common data sets (i.e. MNIST, CIFAR, CELEBA, ...) that can be loaded as follows:

>>> from pythae.models.nn.benchmark.mnist import (
...	Encoder_AE_MNIST, # For AE-based models (returns only embeddings)
...	Encoder_VAE_MNIST, # For VAE-based models (returns embeddings and log_covariances)
...	Decoder_AE_MNIST
... )

Replace mnist with cifar or celeba to access the other neural nets.
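
For instance, a minimal sketch that builds a VAE with the predefined MNIST networks (the latent dimension is illustrative):

>>> from pythae.models import VAE, VAEConfig
>>> from pythae.models.nn.benchmark.mnist import Encoder_VAE_MNIST, Decoder_AE_MNIST
>>> model_config = VAEConfig(
...	input_dim=(1, 28, 28),
...	latent_dim=10
... )
>>> my_vae_model = VAE(
...	model_config=model_config,
...	encoder=Encoder_VAE_MNIST(model_config),
...	decoder=Decoder_AE_MNIST(model_config)
... )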

Getting your hands on the code

To help you understand how pythae works and how you can train your models with this library, we also provide tutorials:

  • making_your_own_autoencoder.ipynb shows you how to pass your own networks to the models implemented in pythae Open In Colab

  • the models_training folder provides notebooks showing how to train each implemented model and how to sample from it using pythae.samplers.

  • the scripts folder provides, in particular, an example of a training script to train the models on benchmark data sets (mnist, cifar10, celeba, ...)

Dealing with issues

If you experience any issues while running the code, or want to request new features or models, please open an issue on GitHub.

Contributing 🚀

You want to contribute to this library by adding a model or a sampler, or simply fixing a bug? That's awesome! Thank you! Please see CONTRIBUTING.md to follow the main contributing guidelines.

Results

(Each cell originally showed images of generated samples; only the model/sampler pairs are reproduced here.)

Models                                            | MNIST     | CELEBA
AE + GaussianMixtureSampler                       | [samples] | [samples]
VAE + NormalSampler                               | [samples] | [samples]
VAE + GaussianMixtureSampler                      | [samples] | [samples]
Beta-VAE + NormalSampler                          | [samples] | [samples]
IWAE + NormalSampler                              | [samples] | [samples]
WAE + NormalSampler                               | [samples] | [samples]
INFO VAE + NormalSampler                          | [samples] | [samples]
VAMP + VAMPSampler                                | [samples] | [samples]
HVAE + NormalSampler                              | [samples] | [samples]
RAE_L2 + GaussianMixtureSampler                   | [samples] | [samples]
RAE_GP + GaussianMixtureSampler                   | [samples] | [samples]
Riemannian Hamiltonian VAE (RHVAE) + RHVAESampler | [samples] | [samples]
Comments
  • Doubt regarding the Hamiltonian calculations for RHVAE model.

    In the paper, Hamiltonian is defined as follows:

    H(z, v) = U(z) + K(v) = -0.5*log(det(G^-1(z))) + 0.5*v^T*v
    

    But in the code, I see extra terms, like the addition of a joint probability term and a G inverse multiplied into the kinetic energy term. Are these two equations equivalent?

    question 
    opened by shikhar2333 12
  • Integration with the Hugging Face Hub

    As I train models, I would like to be able to easily share them with other people and document them well. I would also like to be able to access other trained models from the community.

    I would like to have an integration with the Hugging Face Hub (disclaimer: I'm a member of the OS team there). I would like to be able to do model.push_to_hub("osanseviero/my_vae") and get a model directly on the Hub. Some of the benefits of sharing models through the Hub:

    • versioning, commit history and diffs
    • repos provide useful metadata about their tasks, languages, metrics, etc., which makes them discoverable
    • multiple features, from TensorBoard visualizations to leaderboards and more
    feature request 
    opened by osanseviero 11
  • Can we use wandb sweep with the wandb callbacks provided?

    Is there a way to integrate wandb sweep with the available wandb callback? If not, could you tell me how exactly to catch the loss values of a model to integrate them into a wandb sweep?

    question 
    opened by shrave 4
  • questions on customized autoencoder

    Hi @clementchadebec ,

    Thanks for pointing me to the notebook yesterday on customized autoencoder. Just have several questions:

    1. Why do the output dimensions in the encoder grow with layer depth, while the output dimensions in the decoder shrink with layer depth? I am pretty new to autoencoders. Is this architecture specific to variational autoencoders?

    2. What is the ModelOutput function used for? I read the help page saying "Base ModelOutput class fixing the output type from the models." Do you mean fixing the output type to a torch tensor type?

    3. Is the method suitable for 1-dimensional data? Specifically, for my customized autoencoder model the dimension is going to be very high after the encoder. My original data dimension is 6241 * 1, but an MLP works fine on my 1D data.

    Thanks,

    Shan

    question 
    opened by shannjiang 4
  • Can't install due to pickle5 dependency

    Describe the bug

    pickle5 backports features from future Python versions, but it does not exist for newer Python versions:

    package pickle5-0.0.10-py37h8f50634_0 requires python >=3.7,<3.8.0a0, but none of the providers can be installed
    

    To reproduce: use Python 3.8.13 and try to install the library with micromamba.

    Expected behavior: the library installs and imports correctly.

    Desktop:

    • OS: Mac OS Big Sur - Apple M1 chip
    opened by VolodyaCO 3
  • cifar10 data visualization

    Hi @clementchadebec, the following code loads the CIFAR10 data as NumPy train & eval arrays:

    cifar10_trainset = datasets.CIFAR10(root='../../data', train=True, download=True, transform=None)
    # array
    train_dataset = cifar10_trainset.data[:-10000].reshape(-1, 3, 32, 32) #(40k,3,32,32)
    eval_dataset = cifar10_trainset.data[-10000:].reshape(-1, 3, 32, 32) # (10k,3,32,32)
    

    When I try to visualize it, why is it not an image from the CIFAR10 dataset? What am I doing wrong here?

    npimg = train_dataset[0] # first image from training data
    img_ar = np.transpose(npimg, (1,2,0))
    plt.imshow(img_ar)
    

    image

    *Also, I can't assign a label (it gets hidden) when I create new issues.

    Thanks, Prachi

    question 
    opened by jprachir 3
  • Model Request: Poincare VAE

    It would be great to see the Poincare VAE (or a similar hyperbolic geometry VAE) implemented in pythae!

    Paper: https://arxiv.org/abs/1901.06033 Code: https://github.com/emilemathieu/pvae

    help wanted new model 
    opened by tomhosking 3
  • Is this library compatible with custom datasets?

    Hello,

    Thank you for your excellent work! As my question states, I wonder how to use this library with a custom dataset. I am new to machine learning and want to train a VAE on a relatively large dataset. I looked at the provided examples for training different models, but it seemed that I had to load the whole dataset from a .npz file, similar to the MNIST or CelebA datasets. Is there a way to write my own data loader for a custom dataset and then use it with this library?

    Thank you again for your work!

    feature request 
    opened by NamelessGaki 3
  • RHVAE error: mat1 and mat2 shapes cannot be multiplied

    Hi,

    I am trying to use RHVAE to perform data augmentation but got an error: ModelError: Error when calling forward method from model. Potential issues:

    • Wrong model architecture -> check encoder, decoder and metric architecture if you provide yours
    • The data input dimension provided is wrong -> when no encoder, decoder or metric provided, a network is built automatically but requires the shape of the flatten input data. Exception raised: <class 'RuntimeError'> with message: mat1 and mat2 shapes cannot be multiplied (2x16384 and 1024x10)

    The input dimension for my data is [1,79,79] instead of [1,28,28] as in the tutorial.

    Every parameter in my model is the same as in the tutorial except the input_dim parameter:

    config = BaseTrainingConfig(
        output_dir='my_model',
        learning_rate=1e-4,
        batch_size=100,
        num_epochs=100,
    )

    model_config = RHVAEConfig(
        input_dim=(1, 79, 79),
        latent_dim=10,
        n_lf=1,
        eps_lf=0.001,
        beta_zero=0.3,
        temperature=1.5,
        regularization=0.001
    )

    model = RHVAE(
        model_config=model_config,
        encoder=Encoder_VAE_MNIST(model_config),
        decoder=Decoder_AE_MNIST(model_config)
    )

    Any idea how to modify the code to run it?

    good first issue 
    opened by shannjiang 3
  • how Sampler works?

    Hi Clément: Great work on introducing this VAE-oriented library! You have made it very modular, with predefined models, pipelines, and so forth. Can you share brief details on how the samplers work under the hood for generation?

    Prachi

    question 
    opened by jprachir 2
  • UnboundLocalError: local variable 'best_model' referenced before assignment

    Hello there, first of all thank you for this repo; I'm quite new to ML and to PyTorch, and this helps me a lot! Regarding this issue: I'm running pythae on Google Colab, after installing it simply using pip install pythae. I experienced an "UnboundLocalError: local variable 'best_model' referenced before assignment" while trying to train a VAE on my custom data (torch tensors). I show you the snippet together with the resulting error.

    [screenshots of the snippet and the resulting error]

    Here X_train is a torch.Tensor of shape (28000, 1, 131, 2), each element being a double in [0, 300]. I can't figure out whether I'm doing something wrong, so I kindly ask for your help.

    opened by Mirco-Ramo 2
  • Returning callback results when calling pipelines' train method

    Closes #62.

    This PR is a proof of concept for returning values from callbacks which might be useful for immediate manipulation after the pipeline has been run.

    opened by VolodyaCO 0
  • Allow distributed training

    Allow distributed training

    As of now, the library only supports training on a single GPU, which can be a limiting factor when training models on large databases. It would be nice to be able to perform distributed training on multiple GPUs.

    Envisioned solution 💡: I am thinking of integrating FSDP into the library.

    enhancement feature request 
    opened by clementchadebec 0
  • Implementation of 3D MSSSIM

    As discussed in issue #68 , here is a PR for the 3D MSSSIM.

    I just re-adapted for Pythae, in this PR, the 3D MSSSIM code from the repository that you already used.

    Maybe this requires further tests; let me know what you think of it.

    Ravi

    opened by ravih18 0
  • MSSSIM VAE is not working with 3D inputs

    Hello @clementchadebec

    MSSSIM VAE model returns an error when using 3D images for training.

    Indeed, the MSSSIM implementation in benchmark_VAE/src/pythae/models/msssim_vae/msssim_vae_utils.py only works for 2D images.

    I found the following implementation that seems to work with 3D images: https://github.com/VainF/pytorch-msssim.

    I can make a PR to add it if you think it is a good idea.

    Otherwise, I can see if I can generalize the current implementation to 3D images!

    Let me know what you think of it.

    Ravi

    feature request 
    opened by ravih18 1
  • Multimodality Data Training

    Hi, thanks for the integrated framework for VAE learning; it works well with a single-modality dataset. Now I want to perform multi-modality training with benchmark_VAE, but I cannot find an introduction to customizing the reconstruction loss, for instance calculating the loss per modality and combining the terms into the final loss. Can you provide some ideas on how to build a custom reconstruction loss function?

    feature request 
    opened by JunweiLiu0208 3
Releases (latest: v0.0.9)
  • v0.0.9 (Oct 19, 2022)

    New features

    • Integration of comet_ml through a CometCallback training callback, further to #55

    Bugs fixed 🐛

    • Fixed pickle5 compatibility with Python >= 3.8
    • update conda-forge feedstock with correct requirements (https://github.com/conda-forge/pythae-feedstock/pull/11)
  • v0.0.8 (Sep 7, 2022)

    New Features:

    • Added MLFlowCallback in TrainingCallbacks, further to #44
    • Allow custom datasets inheriting from torch.utils.data.Dataset to be passed as inputs to the training pipeline, further to #35:
    def __call__(
            self,
            train_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset],
            eval_data: Union[np.ndarray, torch.Tensor, torch.utils.data.Dataset] = None,
            callbacks: List[TrainingCallback] = None,
        ):
    
    • Added implementations of the IWAE variants MIWAE, PIWAE and CIWAE (https://arxiv.org/abs/1802.04537)

    Minor changes

    • Unified data handling in FactorVAE with the other models (half of the batch is used for reconstruction and the other half for the factorial representation)
    • Changed the model sanity-check method in trainers (use loaders in the check instead of datasets)
    • Added the encoder/decoder losses needed in CoupledOptimizerTrainer and updated tests
  • v0.0.7 (Sep 3, 2022)

    New features

    • Added a PoincareVAE model and a PoincareDiskSampler implementation following https://arxiv.org/abs/1901.06033

    Minor changes

    • Added VAE LSTM example
    • Added reproducibility reports
  • v0.0.6 (Jul 22, 2022)

    New features

    • Added an interpolate method allowing linear interpolation between given inputs in the latent space of any pythae.models (further to #34)
    • Added a reconstruct method allowing easy reconstruction of given input data with any pythae.models.
  • v0.0.5 (Jul 7, 2022)

  • v0.0.3 (Jul 5, 2022)

  • v0.0.2 (Jul 4, 2022)

    New features

    • Added a push_to_hf_hub method allowing pythae.models instances to be pushed to the HuggingFace Hub
    • Added a load_from_hf_hub method allowing pre-trained models to be downloaded from the Hub
    • Added tutorials (HF Hub saving and reloading, and wandb callbacks)
  • v0.0.1 (Jun 14, 2022)
