A highly efficient and modular implementation of Gaussian Processes in PyTorch

Overview


GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.

Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our LazyTensor interface, or by composing many of our already existing LazyTensors. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
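
As a rough illustration of this interface (a sketch, not part of the original README; it assumes a GPyTorch 1.x release where gpytorch.lazy and inv_matmul are available), existing LazyTensors can be composed with ordinary arithmetic, and solves against the result are dispatched to GPyTorch's iterative routines rather than a dense Cholesky factorization:

import torch
from gpytorch.lazy import DiagLazyTensor, NonLazyTensor

A = torch.randn(1000, 200)
K = NonLazyTensor(A @ A.transpose(-1, -2))          # a PSD kernel-like matrix wrapped as a LazyTensor
covar = K + DiagLazyTensor(0.1 * torch.ones(1000))  # diagonal noise added lazily; no new dense matrix is formed
rhs = torch.randn(1000, 3)
solve = covar.inv_matmul(rhs)                       # linear solve (iterative CG for matrices this large, by default)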

GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility (SKI/KISS-GP, stochastic Lanczos expansions, LOVE, SKIP, stochastic variational deep kernel learning, ...); (3) easy integration with deep learning frameworks.

Examples, Tutorials, and Documentation

See our numerous examples and tutorials on how to construct all sorts of models in GPyTorch.
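
For a flavor of the basic API, here is a minimal exact GP regression sketch (closely following the simple regression tutorial; see the linked examples for the authoritative, fully explained version):

import math
import torch
import gpytorch

# Toy data: a noisy sine curve
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * (2 * math.pi)) + 0.1 * torch.randn(train_x.size(0))

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(self.mean_module(x), self.covar_module(x))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Fit hyperparameters by maximizing the exact marginal log likelihood
model.train()
likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

# Posterior predictions at new inputs
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(torch.linspace(0, 1, 51)))
    mean = pred.mean
    lower, upper = pred.confidence_region()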

Installation

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.7

Install GPyTorch using pip or conda:

pip install gpytorch
conda install gpytorch -c gpytorch

(To use packages globally but install GPyTorch as a user-only package, use pip install --user above.)

Latest (unstable) version

To upgrade to the latest (unstable) version, run

pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git

ArchLinux Package

Note: Experimental AUR package. For most users, we recommend installation by conda or pip.

GPyTorch is also available on the ArchLinux User Repository (AUR). You can install it with an AUR helper, like yay, as follows:

yay -S python-gpytorch

To discuss any issues related to this AUR package refer to the comments section of python-gpytorch.

Citing Us

If you use GPyTorch, please cite the following paper:

Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).

@inproceedings{gardner2018gpytorch,
  title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
  author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
  booktitle={Advances in Neural Information Processing Systems},
  year={2018}
}

Development

To run the unit tests:

python -m unittest

By default, the random seeds are locked down for some of the tests. If you want to run the tests without locking down the seed, run

UNLOCK_SEED=true python -m unittest

If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:

pip install pre-commit
pre-commit install

From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.

The Team

GPyTorch is primarily maintained by:

We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.

Acknowledgements

Development of GPyTorch is supported by funding from the Bill and Melinda Gates Foundation, the National Science Foundation, and SAP.

Comments
  • Add priors [WIP]


    This is an early attempt at adding priors. Lots of callsites in the code aren't updated yet, so this will fail spectacularly.

    The main thing we need to figure out is how to properly do the optimization using standard gpytorch optimizers that don't support bounds. We should probably modify the smoothed uniform prior so it has full support and is differentiable everywhere but decays rapidly outside the given bounds. Does this sound reasonable?
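
    To make the idea concrete, here is a toy sketch (not the implementation in this PR) of an unnormalized log-density that is roughly flat on [a, b], smooth everywhere, has full support, and decays rapidly outside the bounds; the names a, b, and scale are illustrative only:

    import torch
    import torch.nn.functional as F

    def smoothed_uniform_log_prob(x, a=0.0, b=1.0, scale=0.01):
        # approximately 0 inside [a, b]; decays with slope ~1/scale outside.
        # softplus keeps the transition smooth, so gradients exist everywhere.
        below = F.softplus((a - x) / scale)
        above = F.softplus((x - b) / scale)
        return -(below + above)

    # e.g. smoothed_uniform_log_prob(torch.tensor([-0.5, 0.5, 1.5])) is large and negative outside [0, 1]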

    opened by Balandat 50
  • Using batch-GP for learning a single common GP over multiple experiments


    Howdy folks,

    Reading the docs, I understand that batch-GP is meant to learn k independent GPs, from k independent labels y over a common data set x.

    y1 = f1(x), y2 = f2(x), ..., yk = fk(x) , for k independent GPs.

    But how would one go about using batch-GP to learn a single common GP, from k independent experiments of the same underlying process?

    y1=f(x1), y2 = f(x2), ..., yk=f(xk) for one and the same GP

    For instance, I have k sets of data and labels (y) representing measurements of how temperature changes with altitude (x) (e.g. from weather balloons launched at k different geographical locations), and I want to induce a GP prior that represents the temperature change over altitude between mean sea level and some maximum altitude, marginalized over all geographical areas.
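
    For reference, a small shape-only sketch (hypothetical data, not from this thread) of the batch-GP setting described in the docs versus the pooled setting asked about here:

    import torch

    k, n = 4, 100
    # Batch-GP setting from the docs: k independent functions over one shared x
    shared_x = torch.linspace(0, 1, n)        # shape (n,)
    batch_y = torch.randn(k, n)               # one row of labels per independent GP
    # This question: k experiments (x_i, y_i) of one and the same GP;
    # one simple option is to pool them into a single training set
    xs = [torch.rand(n) for _ in range(k)]    # k different altitude grids
    ys = [torch.randn(n) for _ in range(k)]   # k corresponding temperature profiles
    pooled_x = torch.cat(xs)                  # shape (k * n,)
    pooled_y = torch.cat(ys)                  # shape (k * n,)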

    Thanks in advance

    Galto

    question 
    opened by Galto2000 25
  • Ensure compatibility with breaking changes in pytorch master branch


    This is a run of the simple_gp_regression example notebook on the current alpha_release branch. Running kissgp_gp_regression_cuda yields similar errors

    import math
    import torch
    import gpytorch
    from matplotlib import pyplot as plt
    
    %matplotlib inline
    %load_ext autoreload
    %autoreload 2
    
    from torch.autograd import Variable
    # Training data is 11 points in [0,1] inclusive regularly spaced
    train_x = Variable(torch.linspace(0, 1, 11))
    # True function is sin(2*pi*x) with Gaussian noise N(0,0.04)
    train_y = Variable(torch.sin(train_x.data * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2)
    
    from torch import optim
    from gpytorch.kernels import RBFKernel
    from gpytorch.means import ConstantMean
    from gpytorch.likelihoods import GaussianLikelihood
    from gpytorch.random_variables import GaussianRandomVariable
    
    # We will use the simplest form of GP model, exact inference
    class ExactGPModel(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
            # Our mean function is constant in the interval [-1,1]
            self.mean_module = ConstantMean(constant_bounds=(-1, 1))
            # We use the RBF kernel as a universal approximator
            self.covar_module = RBFKernel(log_lengthscale_bounds=(-5, 5))
        
        def forward(self, x):
            mean_x = self.mean_module(x)
            covar_x = self.covar_module(x)
            # Return model output as GaussianRandomVariable
            return GaussianRandomVariable(mean_x, covar_x)
    
    # initialize likelihood and model
    likelihood = GaussianLikelihood(log_noise_bounds=(-5, 5))
    model = ExactGPModel(train_x.data, train_y.data, likelihood)
    
    # Find optimal model hyperparameters
    model.train()
    likelihood.train()
    
    # Use adam optimizer on model and likelihood parameters
    optimizer = optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.1)
    optimizer.n_iter = 0
    
    training_iter = 50
    for i in range(training_iter):
        # Zero gradients from previous iteration
        optimizer.zero_grad()
        # Output from model
        output = model(train_x)
        # Calc loss and backprop gradients
        loss = -model.marginal_log_likelihood(likelihood, output, train_y)
        loss.backward()
        optimizer.n_iter += 1
        print('Iter %d/%d - Loss: %.3f   log_lengthscale: %.3f   log_noise: %.3f' % (
            i + 1, training_iter, loss.data[0],
            model.covar_module.log_lengthscale.data[0, 0],
            model.likelihood.log_noise.data[0]
        ))
        optimizer.step()
    
    ---------------------------------------------------------------------------
    
    TypeError                                 Traceback (most recent call last)
    
    <ipython-input-8-bdcf88774fd0> in <module>()
         14     output = model(train_x)
         15     # Calc loss and backprop gradients
    ---> 16     loss = -model.marginal_log_likelihood(likelihood, output, train_y)
         17     loss.backward()
         18     optimizer.n_iter += 1
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/models/exact_gp.py in marginal_log_likelihood(self, likelihood, output, target, n_data)
         43             raise RuntimeError('You must train on the training targets!')
         44 
    ---> 45         mean, covar = likelihood(output).representation()
         46         n_data = target.size(-1)
         47         return gpytorch.exact_gp_marginal_log_likelihood(covar, target - mean).div(n_data)
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/module.py in __call__(self, *inputs, **kwargs)
        158                 raise RuntimeError('Input must be a RandomVariable or Variable, was a %s' %
        159                                    input.__class__.__name__)
    --> 160         outputs = self.forward(*inputs, **kwargs)
        161         if isinstance(outputs, Variable) or isinstance(outputs, RandomVariable) or isinstance(outputs, LazyVariable):
        162             return outputs
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/likelihoods/gaussian_likelihood.py in forward(self, input)
         14         assert(isinstance(input, GaussianRandomVariable))
         15         mean, covar = input.representation()
    ---> 16         noise = gpytorch.add_diag(covar, self.log_noise.exp())
         17         return GaussianRandomVariable(mean, noise)
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/__init__.py in add_diag(input, diag)
         36         return input.add_diag(diag)
         37     else:
    ---> 38         return _add_diag(input, diag)
         39 
         40 
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/functions/__init__.py in add_diag(input, diag)
         18                        component added.
         19     """
    ---> 20     return AddDiag()(input, diag)
         21 
         22 
    
    
    /data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/functions/add_diag.py in forward(self, input, diag)
         12         if input.ndimension() == 3:
         13             diag_mat = diag_mat.unsqueeze(0).expand_as(input)
    ---> 14         return diag_mat.mul_(val).add_(input)
         15 
         16     def backward(self, grad_output):
    
    
    TypeError: mul_ received an invalid combination of arguments - got (Variable), but expected one of:
     * (float value)
          didn't match because some of the arguments have invalid types: (!Variable!)
     * (torch.FloatTensor other)
          didn't match because some of the arguments have invalid types: (!Variable!)
    
    compatibility 
    opened by Balandat 25
  • import gpytorch error


    $ sudo python setup.py install [sudo] password for ubuntu: running install running bdist_egg running egg_info writing dependency_links to gpytorch.egg-info/dependency_links.txt writing top-level names to gpytorch.egg-info/top_level.txt writing requirements to gpytorch.egg-info/requires.txt writing gpytorch.egg-info/PKG-INFO reading manifest file 'gpytorch.egg-info/SOURCES.txt' writing manifest file 'gpytorch.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py copying gpytorch/libfft/init.py -> build/lib.linux-x86_64-3.5/gpytorch/libfft running build_ext generating cffi module 'build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c' already up-to-date creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/gpytorch creating build/bdist.linux-x86_64/egg/gpytorch/means copying build/lib.linux-x86_64-3.5/gpytorch/means/init.py -> build/bdist.linux-x86_64/egg/gpytorch/means copying build/lib.linux-x86_64-3.5/gpytorch/means/mean.py -> build/bdist.linux-x86_64/egg/gpytorch/means copying build/lib.linux-x86_64-3.5/gpytorch/means/constant_mean.py -> build/bdist.linux-x86_64/egg/gpytorch/means copying build/lib.linux-x86_64-3.5/gpytorch/gp_model.py -> build/bdist.linux-x86_64/egg/gpytorch copying build/lib.linux-x86_64-3.5/gpytorch/init.py -> build/bdist.linux-x86_64/egg/gpytorch creating build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/init.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/constant_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/independent_random_variables.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/samples_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/gaussian_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/batch_random_variables.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/categorical_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/bernoulli_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables creating build/bdist.linux-x86_64/egg/gpytorch/likelihoods copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/init.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/gaussian_likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/bernoulli_likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods creating build/bdist.linux-x86_64/egg/gpytorch/lazy copying build/lib.linux-x86_64-3.5/gpytorch/lazy/init.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy copying build/lib.linux-x86_64-3.5/gpytorch/lazy/kronecker_product_lazy_variable.py -> 
build/bdist.linux-x86_64/egg/gpytorch/lazy copying build/lib.linux-x86_64-3.5/gpytorch/lazy/toeplitz_lazy_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy copying build/lib.linux-x86_64-3.5/gpytorch/lazy/lazy_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy copying build/lib.linux-x86_64-3.5/gpytorch/module.py -> build/bdist.linux-x86_64/egg/gpytorch creating build/bdist.linux-x86_64/egg/gpytorch/inference copying build/lib.linux-x86_64-3.5/gpytorch/inference/init.py -> build/bdist.linux-x86_64/egg/gpytorch/inference creating build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/init.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/exact_gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/variational_gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models copying build/lib.linux-x86_64-3.5/gpytorch/inference/inference.py -> build/bdist.linux-x86_64/egg/gpytorch/inference creating build/bdist.linux-x86_64/egg/gpytorch/functions copying build/lib.linux-x86_64-3.5/gpytorch/functions/init.py -> build/bdist.linux-x86_64/egg/gpytorch/functions copying build/lib.linux-x86_64-3.5/gpytorch/functions/log_normal_cdf.py -> build/bdist.linux-x86_64/egg/gpytorch/functions copying build/lib.linux-x86_64-3.5/gpytorch/functions/normal_cdf.py -> build/bdist.linux-x86_64/egg/gpytorch/functions copying build/lib.linux-x86_64-3.5/gpytorch/functions/dsmm.py -> build/bdist.linux-x86_64/egg/gpytorch/functions copying build/lib.linux-x86_64-3.5/gpytorch/functions/add_diag.py -> build/bdist.linux-x86_64/egg/gpytorch/functions creating build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/toeplitz.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/interpolation.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/init.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/lincg.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/fft.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/lanczos_quadrature.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/function_factory.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/kronecker_product.py -> build/bdist.linux-x86_64/egg/gpytorch/utils copying build/lib.linux-x86_64-3.5/gpytorch/utils/circulant.py -> build/bdist.linux-x86_64/egg/gpytorch/utils creating build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/init.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/grid_interpolation_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/rbf_kernel.py -> 
build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/spectral_mixture_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels copying build/lib.linux-x86_64-3.5/gpytorch/kernels/index_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels creating build/bdist.linux-x86_64/egg/gpytorch/libfft copying build/lib.linux-x86_64-3.5/gpytorch/libfft/init.py -> build/bdist.linux-x86_64/egg/gpytorch/libfft copying build/lib.linux-x86_64-3.5/gpytorch/libfft/_libfft.abi3.so -> build/bdist.linux-x86_64/egg/gpytorch/libfft byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/mean.py to mean.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/constant_mean.py to constant_mean.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/gp_model.py to gp_model.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/constant_random_variable.py to constant_random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/independent_random_variables.py to independent_random_variables.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/samples_random_variable.py to samples_random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/gaussian_random_variable.py to gaussian_random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/batch_random_variables.py to batch_random_variables.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/random_variable.py to random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/categorical_random_variable.py to categorical_random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/bernoulli_random_variable.py to bernoulli_random_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/likelihood.py to likelihood.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/gaussian_likelihood.py to gaussian_likelihood.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/bernoulli_likelihood.py to bernoulli_likelihood.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/kronecker_product_lazy_variable.py to kronecker_product_lazy_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/toeplitz_lazy_variable.py to toeplitz_lazy_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/lazy_variable.py to lazy_variable.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/module.py to module.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/init.py to init.cpython-35.pyc byte-compiling 
build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/gp_posterior.py to gp_posterior.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/exact_gp_posterior.py to exact_gp_posterior.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/variational_gp_posterior.py to variational_gp_posterior.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/inference.py to inference.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/log_normal_cdf.py to log_normal_cdf.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/normal_cdf.py to normal_cdf.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/dsmm.py to dsmm.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/add_diag.py to add_diag.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/toeplitz.py to toeplitz.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/interpolation.py to interpolation.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/lincg.py to lincg.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/fft.py to fft.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/lanczos_quadrature.py to lanczos_quadrature.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/function_factory.py to function_factory.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/kronecker_product.py to kronecker_product.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/circulant.py to circulant.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/init.py to init.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/grid_interpolation_kernel.py to grid_interpolation_kernel.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/kernel.py to kernel.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/rbf_kernel.py to rbf_kernel.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/spectral_mixture_kernel.py to spectral_mixture_kernel.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/index_kernel.py to index_kernel.cpython-35.pyc byte-compiling build/bdist.linux-x86_64/egg/gpytorch/libfft/init.py to init.cpython-35.pyc creating stub loader for gpytorch/libfft/_libfft.abi3.so byte-compiling build/bdist.linux-x86_64/egg/gpytorch/libfft/_libfft.py to _libfft.cpython-35.pyc creating build/bdist.linux-x86_64/egg/EGG-INFO copying gpytorch.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying gpytorch.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying gpytorch.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying gpytorch.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying gpytorch.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt zip_safe flag not set; analyzing archive contents... 
gpytorch.libfft.pycache._libfft.cpython-35: module references file creating 'dist/gpytorch-0.1-py3.5-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing gpytorch-0.1-py3.5-linux-x86_64.egg removing '/usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg' (and everything under it) creating /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg Extracting gpytorch-0.1-py3.5-linux-x86_64.egg to /usr/local/lib/python3.5/dist-packages gpytorch 0.1 is already the active version in easy-install.pth

    Installed /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg Processing dependencies for gpytorch==0.1 Searching for cffi==1.10.0 Best match: cffi 1.10.0 Adding cffi 1.10.0 to easy-install.pth file

    Using /usr/local/lib/python3.5/dist-packages Searching for pycparser==2.18 Best match: pycparser 2.18 Adding pycparser 2.18 to easy-install.pth file

    Using /usr/local/lib/python3.5/dist-packages Finished processing dependencies for gpytorch==0.1


    $ python
    Python 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import gpytorch
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ubuntu/gpytorch-master/gpytorch/__init__.py", line 3, in <module>
        from .lazy import LazyVariable, ToeplitzLazyVariable
      File "/home/ubuntu/gpytorch-master/gpytorch/lazy/__init__.py", line 2, in <module>
        from .toeplitz_lazy_variable import ToeplitzLazyVariable
      File "/home/ubuntu/gpytorch-master/gpytorch/lazy/toeplitz_lazy_variable.py", line 4, in <module>
        from gpytorch.utils import toeplitz
      File "/home/ubuntu/gpytorch-master/gpytorch/utils/toeplitz.py", line 2, in <module>
        import gpytorch.utils.fft as fft
      File "/home/ubuntu/gpytorch-master/gpytorch/utils/fft.py", line 1, in <module>
        from .. import libfft
      File "/home/ubuntu/gpytorch-master/gpytorch/libfft/__init__.py", line 3, in <module>
        from ._libfft import lib as _lib, ffi as _ffi
    ImportError: No module named 'gpytorch.libfft._libfft'

    opened by chalesguo 22
  • Heteroskedastic likelihoods and log-noise models


    This allows specifying generic (log-)noise models that are used to obtain out-of-sample noise estimates. For example, a GP fit on the (log) measured standard errors of the observed data can be plugged into the GaussianLikelihood and then fit jointly with the GP on the data.

    enhancement WIP 
    opened by Balandat 20
  • Arbitrary number of batch dimensions for LazyTensors


    Major refactors

    • [x] Refactor _get_indices from all LazyTensors
    • [x] Simplify _getitem to handle all cases - including tensor indices
    • [x] Write efficient _getitem for (most) LazyTensors
      • [x] CatLazyTensor
      • [x] BlockDiagLazyTensor
      • [x] ToeplitzLazyTensor
    • [x] Write efficient _get_indices for all LazyTensors
    • [x] Add a custom _expand_batch method for certain LazyTensors
    • [x] Add a custom _unsqueeze_batch method for certain LazyTensors
    • [x] BlockDiagLazyTensor and SumBatchLazyTensor use an explicit batch dimension (rather than implicit one) for the block structure. Also they can sum/block along any batch dimension.
    • [x] Custom _sum_batch and _prod_batch methods
      • [x] NonLazyTensor
      • [x] DiagLazyTensor
      • [x] InterpolatedLazyTensor
      • [x] ZeroLazyTensor

    New features

    • [x] LazyTensors now handle multiple batch dimensions
    • [x] LazyTensors have squeeze and unsqueeze methods
    • [x] Replace sum_batch with sum (can accept arbitrary dimensions)
    • [x] Replace mul_batch with prod (can accept arbitrary dimensions)
    • [x] LazyTensor.mul now expects a tensor of size *constant_size, 1, 1 for constant mul. (More consistent with the Tensor api).
    • [x] Add broadcasting capabilities to remaining LazyTensors

    Tests

    • [x] Add MultiBatch tests for all LazyTensors
    • [x] Add unit tests for BlockDiagLazyTensor and SumBatchLazyTensor using any batch dimension for summing/blocking
    • [x] Add tests for sum and prod methods
    • [x] Add tests for constant mul
    • [x] Add tests for permuting dimensions
    • [x] Add tests for LazyEvaluatedKernelTensor

    Miscellaneous todos (as part of the whole refactoring process)

    • [x] Ensure that InterpolatedLazyTensor.diag didn't become more inefficient
    • [x] Make CatLazyTensor work on batch dimensions
    • [x] Add to LT docs that users might have to overwrite _getitem, _get_indices, _unsqueeze_batch, _expand_batch, and transpose.
    • [x] Fix #573

    Details

    The new __getitem__ method reduces all possible indices to two cases:

    • The row and/or column of the LT is absorbed into one of the batch dimensions (this happens when a batch dimension is tensor indexed and the row/column are as well). This calls the sub-method _get_indices, in which all dimensions are indexed by Tensor indices. The output is a Tensor.
    • Neither the row nor column are absorbed into one of the batch dimensions. In this case, the _getitem sub-method is called, and the resulting output will be an LT with a reduced row and column.
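
    Purely for intuition (this snippet is not from the PR), the two cases mirror how advanced indexing already behaves on plain torch tensors:

    import torch

    A = torch.randn(5, 10, 10)                               # (batch, row, col)
    # Case 1: batch and row are both tensor-indexed -> the row is absorbed into the batch
    out1 = A[torch.tensor([0, 1]), torch.tensor([2, 3]), :]  # shape (2, 10)
    # Case 2: only the batch is tensor-indexed -> the result keeps its row and column dims
    out2 = A[torch.tensor([0, 1]), 2:4, :]                   # shape (2, 2, 10)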

    Closes #369 Closes #490 Closes #533 Closes #532 Closes #573

    enhancement refactor 
    opened by gpleiss 19
  • Add TriangularLazyTensor


    Adds a new TriangularLazyTensor abstraction. This tensor can be upper or lower (default) triangular. This simplifies a bunch of stuff with solves, dets, logprobs etc.

    Some of the changes with larger blast radius in this PR are:

    1. CholLazyTensor now takes in a TriangularLazyTensor
    2. The _cholesky method is expected to return a TriangularLazyTensor
    3. The _cholesky method now takes an upper kwarg (allows to work with both lower and upper variants of TriangularLazyTensor)
    4. DiagLazyTensor is now subclassed from TriangularLazyTensor
    5. The memoization functionality is updated to allow caching results depending on args/kwargs (required for dealing with the upper/lower kwargs). By setting ignore_args=False in the @cached decorator, the existing behavior can be replicated.

    Some improvements:

    1. CholLazyTensor now has more efficient inv_matmul and inv_quad methods that use the factorization of the matrix.
    2. KroneckerProductLazyTensor now returns a Cholesky decomposition that itself uses a Kronecker product representation [previously suggested in #1086]
    3. Added a test_cholesky test to the LazyTensorTestCase (this covers some previously uncovered cases explicitly)
    4. There were a number of hard-to-spot issues due to hacky manual cache handling - I replaced all these call sites with the cache helpers from gpytorch.utils.memoize, which is the correct way to go about this.
    enhancement refactor 
    opened by Balandat 18
  • Replicating results presented in Doubly Stochastic Variational Inference for Deep Gaussian Processes


    Hi, has anybody succeeded in replicating the results of the paper Doubly Stochastic Variational Inference for Deep Gaussian Processes by Salimbeni and Deisenroth in GPyTorch? There is an example DeepGP notebook referring to the paper, but when I tried to run it on the datasets used by the paper, I often observed divergence in the test log-likelihood (this is the example for training on the kin8nm dataset).

    [Plot: test log-likelihood when training on the kin8nm dataset]

    The divergence does not occur every time, but I am not sure what its cause is, and I see no way to control it...

    I am attaching my modified notebook with reading of the datasets, a model without residual connections, batch size and layer dimensions as in the paper. Any idea what is happening here?

    salimbeni_replication_issue.zip

    Thanks, Jan

    bug 
    opened by JanSochman 18
  • [Feature Request] Missing data likelihoods


    🚀 Feature Request

    We'd like to use GPs in settings where some observations may be missing. My understanding is that, in these circumstances, missing observations do not contribute anything to the likelihood of the observation model.

    Initial Attempt

    My initial attempt to write such a likelihood is as follows:

    from gpytorch.likelihoods import GaussianLikelihood
    from torch.distributions import Normal
    
    class GaussianLikelihoodWithMissingObs(GaussianLikelihood):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)
    
        @staticmethod
        def _get_masked_obs(x):
            missing_idx = x.isnan()
            x_masked = x.masked_fill(missing_idx, -999.)
            return missing_idx, x_masked
    
        def expected_log_prob(self, target, input, *params, **kwargs):
            missing_idx, target = self._get_masked_obs(target)
            res = super().expected_log_prob(target, input, *params, **kwargs)
            return res * ~missing_idx
    
        def log_marginal(self, observations, function_dist, *params, **kwargs):
            missing_idx, observations = self._get_masked_obs(observations)
            res = super().log_marginal(observations, function_dist, *params, **kwargs)
            return res * ~missing_idx
    

    Test

    import torch
    import numpy as np
    from tqdm import trange
    from gpytorch.distributions import MultivariateNormal
    from gpytorch.constraints import Interval
    torch.manual_seed(42)
    
    mu = torch.zeros(2, 3)
    sigma = torch.tensor([[
            [ 1,  1-1e-7, -1+1e-7],
            [ 1-1e-7,  1, -1+1e-7],
            [-1+1e-7, -1+1e-7,  1] ]]*2).float()
    mvn = MultivariateNormal(mu, sigma)
    x = mvn.sample_n(10000)
    # x[np.random.binomial(1, 0.1, size=x.shape).astype(bool)] = np.nan
    x += np.random.normal(0, 0.5, size=x.shape)
    
    LikelihoodOfChoice = GaussianLikelihood#WithMissingObs
    likelihood = LikelihoodOfChoice(noise_constraint=Interval(1e-6, 2))
    
    opt = torch.optim.Adam(likelihood.parameters(), lr=0.5)
    
    bar = trange(1000)
    for _ in bar:
        opt.zero_grad()
        loss = -likelihood.log_marginal(x, mvn).sum()
        loss.backward()
        opt.step()
        bar.set_description("nll: " + str(int(loss.data)))
    print(likelihood.noise.sqrt()) # Test 1
    
    likelihood.expected_log_prob(x[0], mvn) == likelihood.log_marginal(x[0], mvn) # Test 2
    

    Test 1 outputs the correct 0.5 as expected, and Test 2 is False with both LikelihoodOfChoice = GaussianLikelihood and LikelihoodOfChoice = GaussianLikelihoodWithMissingObs.

    Any further tests and suggestions are appreciated. Can I open a PR for this?

    enhancement 
    opened by InfProbSciX 17
  • [Docs] Pointer to get started with (bayesian) GPLVM


    I am in the process of exploring gpytorch for some of my GP applications. Currently I use pyro for GPLVM tasks (i.e. https://pyro.ai/examples/gplvm.html). I am always interested in trying out various approaches, so I would like to see how I can do similar things in gpytorch.

    Specifically, I am interested in the bayesian GPLVM as described in Titsias et al 2010.

    I have found some documentation on handling uncertain inputs, so I am guessing that would be a good place to start, but I would love to hear some thoughts from any of the gpytorch developers.

    documentation 
    opened by holmrenser 17
  • [Bug] Upstream changes to tensor comparisons break things


    🐛 Bug

    After https://github.com/pytorch/pytorch/pull/21113 a bunch of tests are failing b/c of the change in tensor comparison behavior (return type from uint8 to bool). Creating this issue to track the fix.

    bug compatibility 
    opened by Balandat 17
  • Mean and kernel functions for first and second derivatives


    Added new classes to work with derivatives:

    1. Second derivative RBF kernel called RBFKernelGradGrad (without mixed second derivatives)
    2. Three new mean functions: ConstantMeanGradGrad, LinearMeanGrad, and LinearMeanGradGrad
    opened by ankushaggarwal 0
  • Uncertainty estimation for multiclass task with customized kernel


    Hi, thanks for the contribution on this awesome library,

    I ran into an issue with uncertainty quantification (UQ) for a multiclass (n_class = 3) task using the customized Tanimoto kernel shown in this issue: https://github.com/cornellius-gp/gpytorch/issues/1986.

    I can train a DirichletGPModel with this kernel, and variance estimation works with the RBF kernel as shown in the demo notebook. However, with this kernel I can only compute the logits for each label and fail to estimate the variance. Here is the kernel implementation and my code:

    class TanimotoKernel(Kernel):
        has_lengthscale = True
    
        def forward(self, x1, x2, diag=False, **params):
            cross_product = (x1.unsqueeze(-2) * x2.unsqueeze(-3)).sum(-1)
            x1_self = x1.pow(2.0).sum(-1)
            x2_self = x2.pow(2.0).sum(-1)
            
            numerator = self.lengthscale * cross_product
            denominator = x1_self.unsqueeze(-1) + x2_self.unsqueeze(-2) - cross_product 
            # probably want to do something smarter than just adding 1e-6 here to prevent roundoff errors
            return numerator / (denominator + 1e-6)
    
    # We will use the simplest form of GP model, exact inference
    class DirichletGPModel(ExactGP):
        def __init__(self, train_x, train_y, likelihood, num_classes):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = ConstantMean(batch_shape=torch.Size((num_classes,)))
            self.covar_module = ScaleKernel(
                TanimotoKernel(batch_shape=torch.Size((num_classes,))),
                batch_shape=torch.Size((num_classes,)),
            )
        
        def forward(self, x):
            mean_x = self.mean_module(x)
            covar_x = self.covar_module(x)
            return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
    
    # initialize likelihood and model
    # we let the DirichletClassificationLikelihood compute the targets for us
    likelihood = DirichletClassificationLikelihood(y_train, learn_additional_noise=True).cuda()
    model = DirichletGPModel(x_train, likelihood.transformed_targets, likelihood, num_classes=likelihood.num_classes).cuda()
    
    training_iter = 200
    
    
    # Find optimal model hyperparameters
    model.train()
    likelihood.train()
    
    # Use the adam optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=0.05)  # Includes GaussianLikelihood parameters
    
    # "Loss" for GPs - the marginal log likelihood
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    
    for i in range(training_iter):
        # Zero gradients from previous iteration
        optimizer.zero_grad()
        # Output from model
        output = model(x_train)
        # Calc loss and backprop gradients
        loss = -mll(output, likelihood.transformed_targets).sum()
        loss.backward()
        if i % 5 == 0:
            print('Iter %d/%d - Loss: %.3f noise: %.3f' % (
                i + 1, training_iter, loss.item(),
                model.likelihood.second_noise_covar.noise.mean().item()
            ))
        optimizer.step()
    
    model.eval()
    likelihood.eval()
    
    with gpytorch.settings.fast_pred_var(), torch.no_grad():
        f_pred = model(x_test)
        y_pred = likelihood(f_pred, target=y_train)
    
    
    

    The error I get is:

    "RuntimeError: The size of tensor a (119) must match the size of tensor b (3) at non-singleton dimension 1", where 119 is my test set sample size.

        245 @property
        246 def variance(self):
        247     if self.islazy:
        248         # overwrite this since torch MVN uses unbroadcasted_scale_tril for this
    --> 249         diag = self.lazy_covariance_matrix.diagonal(dim1=-1, dim2=-2)
        250         diag = diag.view(diag.shape[:-1] + self._event_shape)
        251         variance = diag.expand(self._batch_shape + self._event_shape)
    

    It looks like the bug comes from the kernel implementation. Am I correct, and how could I fix this issue?
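
    One plausible cause (a hedged guess, not a confirmed diagnosis): GPyTorch kernels receive a diag keyword argument in forward, and the variance computation in the traceback asks the kernel for just its diagonal; a forward that ignores diag=True returns a full matrix where a vector of diagonal entries is expected. A sketch of the same kernel with diag handled (the class name TanimotoKernelWithDiag is purely illustrative):

    from gpytorch.kernels import Kernel

    class TanimotoKernelWithDiag(Kernel):
        has_lengthscale = True

        def forward(self, x1, x2, diag=False, **params):
            cross_product = (x1.unsqueeze(-2) * x2.unsqueeze(-3)).sum(-1)
            x1_self = x1.pow(2.0).sum(-1)
            x2_self = x2.pow(2.0).sum(-1)
            numerator = self.lengthscale * cross_product
            denominator = x1_self.unsqueeze(-1) + x2_self.unsqueeze(-2) - cross_product
            res = numerator / (denominator + 1e-6)
            if diag:
                # return only the diagonal entries: shape (..., n) rather than (..., n, n)
                return res.diagonal(dim1=-2, dim2=-1)
            return res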

    Many thanks.

    opened by peiyaoli 0
  • [Bug] torch.float64 raises error in `GridInterpolationKernel`


    🐛 Bug

    I want to sample from the prior distribution with precision torch.float64. However, during sampling with KISS-GP a dtype error is raised if I manually design a kernel (very similar to the RBF kernel) that is included in the GridInterpolationKernel.

    If I change the test data size from 2500x2 to 100x2 (i.e. the 10x10 grid below), no error occurs.

    x = torch.meshgrid(
        torch.linspace(0, 10 - 1, 10) * 1.,
        torch.linspace(0, 10 - 1, 10) * 1.,
        indexing="xy",
    )
    x = torch.cat(
        (
            x[0].contiguous().view(x[0].numel(), 1),
            x[1].contiguous().view(x[1].numel(), 1),
        ),
        dim=1,
    )
    

    To reproduce

    import torch
    import gpytorch
    
    
    torch.set_default_dtype(torch.float64)
    
    
    def postprocess_rot(dist_mat):
        return dist_mat.mul_(-1.0).exp_()
    
    class TestKernel(gpytorch.kernels.Kernel):
    
        is_stationary = True
    
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
    
        def forward(self, x1, x2, **params):
            x1_ = x1.div_(torch.tensor([10., 1.]))
            x2_ = x2.div_(torch.tensor([10., 1.]))
            return self.covar_dist(
                x1_, x2_, square_dist=False, dist_postprocess_func=postprocess_rot, **params
            )
    
    class ExactGP(gpytorch.models.ExactGP):
    
        def __init__(self, **kwargs):
            super().__init__(None, None, gpytorch.likelihoods.GaussianLikelihood())
            self.mean_module = gpytorch.means.ZeroMean()
            self.covar_module = gpytorch.kernels.GridInterpolationKernel(
                TestKernel(ard_num_dims=2, **kwargs),
                grid_size=100,
                num_dims=2
                )
    
        def forward(self, x):
            mean_x = self.mean_module(x)
            covar_x = self.covar_module(x)
            return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
    
    
    x = torch.meshgrid(
        torch.linspace(0, 50 - 1, 50) * 1.,
        torch.linspace(0, 50 - 1, 50) * 1.,
        indexing="xy",
    )
    x = torch.cat(
        (
            x[0].contiguous().view(x[0].numel(), 1),
            x[1].contiguous().view(x[1].numel(), 1),
        ),
        dim=1,
    )
    
    model = ExactGP()
    model.eval()
    
    with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(100):
        with gpytorch.settings.fast_pred_samples():
            samples = model(x).rsample(torch.Size([1]))
    
    

    Error message

    expected scalar type Double but found Float
    

    System information

    torch=1.13.0 gpytorch=1.9.0

    bug 
    opened by anjawa 0
  • pass **kwargs to ApproximateGP.__call__ in DeepGPLayer


    Hi. Currently, when passing user arguments via **kwargs to a Deep GP layer, these arguments are not passed through to the user-defined GP model. The root of the problem seems to be that DeepGPLayer doesn't call ApproximateGP.__call__ with the **kwargs arguments. This PR fixes the issue.

    opened by IdanAchituve 0
  • [Docs]


    📚 Documentation/Examples

    I think the docs for deep multi-output regression are wrong: https://docs.gpytorch.ai/en/stable/examples/05_Deep_Gaussian_Processes/DGP_Multitask_Regression.html

    This example uses only a scaled RBF kernel (not a multi-output kernel) and a MultivariateNormal distribution, not a MultitaskMultivariateNormal. There are also differences between the code and the supporting text (which says a MultitaskMultivariateNormal should be used). I can provide a fix if requested.

    documentation 
    opened by max-gains 1
Releases(v1.9.0)
  • v1.9.0(Aug 30, 2022)

    Starting with this release, the LazyTensor functionality of GPyTorch has been pulled out into its own separate Python package, called linear_operator. Most users won't notice the difference (at the moment), but power users will notice a few changes.

    If you have your own custom LazyTensor code, don't worry: this release is backwards compatible! However, you'll see a lot of annoying deprecation warnings 😄

    LazyTensor -> LinearOperator

    • All gpytorch.lazy.*LazyTensor classes now live in the linear_operator repo, and are now called linear_operator.operators.*LinearOperator.
      • For example, gpytorch.lazy.DiagLazyTensor is now linear_operator.operators.DiagLinearOperator (see the short import sketch after this list)
      • The only major naming change: NonLazyTensor is now DenseLinearOperator
    • gpytorch.lazify and gpytorch.delazify are now linear_operator.to_linear_operator and linear_operator.to_dense, respectively.
    • The _quad_form_derivative method has been renamed to _bilinear_derivative (a more accurate name!)
    • LinearOperator method names now reflect their corresponding PyTorch names. This includes:
      • add_diag -> add_diagonal
      • diag -> diagonal
      • inv_matmul -> solve
      • symeig -> eigh and eigvalsh
    • LinearOperator now has the mT property
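
    To make the class renaming concrete, here is a small illustrative sketch (assuming both GPyTorch 1.9 and the new linear_operator package are installed; the old import path still works in 1.9 but emits deprecation warnings):

    import torch
    # GPyTorch <= 1.8 style (still importable in 1.9, with a deprecation warning):
    from gpytorch.lazy import DiagLazyTensor
    old_op = DiagLazyTensor(torch.ones(5))
    # GPyTorch >= 1.9 / linear_operator style:
    from linear_operator.operators import DiagLinearOperator
    new_op = DiagLinearOperator(torch.ones(5))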

    __torch_function__ functionality

    LinearOperators are now compatible with the torch api! For example, the following code works:

    import torch
    import linear_operator.operators
    diag_linear_op = linear_operator.operators.DiagLinearOperator(torch.randn(10))
    torch.matmul(diag_linear_op, torch.randn(10, 2))  # returns a torch.Tensor!
    

    Other files that have moved:

    • gpytorch.functions - all of the core functions used by LazyTensors now live in the LinearOperator repo. This includes: diagonalization, dsmm, inv_quad, inv_quad_logdet, matmul, pivoted_cholesky, root_decomposition, solve (formerly inv_matmul), and sqrt_inv_matmul
    • gpytorch.utils - a few have moved to the LinearOperator repo. This includes: broadcasting, cholesky, contour_integral_quad, getitem, interpolation, lanczos, linear_cg, minres, permutation, stable_pinverse, qr, sparse, StochasticLQ, and toeplitz.

    Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.8.1...v1.9.0

  • v1.8.1(Aug 8, 2022)

    Bug fixes

    • MultitaskMultivariateNormal: fix tensor reshape issue by @adamjstewart in https://github.com/cornellius-gp/gpytorch/pull/2081
    • Fix handling of prior terms in ExactMarginalLogLikelihood by @saitcakmak in https://github.com/cornellius-gp/gpytorch/pull/2039
    • Fix bug in preconditioned KISS-GP / Hadamard Multitask GPs by @gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2090
    • Add constant_constraint to ConstantMean by @gpleiss in https://github.com/cornellius-gp/gpytorch/pull/2082

    New Contributors

    • @mone27 made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2076

    Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.8.0...v1.8.1

  • v1.8.0(Jul 19, 2022)

    Major Features

    • add variational nearest neighbor GP by @LuhuanWu in https://github.com/cornellius-gp/gpytorch/pull/2026

    New Contributors

    • @adamjstewart made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2061
    • @m-julian made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2054
    • @ngam made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2059
    • @LuhuanWu made their first contribution in https://github.com/cornellius-gp/gpytorch/pull/2026

    Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.7.0...v1.8.0

  • v1.7.0(Jun 27, 2022)

    Important: This release requires Python 3.7 (up from 3.6) and PyTorch 1.10 (up from 1.9)

    New Features

    • gpytorch.metrics module offers easy-to-use metrics for GP performance (#1870). This includes:
      • gpytorch.metrics.mean_absolute_error
      • gpytorch.metrics.mean_squared_error
      • gpytorch.metrics.mean_standardized_log_loss
      • gpytorch.metrics.negative_log_predictive_density
      • gpytorch.metrics.quantile_coverage_error
    • Large scale inference (using matrix-multiplication techniques) now implements the variance reduction scheme described in Wenger et al., ICML 2022. (#1836)
      • This makes it possible to use LBFGS, or other line search based optimization techniques, with large scale (exact) GP hyperparameter optimization.
    • Variational GP models support online updates (i.e. “fantasizing” new models) (#1874)
    • Improvements to gpytorch.priors
      • New HalfCauchyPrior (#1961)
      • LKJPrior now supports sampling (#1737)

    Minor Features

    • Add LeaveOneOutPseudoLikelihood for hyperparameter optimization (#1989)
    • The PeriodicKernel now supports ARD lengthscales/periods (#1919)
    • LazyTensors (A) can now be matrix multiplied with tensors (B) from the left hand side (i.e. B x A) (#1932)
    • Maximum Cholesky retries can be controlled through a setting (#1861)
    • Kernels, means, and likelihoods can be pickled (#1876)
    • Minimum variance for FixedNoiseGaussianLikelihood can be set with a context manager (#2009)

    Bug Fixes

    • Fix backpropagation issues with KeOps kernels (#1904)
    • Fix broadcasting issues with lazily evaluated kernels (#1971)
    • Fix batching issues with PolynomialKernel (#1977)
    • Fix issues with PeriodicKernel.diag() (#1919)
    • Add more informative error message when train targets and the train prior distribution mismatch (#1905)
    • Fix issues with priors on ConstantMean (#2042)
  • v1.6.0(Dec 4, 2021)

    This release contains several bug fixes and performance improvements.

    New Features

    • Variational multitask models can output a single task per input (rather than all tasks per input) (#1769)

    Small fixes

    • LazyTensor#to method more closely matches the torch Tensor API (#1746)
    • Add type hints and exceptions to kernels to improve usability (#1802)

    Performance

    • Improve the speed of fantasy models (#1752)
    • Improve the speed of solves and log determinants with KroneckerProductLazyTensor (#1786)
    • Prevent explicit kernel evaluation when expanding a LazyTensor kernel (#1813)

    Fixes

    • Fix indexing bugs with kernels (#1802, #1819, #1828)
    • Fix cholesky bugs on CUDA (#1848)
    • Remove lines of code that generate warnings in PyTorch 1.9 (#1835)
  • v1.5.1(Sep 2, 2021)

    New features

    • Add gpytorch.kernels.PiecewisePolynomialKernel (#1738)
    • Include ability to turn off diagonal correction for SGPR models (#1717)
    • Include ability to cast LazyTensor to half and float types (#1726)

    Performance improvements

    • Specialty MVN log_prob method for Gaussians with sum-of-Kronecker covariances (#1674)
    • Ability to specify devices when concatenating rows of LazyTensors (#1712)
    • Improvements to LazyTensor symeig method (#1725)

    Bug fixes

    • Fix to computing batch sizes of kernels (#1685)
    • Fix SGPR prediction when fast_computations flags are turned off (#1709)
    • Improve stability of stable_qr function (#1714)
    • Fix bugs with pyro integration for full Bayesian inference (#1721)
    • num_classes in gpytorch.likelihoods.DirichletLikelihood should be an integer (#1728)
  • v1.5.0(Jun 24, 2021)

    This release adds 2 new model classes, as well as a number of bug fixes:

    • GPLVM models for unsupervised learning
    • Polya-Gamma GPs for GP classification

    In addition, this release contains numerous improvements to SGPR models (that have also been included in prior bug-fix releases).

    New features

    • Add example notebook that demos binary classification with Polya-Gamma augmentation (#1523)
    • New model class: Bayesian GPLVM with Stochastic Variational Inference (#1605)
    • Periodic kernel handles multi-dimensional inputs (#1593)
    • Add missing data gaussian likelihoods (#1668)

    Performance

    • Speed up SGPR models (#1517, #1528, #1670)

    Fixes

    • Fix erroneous loss for ExactGP multitask models (#1647)
    • Fix pyro sampling (#1594)
    • Fix initialize bug for additive kernels (#1635)
    • Fix matrix multiplication of rectangular ZeroLazyTensor (#1295)
    • Dirichlet GPs use true train targets not labels (#1641)
  • v1.4.2(May 18, 2021)

    Various bug fixes, including

    • Use current PyTorch functionality (#1611, #1586)
    • Bug fixes to Lanczos factorization (#1607)
    • Fixes to SGPR model (#1607)
    • Various fixes to LazyTensor math (#1576, #1584)
    • SmoothedBoxPrior has a sample method (#1546)
    • Fixes to additive-structure models (#1582)
    • Doc fixes (#1603)
    • Fix to index kernel and LCM kernels (#1608, #1592)
    • Fixes to KeOps bypass (#1609)
  • v1.4.1(Apr 15, 2021)

    Fixes

    • Simplify interface for 3+ layer DSPP models (#1565)
    • Fix marginal log likelihood calculation for exact Bayesian inference w/ Pyro (#1571)
    • Remove CG warning for small matrices (#1562)
    • Fix Pyro cluster-multitask example notebook (#1550)
    • Fix gradients for KeOps tensors (#1543)
    • Ensure that gradients are passed through lazily-evaluated kernels (#1518)
    • Fix bugs for models with batched fantasy observations (#1529, #1499)
    • Correct default latent_dim value for LMC variational models (#1512)

    New features

    • Create gpytorch.utils.grid.ScaleToBounds utility to replace gpytorch.utils.grid.scale_to_bounds method (#1566)
    • Fix skip connections in Deep GP example (#1531)
    • Add fantasy point support for structured kernel interpolation models (#1545)

    Documentation

    • Add default values to all gpytorch.settings (#1564)
    • Improve Hadamard multitask notebook (#1537)

    Performance

    • Speed up SGPR models (#1517, #1528)
  • v1.4.0(Feb 23, 2021)

    This release includes many major speed improvements, especially to Kronecker-factorized multi-output models.

    Performance improvements

    • Major speed improvements for Kronecker product multitask models (#1355, #1430, #1440, #1469, #1477)
    • Unwhitened VI speed improvements (#1487)
    • SGPR speed improvements (#1493)
    • Large scale exact GP speed improvements (#1495)
    • Random Fourier feature speed improvements (#1446, #1493)

    New Features

    • Dirichlet Classification likelihood (#1484) - based on Milios et al. (NeurIPS 2018)
    • MultivariateNormal objects have a base_sample_shape attribute for low-rank/degenerate distributions (#1502)

    New documentation

    • Tutorial for designing your own kernels (#1421)

    Debugging utilities

    • Better naming conventions for AdditiveKernel and ProductKernel (#1488)
    • gpytorch.settings.verbose_linalg context manager for seeing which linalg routines are run (#1489); see the sketch after this list
    • Unit test improvements (#1430, #1437)
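    A minimal sketch of the verbose_linalg context manager (the lazify helper is used purely for illustration):

    ```python
    import torch
    import gpytorch

    # Inside the context, GPyTorch logs which linear-algebra routines
    # (Cholesky, CG, Lanczos, ...) each operation dispatches to.
    mat = torch.randn(50, 50)
    psd = gpytorch.lazy.lazify(mat @ mat.t() + torch.eye(50))

    with gpytorch.settings.verbose_linalg():
        psd.cholesky()
    ```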

    Bug Fixes

    • inverse_transform is applied to the initial values of constraints (#1482)
    • psd_safe_cholesky obeys cholesky_jitter settings (#1476)
    • Fix scaling issue with priors on variational models (#1485)

    Breaking changes

    • MultitaskGaussianLikelihoodKronecker (deprecated) is fully incorporated in MultitaskGaussianLikelihood (#1471)
  • v1.3.1(Jan 19, 2021)

    Fixes

    • Spectral mixture kernels work with SKI (#1392)
    • Natural gradient descent is compatible with batch-mode GPs (#1416)
    • Fix prior mean in whitened SVGP (#1427)
    • RBFKernelGrad has no more in-place operations (#1389)
    • Fixes to ConstantDiagLazyTensor (#1381, #1385)

    Documentation

    • Include example notebook for multitask Deep GPs (#1410)
    • Documentation updates (#1408, #1434, #1385, #1393)

    Performance

    • KroneckerProductLazyTensors use root decompositions of children (#1394)
    • SGPR now uses Woodbury formula and matrix determinant lemma (#1356)

    Other

    • Delta distributions have an arg_constraints attribute (#1422)
    • Cholesky factorization now takes optional diagonal noise argument (#1377)
  • v1.3.0(Nov 30, 2020)

    This release primarily focuses on performance improvements, and adds contour integral quadrature based variational models.

    Major Features

    Variational models with contour integral quadrature

    Minor Features

    Performance improvements

    • Kronecker product models compute a deterministic logdet (faster than the Lanczos-based logdet) (#1332)
    • Improve efficiency of KroneckerProductLazyTensor symeig method (#1338)
    • Improve SGPR efficiency (#1356)

    Other improvements

    • SpectralMixtureKernel accepts arbitrary batch shapes (#1350)
    • Variational models pass around arbitrary **kwargs to the forward method (#1339)
    • gpytorch.settings context managers keep track of their default value (#1347)
    • Kernel objects can be pickle-d (#1336)

    Bug Fixes

    • Fix requires_grad checks in gpytorch.inv_matmul (#1322)
    • Fix reshaping bug for batch independent multi-output GPs (#1368)
    • ZeroMean accepts a batch_shape argument (#1371)
    • Various doc fixes/improvements (#1327, #1343, #1315, #1373)
  • v1.2.1(Oct 26, 2020)

    This release includes the following fixes:

    • Fix caching issues with variational GPs (#1274, #1311)
    • Ensure that constraint bounds are properly cast to floating point types (#1307)
    • Fix bug with broadcasting multitask multivariate normal shapes (#1312)
    • Bypass KeOps for small/rectangular kernels (#1319)
    • Fix issues with eigenvectors=False in LazyTensor#symeig (#1283)
    • Fix issues with fixed-noise LazyTensor preconditioner (#1299)
    • Doc fixes (#1275, #1301)
  • v1.2.0(Aug 30, 2020)

    Major Features

    New variational and approximate models

    This release adds a number of new features for approximate GP models:

    • Linear model of coregionalization for variational multitask GPs (#1180)
    • Deep Sigma Point Process models (#1193)
    • Mean-field decoupled (MFD) models from "Parametric Gaussian Process Regressors" (Jankowiak et al., 2020) (#1179)
    • Implement natural gradient descent (#1258)
    • Additional non-conjugate likelihoods (Beta, StudentT, Laplace) (#1211)

    New kernels

    We have just added a number of new specialty kernels:

    • gpytorch.kernels.GaussianSymmetrizedKLKernel for performing regression with uncertain inputs (#1186)
    • gpytorch.kernels.RFFKernel (random Fourier features kernel) (#1172, #1233)
    • gpytorch.kernels.SpectralDeltaKernel (a parametric kernel for patterns/extrapolation) (#1231)
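    A rough usage sketch of the random Fourier features kernel (the num_samples argument name is an assumption about the 1.2.x constructor):

    ```python
    import torch
    import gpytorch

    # Approximate an RBF covariance with 256 random Fourier features.
    rff_kernel = gpytorch.kernels.RFFKernel(num_samples=256)
    x = torch.randn(50, 3)
    covar = rff_kernel(x).evaluate()  # 50 x 50 approximate covariance matrix
    print(covar.shape)
    ```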

    More scalable sampling

    • Large-scale sampling with contour integral quadrature from Pleiss et al., 2020 (#1194)

    Minor features

    • Ability to set the amount of jitter added when performing Cholesky factorizations (#1136); see the sketch after this list
    • Improve scalability of KroneckerProductLazyTensor (#1199, #1208)
    • Improve speed of preconditioner (#1224)
    • Add symeig and svd methods to LazyTensors (#1105)
    • Add TriangularLazyTensor for Cholesky methods (#1102)
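    A minimal sketch of the new jitter setting (assuming cholesky_jitter accepts a float value, and using the lazify helper for illustration):

    ```python
    import torch
    import gpytorch

    # cholesky_jitter sets the amount of diagonal jitter used when a
    # Cholesky factorization needs stabilizing.
    mat = torch.randn(20, 20)
    psd = gpytorch.lazy.lazify(mat @ mat.t())

    with gpytorch.settings.cholesky_jitter(1e-4):
        chol = psd.cholesky()
    ```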

    Bug fixes

    • Fix initialization code for gpytorch.kernels.SpectralMixtureKernel (#1171)
    • Fix bugs with LazyTensor addition (#1174)
    • Fix issue with loading smoothed box priors (#1195)
    • Throw warning when variances are not positive, check for valid correlation matrices (#1237, #1241, #1245)
    • Fix sampling issues with Pyro integration (#1238)
  • v1.1.1(Apr 24, 2020)

    Major features

    • GPyTorch is compatible with PyTorch 1.5 (latest release)
    • Several bugs with task-independent multitask models are fixed (#1110)
    • Task-dependent multitask models are more batch-mode compatible (#1087, #1089, #1095)

    Minor features

    • gpytorch.priors.MultivariateNormalPrior has an expand method (#1018)
    • Better broadcasting for batched inducing point models (#1047)
    • LazyTensor repeating works with rectangular matrices (#1068)
    • gpytorch.kernels.ScaleKernel inherits the active_dims property from its base kernel (#1072)
    • Fully Bayesian models can be saved (#1076)

    Bug Fixes

    • gpytorch.kernels.PeriodicKernel is batch-mode compatible (#1012)
    • Fix gpytorch.priors.MultivariateNormalPrior expand method (#1018)
    • Fix indexing issues with LazyTensors (#1029)
    • Fix constants with gpytorch.mlls.GammaRobustVariationalELBO (#1038, #1053)
    • Prevent doubly-computing derivatives of kernel inputs (#1042)
    • Fix initialization issues with gpytorch.kernels.SpectralMixtureKernel (#1052)
    • Fix stability of gpytorch.variational.DeltaVariationalStrategy
  • v1.0.0(Dec 20, 2019)

    Major New Features and Improvements

    Each feature in this section comes with a new example notebook and documentation for how to use them -- check the new docs!

    • Added support for deep Gaussian processes (#564).
    • KeOps integration has been added -- with KeOps installed, replace certain gpytorch.kernels.SomeKernel modules with gpytorch.kernels.keops.SomeKernel and run exact GPs on 100000+ data points (#812). A short usage sketch follows this list.
    • Variational inference has undergone significant internal refactoring! All old variational objects should still function, but many are deprecated (#903).
    • Our integration with Pyro has been completely overhauled and is now much improved. For examples of interesting GP + Pyro models, see our new examples (#903).
    • Our example notebooks have been completely reorganized, and our documentation surrounding them has been rewritten to hopefully provide a better tutorial to GPyTorch (#954).
    • Added support for fully Bayesian GP modelling via NUTS (#918).
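    A rough sketch of the KeOps swap described above (assumes the KeOps backend is installed):

    ```python
    import torch
    import gpytorch

    # Drop-in swap: gpytorch.kernels.keops.RBFKernel instead of gpytorch.kernels.RBFKernel.
    covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.RBFKernel())
    x = torch.randn(100000, 3)
    covar = covar_module(x)  # evaluated lazily; large matrix-vector products run through KeOps
    ```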

    Minor New Features and Improvements

    • GridKernel and GridInterpolationKernel now support rectangular grids (#888).
    • Added cylindrical kernel (#577).
    • Added polynomial kernel (#668).
    • Added tutorials on basic usage (hyperparameters, saving/loading, etc) (#685).
    • get_fantasy_model now supports batched models (#693).
    • Added a prior_mode context manager that causes GP models to evaluate in prior mode (#707); a short sketch follows this list.
    • Added linear mean (#676).
    • Added horseshoe prior (#719).
    • Added polynomial kernel with derivatives (#783).
    • Fantasy model computations now use QR for solving least squares problems, improving numerical stability (#790).
    • All legacy functions have been removed, in favor of the new function format in PyTorch (#799).
    • Added Newton Girard kernel (#821).
    • GP predictions now automatically clear caches when backpropagating through them. Previously, if you wanted to train through a GP in eval mode, you had to clear the caches manually by toggling the GP back to train mode and then to eval mode again. This is no longer necessary (#916).
    • Added rational quadratic kernel (#330)
    • Switched to torch.cholesky_solve and torch.logdet now that they support batch mode and backward passes (#880).
    • Better / less redundant parameterization for correlation matrices e.g. in IndexKernel (#912).
    • Kernels now define __getitem__, which allows slicing batch dimensions (#782).
    • Performance improvements in the small data regime, e.g. n < 2000 (#926).
    • Increased the size of kernel matrix for which Cholesky is the default solve strategy to n=800 (#946).
    • Added an option for manually specifying a different preconditioner for AddedDiagLazyTensor (#930).
    • Added pre-commit hooks that enforce code style (#927).
    • Lengthscales have been refactored, and kernels have an is_stationary attribute (#925).
    • All of our example notebooks now get smoke tested by our CI.
    • Added a deterministic_probes setting that causes our MLL computation to be fully deterministic when using CG+Lanczos, which improves L-BFGS convergence (#929).
    • The use of the Woodbury formula for preconditioner computations is now fully replaced by QR, which improves numerical stability (#968).
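    A minimal sketch of the prior_mode context manager mentioned in the list above (the model definition is purely illustrative):

    ```python
    import torch
    import gpytorch

    class TinyGP(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.RBFKernel()

        def forward(self, x):
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x)
            )

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = TinyGP(torch.randn(20, 1), torch.randn(20), likelihood).eval()

    # Inside the context the model returns the GP prior rather than the posterior.
    with gpytorch.settings.prior_mode(True):
        prior_dist = model(torch.randn(5, 1))
    ```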

    Bug fixes

    • Fix a type error when calling backward on gpytorch.functions.logdet (#711).
    • Variational models now properly skip posterior variance calculations if the skip_posterior_variances context is active (#741).
    • Fixed an issue with diag mode for PeriodicKernel (#761).
    • Stability improvements for inv_softplus and inv_sigmoid (#776).
    • Fix incorrect size handling in InterpolatedLazyTensor for rectangular matrices (#906)
    • Fix indexing in IndexKernel for batch mode (#911).
    • Fixed an issue where slicing batch mode lazy covariance matrices resulted in incorrect behavior (#782).
    • Cholesky gives a better error when there are NaNs (#944).
    • Use psd_safe_cholesky in prediction strategies rather than torch.cholesky (#956).
    • An error is now raised if Cholesky is used with KeOps, which is not supported (#959).
    • Fixed a bug where NaNs could occur during interpolation (#971).
    • Fix MLL computation for heteroskedastic noise models (#870).
  • v0.3.6(Oct 13, 2019)

  • v0.3.5(Aug 10, 2019)

    This release addresses breaking changes in the recent PyTorch 1.2 release. Currently, GPyTorch will run on either PyTorch 1.1 or PyTorch 1.2.

    A full list of new features and bug fixes will be coming soon in a GPyTorch 0.4 release.

  • v0.3.4a(Aug 10, 2019)

  • v0.3.0(Apr 15, 2019)

    New Features

    • Implement kernel checkpointing, allowing exact GPs on up to 1M data points with multiple GPUs (#499)
    • GPyTorch now supports hard parameter constraints (e.g. bounds) via the register_constraint method on Module (#596)
    • All GPyTorch objects now support multiple batch dimensions. In addition to training b GPs simultaneously, you can now train a b1 x b2 matrix of GPs simultaneously if you so choose (#492, #589, #627)
    • RBFKernelGrad now supports ARD (#602)
    • FixedNoiseGaussianLikelihood offers a better interface for dealing with known observation noise values. WhiteNoiseKernel is now hard deprecated (#593)
    • InvMatmul, InvQuadLogDet and InvQuad are now twice differentiable (#603)
    • Likelihood has been redesigned. See the new documentation for details if you are creating custom likelihoods (#591)
    • Better support for more flexible Pyro models. You can now define likelihoods of the form p(y|f, z) where f is a GP and z are arbitrary latent variables learned by Pyro (#591).
    • Parameters can now be recursively initialized with full names, e.g. model.initialize(**{"covar_module.base_kernel.lengthscale": 1., "covar_module.outputscale": 1.}) (#484)
    • Added ModelList and LikelihoodList for training multiple GPs when batch mode can't be used -- see example notebooks (#471)
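    A minimal sketch of recursive initialization by full parameter name; since kernels are gpytorch Modules, a bare kernel is used here for brevity:

    ```python
    import gpytorch

    covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
    # Dotted names address parameters of nested sub-modules (#484).
    covar_module.initialize(**{
        "base_kernel.lengthscale": 1.0,
        "outputscale": 1.0,
    })
    print(covar_module.base_kernel.lengthscale, covar_module.outputscale)
    ```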

    Performance and stability improvements

    • CG termination is now more tolerance-based, and will much more rarely terminate without returning good solves. Furthermore, if it ever does, a warning is raised that includes suggested courses of action (#569).
    • In non-ARD mode, RBFKernel and MaternKernel use custom backward implementations for performance (#517)
    • Up to a 3x performance improvement in the regime where the test set is very small (#615)
    • The noise parameter in GaussianLikelihood now has a default lower bound, similar to sklearn (#596)
    • psd_safe_cholesky now adds successively increasing amounts of jitter rather than only once (#610)
    • Variational inference initialization now uses psd_safe_cholesky rather than torch.cholesky to initialize with the prior (#610)
    • The pivoted Cholesky preconditioner now uses a QR decomposition for its solve rather than the Woodbury formula for speed and stability (#617)
    • GPyTorch now uses Cholesky for solves with very small matrices rather than CG, resulting in reduced overhead for that setting (#586)
    • Cholesky can additionally be turned on manually for help debugging (#586)
    • Kernel distance computations now use torch.cdist when on PyTorch 1.1.0 in the non-batch setting (#642)
    • CUDA unit tests now default to using the least used available GPU when run (#515)
    • MultiDeviceKernel is now much faster (#491)

    Bug Fixes

    • Fixed an issue with variational covariances at test time (#638)
    • Fixed an issue where the training covariance wasn't being detached for variance computations, occasionally resulting in backward errors (#566)
    • Fixed an issue where active_dims in kernels was being applied twice (#576)
    • Fixes and stability improvements for MultiDeviceKernel (#560)
    • Fixed an issue where fast_pred_var was failing for single training inputs (#574)
    • Fixed an issue when initializing parameter values with non-tensor values (#630)
    • Fixed an issue with handling the preconditioner log determinant value for MLL computation (#634)
    • Fixed an issue where prior_dist was being cached for VI, which was problematic for pyro models (#599)
    • Fixed a number of issues with LinearKernel, including one where the variance could go negative (#584)
    • Fixed a bug where training inputs couldn't be set with set_train_data if they are currently None (#565)
    • Fixed a number of bugs in MultitaskMultivariateNormal (#545, #553)
    • Fixed an indexing bug in batch_symeig (#547)
    • Fixed an issue where MultitaskMultivariateNormal wasn't interleaving rows correctly (#540)

    Other

    • GPyTorch now fully targets Python 3.6, and we've begun to include static type hints (#581)
    • Parameters in GPyTorch no longer have default singleton batch dimensions. For example, the default shape of lengthscale is now torch.Size([1]) rather than torch.Size([1, 1]) (#605)
    • setup.py now includes optional dependencies, reads requirements from requirements.txt, and does not require torch if pytorch-nightly is installed (#495)
  • v0.2.1(Feb 9, 2019)


    You can install GPyTorch via Anaconda (#463)

    Speed and stability

    • Kernel distances use the JIT for fast computations (#464)
    • LinearCG uses the JIT for fast computations (#464)
    • Improve the stability of computing kernel distances (#455)

    Features

    Variational inference improvements

    • Sped up variational models by batching all matrix solves in one call (#454)
    • Can use the same set of inducing points for batch variational GPs (#445)
    • Whitened variational inference for improved convergence (#493)
    • Variational log likelihoods for BernoulliLikelihood are computed with quadrature (#473)

    Multi-GPU Gaussian processes

    • Can train and test GPs by dividing the kernel onto multiple GPUs (#450)

    GPs with derivatives

    • Can define RBFKernels for observations and their derivatives (#462)

    LazyTensors

    • LazyTensors can broadcast matrix multiplication (#459)
    • Can use @ sign for matrix multiplication with LazyTensors
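    A minimal sketch of @-based matrix multiplication with LazyTensors (the lazify helper is used for illustration and may not have existed under this name in 0.2.1):

    ```python
    import torch
    import gpytorch

    lazy = gpytorch.lazy.lazify(torch.randn(5, 5))
    rhs = torch.randn(5, 3)
    result = lazy @ rhs  # same as lazy.matmul(rhs); returns a regular torch.Tensor
    print(result.shape)
    ```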

    GP-list

    • Convenience methods for training/testing multiple GPs in a list (#471)

    Other

    • Added a gpytorch.settings.fast_computations feature to (optionally) use Cholesky-based inference (#456); see the sketch after this list
    • Distributions define event shapes (#469)
    • Can recursively initialize parameters on GP modules (#484)
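    A rough sketch of turning off the fast (iterative) computations so that GPyTorch falls back to Cholesky-based inference (the keyword names below are assumed from later releases):

    ```python
    import gpytorch

    with gpytorch.settings.fast_computations(
        covar_root_decomposition=False, log_prob=False, solves=False
    ):
        pass  # train / evaluate a model here with Cholesky-based (exact) routines
    ```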

    Bugs

    • Can initialize noise in GaussianLikelihood (#479)
    • Fixed bugs in SGPR kernel (#487)
  • v0.1.1(Jan 2, 2019)


    Features

    • Batch GPs, which previously were a feature, are now well-documented and much more stable (see docs)
    • Can add "fantasy observations" to models.
    • Option for exact marginal log likelihood and sampling computations (this is slower, but potentially useful for debugging) (gpytorch.settings.fast_computations)

    Bug fixes

  • 0.1.0.rc5(Nov 19, 2018)

    Stability of hyperparameters

    • Hyperparameters that are constrained to be positive (e.g. variance, lengthscale, etc.) are now parameterized through the softplus function (log(1 + e^x)) rather than through the log function
    • This dramatically improves the numerical stability and optimization of hyperparameters
    • Old models that were trained with log parameters will still work, but this is deprecated.
    • Inference now handles certain numerical floating point round-off errors more gracefully.
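    For concreteness, the softplus mapping used for positive hyperparameters:

    ```python
    import torch

    # The raw (unconstrained) parameter is mapped to a positive value via
    # softplus(x) = log(1 + exp(x)), instead of exponentiating a log-parameter.
    raw_lengthscale = torch.tensor(0.0)
    lengthscale = torch.nn.functional.softplus(raw_lengthscale)  # log(2) ≈ 0.6931
    ```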

    Various stability improvements to variational inference

    Other changes

    • GridKernel can be used for data that lies on a perfect grid.
    • New preconditioner for LazyTensors.
    • Use batched Cholesky functions for improved performance (requires updating PyTorch)
  • 0.1.0.rc4(Nov 8, 2018)

    New features

    • Implement diagonal correction for basic variational inference, improving predictive variance estimates. This is on by default.
    • LazyTensor._quad_form_derivative now has a default implementation! While custom implementations are likely to still be faster in many cases, this means that it is no longer required to implement a custom _quad_form_derivative when implementing a new LazyTensor subclass.

    Bug fixes

    • Fix a number of critical bugs for the new variational inference.
    • Do some hyperparameter tuning for the SV-DKL example notebook, and include fancier NN features like batch normalization.
    • Made it more likely that operations internally preserve the ability to perform preconditioning for linear solves and log determinants. This may have a positive impact on model performance in some cases.
  • 0.1.0.rc3(Oct 29, 2018)

    Variational inference has been refactored

    • Easier to experiment with different variational approximations
    • Massive performance improvement for SV-DKL

    Experimental Pyro integration for variational inference

    Lots of tiny bug fixes

    (Too many to name, but everything should be better 😬)

  • 0.1.0.rc2(Oct 29, 2018)

  • 0.1.0.rc1(Oct 2, 2018)

    Beta release

    GPyTorch is now available on pip! pip install gpytorch.

    Important! This release requires the preview build of PyTorch (>= 1.0). You should either build from source or install pytorch-nightly. See the PyTorch docs for specific installation instructions.

    If you were previously using GPyTorch, see the migration guide to help you move over.

    What's new

    • Batch mode: it is possible to train multiple GPs simultaneously
    • Improved multitask models

    Breaking changes

    • gpytorch.random_variables have been replaced by gpytorch.distributions. These build upon PyTorch distributions.
      • gpytorch.random_variables.GaussianRandomVariable -> gpytorch.distributions.MultivariateNormal.
      • gpytorch.random_variables.MultitaskGaussianRandomVariable -> gpytorch.distributions.MultitaskMultivariateNormal.
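    A minimal sketch of the replacement API:

    ```python
    import torch
    import gpytorch

    # MultivariateNormal builds on torch.distributions and replaces
    # gpytorch.random_variables.GaussianRandomVariable.
    mvn = gpytorch.distributions.MultivariateNormal(torch.zeros(3), torch.eye(3))
    sample = mvn.rsample()  # reparameterized sample, shape (3,)
    ```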

    Utilities

    • gpytorch.utils.scale_to_bounds is now gpytorch.utils.grid.scale_to_bounds

    Kernels

    • GridInterpolationKernel, GridKernel, InducingPointKernel - the attribute base_kernel_module has become base_kernel (for consistency)
    • AdditiveGridInterpolationKernel no longer exists. Now use `AdditiveStructureKernel(GridInterpolationKernel(...))`.
    • MultiplicativeGridInterpolationKernel no longer exists. Now use `ProductStructureKernel(GridInterpolationKernel(...))`.
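    A rough sketch of the new compositions (the grid_size and num_dims values are illustrative):

    ```python
    import gpytorch

    base_kernel = gpytorch.kernels.RBFKernel()

    # Former AdditiveGridInterpolationKernel:
    additive = gpytorch.kernels.AdditiveStructureKernel(
        gpytorch.kernels.GridInterpolationKernel(base_kernel, grid_size=100, num_dims=1),
        num_dims=2,
    )

    # Former MultiplicativeGridInterpolationKernel:
    product = gpytorch.kernels.ProductStructureKernel(
        gpytorch.kernels.GridInterpolationKernel(base_kernel, grid_size=100, num_dims=1),
        num_dims=2,
    )
    ```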

    Attributes (n_* -> num_*)

    • IndexKernel: n_tasks -> num_tasks
    • LCMKernel: n_tasks -> num_tasks
    • MultitaskKernel: n_tasks -> num_tasks
    • MultitaskGaussianLikelihood: n_tasks -> num_tasks
    • SoftmaxLikelihood: n_features -> num_features
    • MultitaskMean: n_tasks -> num_tasks
    • VariationalMarginalLogLikelihood: n_data -> num_data
    • SpectralMixtureKernel: n_dimensions -> ard_num_dims, n_mixtures -> num_mixtures
  • alpha(Oct 2, 2018)

    Alpha release

    We strongly encourage you to check out our beta release for lots of improvements! However, if you still need an old version, or need to use PyTorch 0.4, you can install this release.
