Overview

NeuralCompression

About

NeuralCompression is a Python repository dedicated to research on neural networks that compress data. The repository includes tools such as JAX-based entropy coders, image compression models, video compression models, and metrics for image and video evaluation.

NeuralCompression is alpha software. The project is under active development. The API will change as we make releases, potentially breaking backwards compatibility.

Installation

NeuralCompression is a project currently under development. You can install the repository in development mode.

PyPI Installation

First, install PyTorch according to the directions from the PyTorch website. Then, you should be able to run

pip install neuralcompression

to get the latest version from PyPI.
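
Once installed, you can sanity-check the package. A minimal sketch (version metadata is read via importlib.metadata, per the v0.2.1 release notes below, so Python >= 3.8 is required):

    import importlib.metadata

    import neuralcompression  # noqa: F401

    print(importlib.metadata.version("neuralcompression"))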

Development Installation

First, clone the repository and navigate to the NeuralCompression root directory. To match your local environment to the test environment, run

pip install -r dev-requirements.txt

Then, you can install the package in development mode by running

pip install -e .

If you are not interested in matching the test environment, you only need the second step to install.

Repository Structure

We use a 2-tier repository structure. The neuralcompression package contains a core set of tools for doing neural compression research. Code committed to the core package requires stricter linting, high code quality, and rigorous review. The projects folder contains code for reproducing papers and training baselines. Code in this folder is not linted as aggressively, type annotations are not enforced, and it's okay to omit unit tests.

The 2-tier structure enables rapid iteration and reproduction via code in projects that is built on a backbone of high-quality code in neuralcompression.

neuralcompression

  • neuralcompression - base package
    • data - PyTorch data loaders for various data sets
    • entropy_coders - lossless compression algorithms in JAX
      • craystack - an implementation of the rANS algorithm with the craystack API
    • functional - methods for image warping, information cost, etc.
    • layers - building blocks for compression models
    • metrics - torchmetrics classes for assessing model performance
    • models - complete compression models
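
Each subpackage maps directly to an import path. As a rough illustration (the class name and constructor below are taken from the examples and release notes later in this page):

    import torch

    from neuralcompression.layers import GeneralizedDivisiveNormalization

    x = torch.rand((1, 32, 16, 16))
    gdn = GeneralizedDivisiveNormalization(32)  # a building block from layers
    y = gdn(x)  # same shape as x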

projects

Getting Started

For an example of package usage, see the Scale Hyperprior project, which shows how to train an image compression model in PyTorch Lightning. See DVC for a video compression example.
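
For orientation, here is a heavily simplified sketch of what such a training setup might look like. The class names come from the v0.2.0 release notes below; the constructor arguments, model outputs, and loss signature are assumptions, not the project's documented API:

    import pytorch_lightning as pl
    import torch

    from neuralcompression.layers import RateMSEDistortionLoss
    from neuralcompression.models import ScaleHyperpriorAutoencoder


    class ImageCompressionModule(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.model = ScaleHyperpriorAutoencoder()  # constructor arguments assumed
            self.loss = RateMSEDistortionLoss()

        def training_step(self, batch, batch_idx):
            # assumed outputs: a reconstruction and the latent likelihoods
            reconstruction, likelihoods = self.model(batch)
            return self.loss(reconstruction, likelihoods, batch)

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-4)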

Contributions

Please read our CONTRIBUTING guide and our CODE_OF_CONDUCT prior to submitting a pull request.

We test all pull requests and rely on these tests during review, so please make sure any new code is tested. Tests for neuralcompression go in the tests folder in the root of the repository. Tests for individual projects go in those projects' own tests folders.

We use black for formatting, isort for import sorting, flake8 for linting, and mypy for type checking. We enforce these on the neuralcompression package, but not in the projects folder.

License

NeuralCompression is MIT licensed, as found in the LICENSE file.

Cite

If you find NeuralCompression useful in your work, feel free to cite

@misc{muckley2021neuralcompression,
    author={Matthew Muckley and Jordan Juravsky and Daniel Severo and Mannat Singh and Quentin Duval and Karen Ullrich},
    title={NeuralCompression},
    howpublished={\url{https://github.com/facebookresearch/NeuralCompression}},
    year={2021}
}
Comments
  • Dependency and Build Fixes

    This is a potpourri of fixes.

    Dependencies: Currently, we require very specific package versions for any install. This is too restrictive for a research package, where researchers may have a variety of reasons to carefully tailor their environments. This PR resolves the issue by allowing a general installation with flexible package versions. To maintain CI stability and functionality, the fixed version requirements have been moved to the dev requirements.

    Build: We now build our C++ extension using torch.utils.cpp_extension.load instead of setup.py. This removes PyTorch as an install-time dependency and removes the need for us to distribute binaries. It required adding ninja as a dependency; hopefully we can get rid of it eventually, but for now this should fix distribution.

    CI testing: isort was misconfigured and missed a lot of import changes. I fixed the config and ran it on all files to update import sorting.

    Other: While implementing this I also found and resolved a few other minor issues with versioning, tests, and formatting.

    This PR makes NeuralCompression require Python 3.8 due to its use of importlib.metadata.

    Testing: done via CI.

    CLA Signed 
    opened by mmuckley 4
  • Documentation builds on ReadTheDocs are failing

    Bug

    Documentation builds on ReadTheDocs are failing.

    Steps

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    Expected behavior

    Docs should build without error.

    Environment

    ReadTheDocs

    Context

    See here: https://readthedocs.org/projects/neuralcompression/builds/15172473/

    bug 
    opened by mmuckley 4
  • `ContinuousEntropy` layer

    Abstract base class (ABC) for implementing continuous entropy layers.

    The abstract class pre-computes integer probability tables based on a prior distribution, which can be used across different platforms by a range encoder and decoder. The class also provides abstract methods for compression, decompression, quantization, and reconstruction.
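
    A rough skeleton of the described interface (method names from the summary above; the signatures are assumptions, not the merged API):

    from abc import ABC, abstractmethod

    from torch import Tensor


    class ContinuousEntropySketch(ABC):
        """Illustrative only: a prior distribution is turned into integer
        probability tables at construction time; subclasses supply the
        lossy and lossless steps."""

        @abstractmethod
        def quantize(self, bottleneck: Tensor) -> Tensor:
            ...

        @abstractmethod
        def reconstruct(self, quantized: Tensor) -> Tensor:
            ...

        @abstractmethod
        def compress(self, bottleneck: Tensor) -> bytes:
            ...

        @abstractmethod
        def decompress(self, compressed: bytes) -> Tensor:
            ...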

    enhancement CLA Signed 
    opened by 0x00b1 4
  • `NonNegativeParameterization` layer

    closes #86

    Non-negative parameterization as required by generalized divisive normalization (GDN) activations. The parameter is subjected to an invertible transformation that slows down the learning rate for small values.
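
    To see why this slows learning for small values, consider a plain square-root parameterization (a minimal sketch of the general idea, not the layer's exact transform):

    import torch

    # store v = sqrt(x), so the effective parameter x = v ** 2 is non-negative
    v = torch.sqrt(torch.tensor([1e-4, 1.0])).requires_grad_()
    x = v ** 2
    x.sum().backward()
    print(v.grad)  # equals 2 * v: gradients shrink as the parameter nears zero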

    A brief usage example:

    import torch
    from torch.nn import Module, Parameter
    import torch.nn.functional
    
    from torch import Tensor
    
    # inside neuralcompression.layers this resolves as a relative import;
    # external code would instead write:
    # from neuralcompression.layers import NonNegativeParameterization
    from ._non_negative_parameterization import NonNegativeParameterization
    
    
    class GeneralizedDivisiveNormalization(Module):
        def __init__(
            self,
            in_channels: int,
            inverse: bool = False,
            beta_min: float = 1e-6,
            gamma_init: float = 0.1,
        ):
            super(GeneralizedDivisiveNormalization, self).__init__()
    
            self._inverse = inverse
    
            self._reparameterized_beta = NonNegativeParameterization(
                torch.ones(in_channels),
                minimum=beta_min,
            )
    
            self._beta = Parameter(
                self._reparameterized_beta.initialized,
            )
    
            self._reparameterized_gamma = NonNegativeParameterization(
                gamma_init * torch.eye(in_channels),
            )
    
            self._gamma = Parameter(
                self._reparameterized_gamma.initialized,
            )
    
        def forward(self, x: Tensor) -> Tensor:
            _, channels, _, _ = x.size()
    
            y = torch.nn.functional.conv2d(
                x ** 2,
                torch.reshape(
                    self._reparameterized_gamma(self._gamma),
                    (channels, channels, 1, 1)
                ),
                self._reparameterized_beta(self._beta),
            )
    
            if self._inverse:
                return x * torch.sqrt(y)
    
            return x * torch.rsqrt(y)
    
    import torch
    import torch.testing
    
    from neuralcompression.layers import GeneralizedDivisiveNormalization
    
    
    class TestGeneralizedDivisiveNormalization:
        def test_backward(self):
            x = torch.rand((1, 32, 16, 16), requires_grad=True)
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(32)
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x / torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
            generalized_divisive_normalization = GeneralizedDivisiveNormalization(
                32,
                inverse=True,
            )
    
            y = generalized_divisive_normalization(x)
    
            y.backward(x)
    
            assert y.shape == x.shape
    
            assert x.grad is not None
    
            assert x.grad.shape == x.shape
    
            torch.testing.assert_allclose(
                x * torch.sqrt(1 + 0.1 * (x ** 2)),
                y,
            )
    
    enhancement CLA Signed 
    opened by 0x00b1 4
  • pad image at inference time, remove resize

    Changes

    At inference time, pad the image instead of doing an interpolation-based resize, which gives poor results (e.g., on PSNR) when the input image height and/or width is not exactly divisible by the downsampling factor (2^(number of downsampling layers)).
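
    A minimal sketch of the approach (a hypothetical helper, not the PR's exact code):

    import math

    import torch.nn.functional as F


    def pad_to_factor(x, num_downsampling_layers):
        # reflect-pad H and W up to the next multiple of the downsampling
        # factor, returning the padding so the output can be cropped back
        factor = 2 ** num_downsampling_layers
        h, w = x.shape[-2:]
        pad_h = math.ceil(h / factor) * factor - h
        pad_w = math.ceil(w / factor) * factor - w
        return F.pad(x, (0, pad_w, 0, pad_h), mode="reflect"), (pad_h, pad_w)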

    CLA Signed 
    opened by desi-ivanova 3
  • Replace license docstrings with comments

    The intention was three-fold:

    1. consolidate copyright formatting across .py sources
    2. simplify the implementation of #100
    3. remove copyright headers from module documentation

    Changes

    • [x] replaces license docstrings with comments
    CLA Signed 
    opened by 0x00b1 3
  • survival_function op

    closes #75

    Survival function of x. Generally defined as 1 - distribution.cdf(x).

    The unit test checks that the returned result matches that of scipy.stats.norm.sf.
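
    A minimal sketch of the op and its check (using torch.distributions; the actual signature may differ):

    import scipy.stats
    import torch
    from torch.distributions import Normal


    def survival_function(x, distribution):
        # generally defined as 1 - CDF(x)
        return 1.0 - distribution.cdf(x)


    x = torch.linspace(-3.0, 3.0, 7)
    expected = torch.as_tensor(scipy.stats.norm.sf(x.numpy()), dtype=torch.float32)
    torch.testing.assert_allclose(survival_function(x, Normal(0.0, 1.0)), expected)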

    enhancement CLA Signed 
    opened by 0x00b1 3
  • Update triggers for CI

    This PR alters the triggers for continuous integration. Previously, we triggered all tests on both pushes and pull requests, which meant that within a PR we would run near-duplicate checks. What we really want on a PR is just the PR check, so we keep that.

    It was also convenient to be able to trigger CI when pushing to a branch. This PR removes that trigger and replaces it with workflow_dispatch, which lets a user trigger CI from the GitHub UI. We remove quite a bit of duplicated testing at the cost of making users click a button if they want to test their code before opening a PR.

    Note that this PR only applies to people who push to branches of the repository.

    CLA Signed 
    opened by mmuckley 3
  • HiFiC modules

    Implements the following modules from Mentzer et al. (2020):

    • HiFiCDiscriminator
    • HiFiCEncoder
    • HiFiCGenerator

    @misc{mentzer2020highfidelity,
          title={High-Fidelity Generative Image Compression}, 
          author={Fabian Mentzer and George Toderici and Michael Tschannen and Eirikur Agustsson},
          year={2020},
          eprint={2006.09965},
          archivePrefix={arXiv},
          primaryClass={eess.IV}
    }
    

    Originally implemented in TensorFlow Compression (TFC) by the author (@relational).

    CLA Signed 
    opened by 0x00b1 3
  • remove metadata from __init__.py

    Changes

    The metadata special variables from neuralcompression.__init__ were removed.

    This metadata was not being used and is now available from setup.cfg.

    CLA Signed 
    opened by 0x00b1 2
  • Update PyTorch to 1.10.0

    Closes #127.

    Changes

    • [x] Replaced torch.testing.assert_equal with torch.testing.assert_close
    • [x] Updated torch to 1.10.0
    • [x] Updated torchvision to 0.11.1
    enhancement CLA Signed 
    opened by 0x00b1 2
  • Implement PQ-MIM compression paper

    opened by mmuckley 0
  • Upstream Google autoencoder models to CompressAI

    At the moment we have several model implementations that are already available in CompressAI (e.g., Scale Hyperprior, Mean-Scale Hyperprior). CompressAI now has pretty good adoption, so we should be able to remove these from our repository and depend on the upstream implementations instead.

    By default CompressAI doesn't handle reflective image padding for users, so if desired we could include wrappers like those in PR #185 to handle this for users unfamiliar with the functionality of these models.

    enhancement 
    opened by mmuckley 1
Releases(v0.2.1)
  • v0.2.1(Jan 12, 2022)

    This release covers a few small fixes from PRs #171 and #172.

    Dependencies

    • To retrieve versioning information, we now use importlib.metadata, which is included only with Python >= 3.8, so NeuralCompression will now only run on Python 3.8 or newer (#171).
    • Install requirements are flexible, whereas dev requirements are fixed (#171). This should improve CI stability while giving researchers the flexibility to tune their environments when using NeuralCompression.
    • torch has been removed as a build dependency (#172).
    • Other build dependencies have been modified to be flexible (#172).

    Build System

    • C++ code from _pmf_to_quantized_cdf introduced compilation requirements when running setup.py. Since we didn't configure our build system to handle specific operating systems, this caused a failed release upload to PyPI. The build system has been altered to use torch.utils.cpp_extension.load, which defers compilation until after package installation, on the user's machine. We would like to improve this further at some point, but the modifications from #171 get the package stable. Note: there is a reasonable chance this could fail on non-Linux OSes such as Windows. Those users will still be able to use other package features that don't rely on _pmf_to_quantized_cdf.
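
    For reference, a minimal sketch of the deferred-compilation mechanism (the extension name and source path here are illustrative, not the package's actual layout):

    from torch.utils.cpp_extension import load

    # JIT-compiles the extension on first use, on the user's machine,
    # rather than at package build time (this is why ninja is required)
    _ext = load(
        name="pmf_to_quantized_cdf",
        sources=["neuralcompression/cpp/pmf_to_quantized_cdf.cpp"],  # hypothetical path
    )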

    Other

    • Fixed a linting issue where isort was not checking import sorting in CI (#171).
    • Fixed a flaky random test (#171).
    Source code(tar.gz)
    Source code(zip)
  • v0.2.0(Dec 13, 2021)

    NeuralCompression is a PyTorch-based Python package intended to simplify neural network-based compression research. It is similar to (and shares some of the functionality of) fantastic libraries like TensorFlow Compression and CompressAI.

    The major theme of the v0.2.0 release is autoencoders, particularly features useful for implementing existing models from Ballé et al. and features useful for expanding on these models in forthcoming research. In addition, 0.2.0 sees some code organization changes and published documentation. I recommend reading the new “Image Compression” example to see some of these changes.

    API Additions

    Data (neuralcompression.data)

    Distributions (neuralcompression.distributions)

    • NoisyNormal: normal distribution with additive independent and identically distributed (i.i.d.) uniform noise.
    • UniformNoise: adapts a continuous distribution via additive independent and identically distributed (i.i.d.) uniform noise.
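
    For intuition, additive uniform noise is the standard continuous relaxation of quantization; a minimal sketch of the idea (not the package's implementation):

    import torch
    from torch.distributions import Normal

    base = Normal(0.0, 1.0)
    y = base.sample((4,)) + (torch.rand(4) - 0.5)  # add i.i.d. U(-0.5, 0.5) noise
    # the noisy variable's density is a difference of base CDFs:
    # p(y) = F(y + 0.5) - F(y - 0.5)
    p = base.cdf(y + 0.5) - base.cdf(y - 0.5)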

    Functional (neuralcompression.functional)

    • estimate_tails: estimates approximate tail quantiles.
    • log_cdf: logarithm of the distribution’s cumulative distribution function (CDF).
    • log_expm1: logarithm of e^{x} - 1.
    • log_ndtr: logarithm of the normal cumulative distribution function (CDF).
    • log_survival_function: logarithm of a distribution’s survival function evaluated at x.
    • lower_bound: torch.maximum with a gradient for x < bound (see the sketch after this list).
    • lower_tail: approximates lower tail quantile for range coding.
    • ndtr: the normal cumulative distribution function (CDF).
    • pmf_to_quantized_cdf: transforms a probability mass function (PMF) into a quantized cumulative distribution function (CDF) for entropy coding.
    • quantization_offset: computes a distribution-dependent quantization offset.
    • soft_round_conditional_mean: conditional mean of x given noisy soft rounded values.
    • soft_round_inverse: inverse of soft_round.
    • soft_round: differentiable approximation of torch.round.
    • survival_function: survival function of x. Generally defined as 1 - distribution.cdf(x).
    • upper_tail: approximates upper tail quantile for range coding.
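
    As noted above, here is a minimal sketch of how an op like lower_bound can be written as a custom autograd function, assuming the common convention (as in TensorFlow Compression's LowerBound) of passing gradients that would push the input back above the bound:

    import torch


    class LowerBoundSketch(torch.autograd.Function):
        # illustrative only, not the package's implementation
        @staticmethod
        def forward(ctx, x, bound):
            bound = torch.as_tensor(bound, dtype=x.dtype)
            ctx.save_for_backward(x, bound)
            return torch.max(x, bound)

        @staticmethod
        def backward(ctx, grad_output):
            x, bound = ctx.saved_tensors
            # pass the gradient if x is above the bound, or if the gradient
            # would move x upward, back toward the feasible region
            pass_through = (x >= bound) | (grad_output < 0)
            return grad_output * pass_through.to(grad_output.dtype), None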

    Layers (neuralcompression.layers)

    • AnalysisTransformation2D: applies the 2D analysis transformation over an input signal.
    • ContinuousEntropy: base class for continuous entropy layers.
    • GeneralizedDivisiveNormalization: applies generalized divisive normalization for each channel across a batch of data.
    • HyperAnalysisTransformation2D: applies the 2D hyper analysis transformation over an input signal.
    • HyperSynthesisTransformation2D: applies the 2D hyper synthesis transformation over an input signal.
    • NonNegativeParameterization: the parameter is subjected to an invertible transformation that slows down the learning rate for small values.
    • RateMSEDistortionLoss: rate-distortion loss.
    • SynthesisTransformation2D: applies the 2D synthesis transformation over an input signal.

    Models (neuralcompression.models)

    End-to-end Optimized Image Compression

    End-to-end Optimized Image Compression
    Johannes Ballé, Valero Laparra, Eero P. Simoncelli
    https://arxiv.org/abs/1611.01704
    
    • PriorAutoencoder: base class for implementing prior autoencoder architectures.
    • FactorizedPriorAutoencoder

    High-Fidelity Generative Image Compression

    High-Fidelity Generative Image Compression
    Fabian Mentzer, George Toderici, Michael Tschannen, Eirikur Agustsson
    https://arxiv.org/abs/2006.09965
    
    • HiFiCEncoder
    • HiFiCDiscriminator
    • HiFiCGenerator

    Variational Image Compression with a Scale Hyperprior

    Variational Image Compression with a Scale Hyperprior
    Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston
    https://arxiv.org/abs/1802.01436
    
    • HyperpriorAutoencoder: base class for implementing hyperprior autoencoder architectures.
    • MeanScaleHyperpriorAutoencoder
    • ScaleHyperpriorAutoencoder

    API Changes

    • neuralcompression.functional.hsv2rgb is now neuralcompression.functional.hsv_to_rgb.
    • neuralcompression.functional.learned_perceptual_image_patch_similarity is now neuralcompression.functional.lpips.

    Acknowledgements

    Thank you to the following people for their advice:

    Source code(tar.gz)
    Source code(zip)
  • v0.1.0(Jul 19, 2021)

Owner
Facebook Research