HyperLib: Deep learning in the Hyperbolic space

Background

This library implements common neural network components in hyperbolic space, using the Poincare model. It is built on TensorFlow, integrates easily with Keras, and is meant to help data scientists, machine learning engineers, researchers, and others implement hyperbolic neural networks.

You can also use this library for purposes other than neural networks through the mathematical functions available in the Poincare class. In the future we may implement components that can be used in models other than neural networks. You can learn more about hyperbolic networks here.

Example Usage

Install the library

pip install hyperlib

Creating a hyperbolic neural network using Keras:

import tensorflow as tf
from tensorflow import keras
from hyperlib.nn.layers.lin_hyp import LinearHyperbolic
from hyperlib.nn.optimizers.rsgd import RSGD
from hyperlib.manifold.poincare import Poincare

# Create layers
hyperbolic_layer_1 = LinearHyperbolic(32, Poincare(), 1)
hyperbolic_layer_2 = LinearHyperbolic(32, Poincare(), 1)
output_layer = LinearHyperbolic(10, Poincare(), 1)

# Create optimizer
optimizer = RSGD(learning_rate=0.1)

# Create model architecture
model = tf.keras.models.Sequential([
  hyperbolic_layer_1,
  hyperbolic_layer_2,
  output_layer
])

# Compile the model with the Riemannian optimizer            
model.compile(
    optimizer=optimizer,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
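
A quick, hypothetical smoke test to check that the model trains end to end. The data here is random and purely illustrative, and the small-norm inputs reflect an assumption that the layers expect points inside the unit Poincare ball:

import numpy as np

# random, small-norm inputs so points lie well inside the unit ball
x_train = np.random.uniform(low=-0.01, high=0.01, size=(64, 20))
y_train = np.random.randint(0, 10, size=(64,))

model.fit(x_train, y_train, batch_size=16, epochs=2)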

Using math functions on the Poincare ball:

import tensorflow as tf
from hyperlib.manifold.poincare import Poincare

p = Poincare()

# Create two matrices
a = tf.constant([[5.0,9.4,3.0],[2.0,5.2,8.9],[4.0,7.2,8.9]])
b = tf.constant([[4.8,1.0,2.3]])

# Matrix multiplication on the Poincare ball
curvature = 1
p.mobius_matvec(a, b, curvature)
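
For reference, the Möbius matrix-vector multiplication computed above is defined for curvature c (Ganea et al., 2018) as

M \otimes_c x = \frac{1}{\sqrt{c}} \tanh\left( \frac{\lVert Mx \rVert}{\lVert x \rVert} \tanh^{-1}\!\left(\sqrt{c}\,\lVert x \rVert\right) \right) \frac{Mx}{\lVert Mx \rVert}

which reduces to the Euclidean product Mx as c goes to 0.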

TODO:

  • Implement an Attention Mechanism
  • Implement a Riemannian Adam Optimizer
  • Remove casting of layer variables to tf.float64

References

[1] Chami, I., Ying, R., Ré, C., and Leskovec, J. Hyperbolic Graph Convolutional Neural Networks. NeurIPS 2019.

[2] Nickel, M. and Kiela, D. Poincaré Embeddings for Learning Hierarchical Representations. NIPS 2017.

[3] Khrulkov, V., Mirvakhabova, L., Ustinova, E., Oseledets, I., and Lempitsky, V. Hyperbolic Image Embeddings. CVPR 2020.

[4] Peng, W., Varanka, T., Mostafa, A., Shi, H., and Zhao, G. Hyperbolic Deep Neural Networks: A Survey.

Comments
  • Sarkar Embedding and CICD

    Overview

    This PR has two parts

    1. Sarkar tree embedding for 2 and 3 dimensions
    2. CICD testing and building

    Sarkar Embedding

    The two main functions are sarkar_embedding and sarkar_embedding_3D.
    These are used to embed a (weighted) tree into the 2D or 3D Poincare ball[^1]. The 3D version uses a "fibonacci coding" for distributing points on a 2D sphere[^2].

    [^1]: https://homepages.inf.ed.ac.uk/rsarkar/papers/HyperbolicDelaunayFull.pdf
    [^2]: http://extremelearning.com.au/evenly-distributing-points-on-a-sphere/

    Example usage:

    from hyperlib.util.graph import binary_tree
    from hyperlib.util.sarkar import sarkar_embedding_3D
    
    T = binary_tree(4) # a depth 4 binary tree
    root = 0 # the index to use as the root
    tau = 0.4 # the scaling factor for edges
    emb = sarkar_embedding_3D(T, root, tau=tau, precision=40)
    

    Note that both of these functions use mpmath for high precision calculations and return an mpmath.matrix. I added high precision Poincare math functions in the utils package.
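
    For the 2D version, a minimal sketch, under the assumption that sarkar_embedding mirrors the 3D signature shown above:

    from hyperlib.util.graph import binary_tree
    from hyperlib.util.sarkar import sarkar_embedding

    T = binary_tree(3)                                 # a depth 3 binary tree
    emb = sarkar_embedding(T, 0, tau=0.5, precision=50)  # assumed signature
    print(emb.rows, emb.cols)  # an mpmath.matrix of points in the Poincare disc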

    Sarkar's algorithm can be extended to higher dimensions, but the "spherical coding" part is more difficult. We will address this in the future.

    CICD

    I added two basic workflows. One runs a linter and tests on pushes and pull requests to main.
    The other triggers when we tag a version on main: it builds wheels and uploads them to PyPI and Test PyPI. I added API tokens for nalex's PyPI account as GitHub secrets on this repo. The wheels are built for Linux, Windows, and macOS on x86_64, amd64, and i686 architectures using cibuildwheel. I tested on Ubuntu and @sourface94 tested on Windows.

    To build locally, run:

    python setup.py build
    

    Note that you need a C++ compiler to build the treerep part. On Linux you need gxx_linux-64, which can be installed with conda install -c conda-forge gxx_linux-64.

    When developing please run the tests locally first. To run the linter:

    python -m pip install flake8
    flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
    

    and to run the tests, just run pytest -v tests/.

    Addendum

    I updated the embedding example in the readme and made an examples/ folder for future examples.
    I also added install instructions.

    opened by meiji163 1
  • TreeRep and hyperbolic functions

    • Added embedding package with TreeRep C++ source
    • Added setup.py with pybind11; tested on ARM macOS Big Sur
    • Fixed some Poincare functions and added new ones
    opened by meiji163 1
  • Sarkar Embedding

    This PR adds Sarkar's algorithm for embedding a tree in 2- and 3-dimensional hyperbolic space, plus more high precision utility functions for the Poincare disc.

    TODO:

    • Sarkar's algorithm for >3 dimensions. We have to implement a solution to the "spherical coding" problem in high dimensions.
    • unit tests
    opened by meiji163 0
  • TreeRep

    1. Added TreeRep C++ source with a pybind11 wrapper. To use it, simply call embedding.graph.treerep on a distance matrix (see the sketch after this list). There are also functions to convert the resulting tree to a scipy.csgraph or a networkx graph.
    2. Added functions for working with distance matrices and measuring delta-hyperbolicity under embedding.metric
    3. Added utils.multiprecision for high precision calculations using mpmath (necessary for calculating large hyperbolic distances accurately)
    4. Fixed a few Poincare class functions and added some new ones
    5. Added tests for treerep
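
    As referenced in item 1, a minimal usage sketch. The import path follows this PR's description and is an assumption; the example metric is an exact 4-leaf tree metric:

    import numpy as np
    from hyperlib.embedding.graph import treerep

    # distance matrix of two "cherries" (unit edges) joined by a unit edge
    D = np.array([
        [0., 2., 3., 3.],
        [2., 0., 3., 3.],
        [3., 3., 0., 2.],
        [3., 3., 2., 0.],
    ])

    tree = treerep(D)  # a weighted tree approximating the metric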

    Built and tested successfully on macOS Big Sur and Manjaro Linux.

    opened by meiji163 0
  • optimizers & hyperbolic funcs

    • Moved hyperbolic functions to utils.math. When working with the library it was inconvenient having to access functions through the Poincare class, and the benefit of having the class isn't clear to me.
    • Fixed some errors in the hyperbolic functions and added a few new ones (hyp_dist, clipped_norm, parallel_transport, gyr, lambda_x)
    • Rewrote RSGD. Optimizers should inherit from Keras optimizer_v2 and implement sparse and dense updates separately (see here); a sketch of the pattern follows this list. It should be possible to add momentum next.
    • Attempted RAdam; still buggy. The problem is parallel transport of momentum (see Becigneul & Ganea, pg. 5).
    • Added a Jupyter notebook in /examples to demo word embedding. Not sure where to put this, but it could be nice to have more demos/tutorials in the future.
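
    A bare-bones sketch of the optimizer pattern described above — not hyperlib's actual RSGD. It hard-codes the Poincare ball with c=1 and uses a retraction (first-order approximation of the exponential map); the import path is the one exposed by the TF 2.x versions this PR targets:

    import tensorflow as tf
    from tensorflow.python.keras.optimizer_v2 import optimizer_v2

    class RSGDSketch(optimizer_v2.OptimizerV2):
        """Illustrative Riemannian SGD skeleton, Poincare ball with c=1."""

        def __init__(self, learning_rate=0.01, name="RSGDSketch", **kwargs):
            super().__init__(name, **kwargs)
            self._set_hyper("learning_rate", learning_rate)

        def _resource_apply_dense(self, grad, var, apply_state=None):
            lr = self._get_hyper("learning_rate", var.dtype)
            # Riemannian gradient: rescale by 1/lambda_x^2, lambda_x = 2/(1-||x||^2)
            sq_norm = tf.reduce_sum(tf.square(var), axis=-1, keepdims=True)
            rgrad = grad * tf.square(1.0 - sq_norm) / 4.0
            new_var = var - lr * rgrad  # retraction step
            # project strictly back inside the unit ball
            norm = tf.norm(new_var, axis=-1, keepdims=True)
            max_norm = 1.0 - 1e-5
            new_var = tf.where(norm > max_norm, new_var * (max_norm / norm), new_var)
            return var.assign(new_var)

        def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
            # a real implementation updates only the rows in `indices`
            raise NotImplementedError

        def get_config(self):
            config = super().get_config()
            config.update(
                {"learning_rate": self._serialize_hyperparameter("learning_rate")}
            )
            return config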
    opened by meiji163 0
  • Hyperlib not using GPU

    I am trying to train on Google Colab using the following code:

    from random import choice
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from tensorflow import keras
    
    from hyperlib.manifold.lorentz import Lorentz
    from hyperlib.manifold.poincare import Poincare
    from hyperlib.models.pehr import HierarchicalEmbeddings
    
    
    def load_wordnet_data(file, negatives=20):
        noun_closure = pd.read_csv(file)
        noun_closure_np = noun_closure[["id1","id2"]].values
    
        edges = set()
        for i, j in noun_closure_np:
            edges.add((i,j))
    
        unique_nouns = list(set(
            noun_closure["id1"].tolist()+noun_closure["id2"].tolist()
        ))
    
        noun_closure["neg_pairs"] = noun_closure["id1"].apply(get_neg_pairs, args=(edges, unique_nouns, 20,))
        return noun_closure, unique_nouns
    
    def get_neg_pairs(noun, edges, unique_nouns, negatives=20):
        neg_list = []
        while len(neg_list) < negatives:
            neg_noun = choice(unique_nouns)
            if neg_noun != noun \
            and not neg_noun in neg_list \
            and not ((noun, neg_noun) in edges or (neg_noun, noun) in edges):
                neg_list.append(neg_noun)
        return neg_list
    
    
    # Make training dataset
    noun_closure, unique_nouns = load_wordnet_data("mammal_closure.csv", negatives=15)
    noun_closure_dataset = noun_closure[["id1","id2"]].values
    
    batch_size = 16
    train_dataset = tf.data.Dataset.from_tensor_slices(
            (noun_closure_dataset, noun_closure["neg_pairs"].tolist()))
    train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
    
    # Create model
    model = HierarchicalEmbeddings(vocab=unique_nouns, embedding_dim=10)
    sgd = keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9)
    
    # Run custom training loop
    model.fit(train_dataset, sgd, epochs=20)
    embs = model.get_embeddings()
    
    M = Poincare()
    mammal = M.expmap0(model(tf.constant('dog.n.01')), c=1)
    dists = M.dist(mammal, embs, c=1.0)
    top = tf.math.top_k(-dists[:,0], k=20)
    for i in top.indices:
        print(unique_nouns[i],': ',-dists[i,0].numpy())
    

    I see that the GPU is not being used when I inspect the GPU usage. Kindly help.

    opened by rahulsee 0
  • model.fit does not show any output or progress

    When running model.fit as per this code example, I am unable to tell whether any progress is being made or the training has hung. Kindly help.

    This is the code I am trying to run

    from random import choice
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from tensorflow import keras
    
    from hyperlib.manifold.lorentz import Lorentz
    from hyperlib.manifold.poincare import Poincare
    from hyperlib.models.pehr import HierarchicalEmbeddings
    
    
    def load_wordnet_data(file, negatives=20):
        noun_closure = pd.read_csv(file)
        noun_closure_np = noun_closure[["id1","id2"]].values
    
        edges = set()
        for i, j in noun_closure_np:
            edges.add((i,j))
    
        unique_nouns = list(set(
            noun_closure["id1"].tolist()+noun_closure["id2"].tolist()
        ))
    
        noun_closure["neg_pairs"] = noun_closure["id1"].apply(get_neg_pairs, args=(edges, unique_nouns, 20,))
        return noun_closure, unique_nouns
    
    def get_neg_pairs(noun, edges, unique_nouns, negatives=20):
        neg_list = []
        while len(neg_list) < negatives:
            neg_noun = choice(unique_nouns)
            if neg_noun != noun \
            and not neg_noun in neg_list \
            and not ((noun, neg_noun) in edges or (neg_noun, noun) in edges):
                neg_list.append(neg_noun)
        return neg_list
    
    
    # Make training dataset
    noun_closure, unique_nouns = load_wordnet_data("mammal_closure.csv", negatives=15)
    noun_closure_dataset = noun_closure[["id1","id2"]].values
    
    batch_size = 16
    train_dataset = tf.data.Dataset.from_tensor_slices(
            (noun_closure_dataset, noun_closure["neg_pairs"].tolist()))
    train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
    
    # Create model
    model = HierarchicalEmbeddings(vocab=unique_nouns, embedding_dim=10)
    sgd = keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9)
    
    # Run custom training loop
    model.fit(train_dataset, sgd, epochs=20)
    embs = model.get_embeddings()
    
    M = Poincare()
    mammal = M.expmap0(model(tf.constant('dog.n.01')), c=1)
    dists = M.dist(mammal, embs, c=1.0)
    top = tf.math.top_k(-dists[:,0], k=20)
    for i in top.indices:
        print(unique_nouns[i],': ',-dists[i,0].numpy())
    
    opened by rahulsee 1
  • RSGD Improvements

    Right now optimizers.rsgd is implemented specifically for the Poincare model.
    We should add an interface that works for any model.

    TODO: write other improvements here
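
    One possible shape for such an interface (all names here are hypothetical, for discussion):

    from abc import ABC, abstractmethod

    class Manifold(ABC):
        """Hypothetical interface an RSGD could target, independent of the model."""

        @abstractmethod
        def egrad2rgrad(self, x, grad):
            """Convert a Euclidean gradient at x into a Riemannian gradient."""

        @abstractmethod
        def expmap(self, x, v):
            """Map a tangent vector v at x back onto the manifold."""

        @abstractmethod
        def proj(self, x):
            """Project a point back onto (or strictly inside) the manifold."""

    # the RSGD step then only touches the interface:
    #   rgrad = manifold.egrad2rgrad(x, grad)
    #   x = manifold.proj(manifold.expmap(x, -lr * rgrad))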

    opened by meiji163 1
  • Solve Precision Issues

    This is a tracking issue for solving precision issues.

    Problem Overview

    Precision is one of the major obstacles to the adoption of hyperbolic geometry in machine learning.
    As shown in Representation Tradeoffs for Hyperbolic Embeddings, there is a tradeoff between precision and dimensionality when representing points in hyperbolic space with floats, independent of the model that is used.
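
    A small illustration, assuming nothing beyond numpy and mpmath: on the unit Poincare ball the distance from the origin is d(0, x) = 2 tanh^{-1}(||x||), so a point at distance 40 sits closer to the boundary than float64 can resolve.

    import numpy as np
    from mpmath import mp, mpf, tanh, atanh

    r64 = np.tanh(20.0)        # rounds to exactly 1.0 in float64
    print(np.arctanh(r64))     # inf: the distance is destroyed

    mp.dps = 50                # 50 decimal digits
    r = tanh(mpf(20))          # 1 - 8.5e-18, still representable
    print(2 * atanh(r))        # ~40.0, recovered correctly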

    Hyperlib should have a solution to this in its core components. Ideally the solution will satisfy the following.

    1. reasonably efficient: it doesn't incur significant overhead compared to Euclidean methods and is GPU compatible
    2. easy to use: it's abstracted away from the API so that a casual user doesn't have to touch it
    3. general: it's general enough to be used with different models of hyperbolic space

    Approaches

    Hope for the best

    We see many papers that simply accept the precision errors and try to mitigate them, or go to higher dimensions.
    E.g., our current approach in the Poincare model is to cast to tf.float64, which only gets us 53 bits of precision.

    Multiprecision

    In the sarkar embeddings, we use the multiprecision library mpmath to represent points. As far as multiprecision arithmetic goes it is fast (assuming it is using the gmpy backend). However, its support for vector operations is poor and it cannot easily interoperate with numpy or tensorflow. Also, we do not yet have a good method to automatically determine the precision setting (for example, sarkar_embedding uses far too much precision by default).

    Avoiding the Problem

    One common approach to avoiding precision errors, especially in hyperbolic SGD, is to map to the (Euclidean) tangent space and do all operations there instead. We should definitely experiment with and support this method in Hyperlib; it will work for all models via the exponential map. However, it only solves part of the problem.
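
    A sketch of that parametrization, using expmap0 with the signature seen in the issues above (everything else here is illustrative):

    import tensorflow as tf
    from hyperlib.manifold.poincare import Poincare

    p = Poincare()

    # parameters live in the (Euclidean) tangent space at the origin
    w = tf.Variable(tf.random.normal((100, 10), stddev=1e-2))

    # map onto the ball only when hyperbolic quantities are needed;
    # gradients flow through expmap0, so a plain Euclidean optimizer works on w
    pts = p.expmap0(w, c=1.0)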

    Multi-Component Float

    Multi-Component Floats (MCF) are an alternative representation for floats that can be vectorized, proposed by Yu and De Sa as a way to do calculations in the upper half-space model. IMO this is the most promising approach, if it can be extended to other models of hyperbolic space.
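
    The building block behind such representations is the error-free transformation: a pair of floats (s, e) that together carry a sum exactly. A minimal sketch (Knuth's two-sum; MCF extends this idea to k components and to the hyperbolic operations):

    def two_sum(a: float, b: float):
        """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
        s = a + b
        b_virtual = s - a
        a_virtual = s - b_virtual
        e = (a - a_virtual) + (b - b_virtual)
        return s, e

    s, e = two_sum(1.0, 1e-17)
    print(s, e)  # 1.0 1e-17 -- the tail keeps what float64 alone would drop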

    Todos

    • [ ] Spike: implementing MCF for upper-half space
    tracking 
    opened by meiji163 0
  • Tracking Issue: Documentation

    This is a tracking issue for improving documentation as hyperlib develops.
    This will be important for getting more people to use hyperbolic ML (esp. examples)

    • [ ] Standardize function doc strings
    • [ ] Generate docs with sphinx
    • [ ] Put up documentation site
    • [ ] Examples (expand on this point later)
    documentation tracking 
    opened by meiji163 0