Deep Learning and Logical Reasoning from Data and Knowledge

Overview

Logic Tensor Networks (LTN)

Logic Tensor Network (LTN) is a neurosymbolic framework that supports querying, learning and reasoning with both rich data and rich abstract knowledge about the world. LTN uses a differentiable first-order logic language, called Real Logic, to incorporate data and logic.

[Figure: grounding illustration]

LTN converts Real Logic formulas (e.g. ∀x(cat(x) → ∃y(partOf(x,y)∧tail(y)))) into TensorFlow computational graphs. Such formulas can express complex queries about the data, prior knowledge to satisfy during learning, statements to prove, and so on.

[Figure: computational graph illustration]
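
For instance, the formula above can be written with LTN's Python operators roughly as follows. This is a minimal sketch in the style of the tutorials: the groundings chosen for cat, partOf and tail are arbitrary placeholders, and the fuzzy operators shown are only one possible choice.

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # Placeholder groundings: any model or function mapping individuals to a truth degree in [0,1] works.
    cat = ltn.Predicate(tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                                             tf.keras.layers.Dense(1, activation="sigmoid")]))
    tail = ltn.Predicate(tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                                              tf.keras.layers.Dense(1, activation="sigmoid")]))
    partOf = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0] - args[1], axis=-1)))

    # Fuzzy connectives and quantifiers (operator choices follow the tutorials).
    And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
    Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())
    Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2), semantics="forall")
    Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=6), semantics="exists")

    x = ltn.Variable("x", np.random.rand(16, 4).astype(np.float32))  # 16 individuals in R^4
    y = ltn.Variable("y", np.random.rand(16, 4).astype(np.float32))

    # forall x ( cat(x) -> exists y ( partOf(x,y) & tail(y) ) ), evaluated as a truth degree in [0,1]
    sat = Forall(x, Implies(cat(x), Exists(y, And(partOf([x, y]), tail(y)))))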

Many of the most important deep learning tasks can be represented and effectively computed in this way; examples include classification, regression, clustering, and link prediction. The "Getting Started" section below links to tutorials and examples of LTN code.

[Paper]

@misc{badreddine2021logic,
      title={Logic Tensor Networks}, 
      author={Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and Michael Spranger},
      year={2021},
      eprint={2012.13635},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Installation

Clone the LTN repository and install it using pip install -e <local project path>.

The following are the dependencies we used for development (similar versions should run fine):

  • python 3.8
  • tensorflow >= 2.2 (for running the core system)
  • numpy >= 1.18 (for examples)
  • matplotlib >= 3.2 (for examples)

Repository structure

  • logictensornetworks/core.py -- core system for defining constants, variables, predicates, functions and formulas,
  • logictensornetworks/fuzzy_ops.py -- a collection of fuzzy logic operators defined using TensorFlow primitives,
  • logictensornetworks/utils.py -- a collection of useful functions,
  • tutorials/ -- tutorials to start with LTN,
  • examples/ -- various problems approached using LTN,
  • tests/ -- tests.

Getting Started

Tutorials

tutorials/ contains a walk-through of LTN. In order, the tutorials cover the following topics:

  1. Grounding in LTN part 1: Real Logic, constants, predicates, functions, variables,
  2. Grounding in LTN part 2: connectives and quantifiers (+ complement: choosing appropriate operators for learning),
  3. Learning in LTN: using satisfiability of LTN formulas as a training objective (see the sketch below),
  4. Reasoning in LTN: measuring whether a formula is a logical consequence of a knowledgebase.

The tutorials are implemented as Jupyter notebooks.
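
As a taste of tutorial 3, the snippet below uses the satisfiability of a small knowledgebase as the training objective. This is a minimal sketch in the style of the tutorials: the data, the predicate architecture, and the operator choices are illustrative, and the aggregated satisfiability is assumed to come back as a scalar tensor (return types can differ between LTN versions).

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    # A trainable predicate grounded by a small neural network with output in [0,1].
    mlp = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                               tf.keras.layers.Dense(1, activation="sigmoid")])
    A = ltn.Predicate(mlp)
    Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
    Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2), semantics="forall")

    data_pos = np.random.rand(50, 2).astype(np.float32)          # toy positive examples
    data_neg = (np.random.rand(50, 2) + 2.0).astype(np.float32)  # toy negative examples

    def axioms():
        # Variables are (re)defined here so that they live inside the GradientTape scope.
        x_pos = ltn.Variable("x_pos", data_pos)
        x_neg = ltn.Variable("x_neg", data_neg)
        # Knowledgebase: A holds on the positive examples, not A on the negative ones.
        return (Forall(x_pos, A(x_pos)) + Forall(x_neg, Not(A(x_neg)))) / 2

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
    for epoch in range(1000):
        with tf.GradientTape() as tape:
            loss = 1.0 - axioms()  # maximizing satisfiability = minimizing (1 - sat)
        grads = tape.gradient(loss, mlp.trainable_variables)
        optimizer.apply_gradients(zip(grads, mlp.trainable_variables))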

Examples

examples/ contains a series of experiments. Their objective is to show how the language of Real Logic can be used to specify a number of tasks that involve learning from data and reasoning about logical knowledge. Examples of such tasks are classification, regression, clustering, and link prediction.

  • The binary classification example illustrates in the simplest setting how to ground a binary classifier as a predicate in LTN, and how to feed batches of data during training,
  • The multiclass classification examples (single-label, multi-label) illustrate how to ground predicates that can classify samples into several classes (see the sketch after this list),
  • The MNIST digit addition example showcases the power of a neurosymbolic approach in a classification task that only provides groundtruth for some final labels (result of the addition), where LTN is used to provide prior knowledge about intermediate labels (possible digits used in the addition),
  • The regression example illustrates how to ground a regressor as a function symbol in LTN,
  • The clustering example illustrates how LTN can solve a task using first-order constraints only, without any label being given through supervision,
  • The Smokes Friends Cancer example is a classical link prediction problem of Statistical Relational Learning where LTN learns embeddings for individuals based on fuzzy groundtruths and first-order constraints.

The examples are presented as both Jupyter notebooks and Python scripts.
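
For instance, the single-label multiclass example grounds one predicate P(x, l) over samples and class labels, using the ltn.utils.LogitsToPredicateModel helper discussed later on this page. A minimal sketch (the logits model, the data, and the exact helper signature are illustrative and may differ slightly between versions):

    import numpy as np
    import tensorflow as tf
    import logictensornetworks as ltn

    n_classes = 3
    # A standard logits model: shared hidden layers, n_classes raw outputs.
    logits_model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                                        tf.keras.layers.Dense(n_classes)])
    # P([x, l]) returns the softmax probability of class l for sample x.
    P = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model, single_label=True))
    Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2), semantics="forall")

    features = np.random.rand(32, 4).astype(np.float32)  # a batch of 32 samples in R^4
    labels = np.random.randint(0, n_classes, size=32)     # their integer class labels

    x = ltn.Variable("x", features)
    l = ltn.Variable("l", labels)
    # "Every sample satisfies its own label": diag pairs the i-th sample with the i-th label.
    sat = Forall(ltn.diag(x, l), P([x, l]))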

Querying with LTN

Learning with LTN

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

LTN has been developed thanks to active contributions and discussions with the following people (in alphabetical order):

  • Alessandro Daniele (FBK)
  • Artur d’Avila Garcez (City)
  • Benedikt Wagner (City)
  • Emile van Krieken (VU Amsterdam)
  • Francesco Giannini (UniSiena)
  • Giuseppe Marra (UniSiena)
  • Ivan Donadello (FBK)
  • Lucas Bechberger (UniOsnabruck)
  • Luciano Serafini (FBK)
  • Marco Gori (UniSiena)
  • Michael Spranger (Sony AI)
  • Michelangelo Diligenti (UniSiena)
  • Samy Badreddine (Sony AI)
Comments
  • ValueError: mask cannot be scalar.

    When I try to define an ltn.variable, the following error is returned:

        <ipython-input-11-51fc9a0fab79>:5 axioms *
            bb12_relation = ltn.variable("P",features[labels_position=="P"])
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:600 _slice_helper
            return boolean_mask(tensor=tensor, mask=slice_spec)
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:1365 boolean_mask
            raise ValueError("mask cannot be scalar.")
    
        ValueError: mask cannot be scalar.
    

    Based on the code of multiclass-multilabel.ipynb, I declare the first variable in the axioms function, which returns the error above: ltn.variable("P",features[labels_position=="P"])

    opened by MilenaTenorio 9
  • ltnw: run knowledgebase without training should be possible

    import logging; logging.basicConfig(level=logging.INFO)
    
    import logictensornetworks_wrapper as ltnw
    import tensorflow as tf
    
    ltnw.constant("c",[2.1,3])
    ltnw.constant("d",[3.4,1.5])
    ltnw.function("f",4,2,fun_definition=lambda x,y:x-y)
    mu = tf.constant([2.,3.])
    ltnw.predicate("P",2,pred_definition=lambda x:tf.exp(-tf.reduce_sum(tf.square(x-mu))))
    
    ltnw.formula("P(c)")
    
    ltnw.initialize_knowledgebase()
    
    with tf.Session() as sess:
        print(sess.run(ltnw.ask("P(c)")))
        print(sess.run(ltnw.ask("P(d)")))
        print(sess.run(ltnw.ask("P(f(c,d))")))
    

    Throws ValueError: No variables to optimize.

    bug 
    opened by mspranger 3
  • Lambda for functions need to be implemented using Functional API of TF

    Here is what I did:

    import logictensornetworks as ltn
    f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])
    c1 = ltn.constant([2.1,3])
    c2 = ltn.constant([4.5,0.8])
    print(f1([c1,c2])) # multiple arguments are passed as a list
    

    And I get this:

    WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[2.1, 3. ]], dtype=float32)>, <tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[4.5, 0.8]], dtype=float32)>]
    Consider rewriting this model with the Functional API.
    tf.Tensor([-2.4  2.2], shape=(2,), dtype=float32)
    

    Here are the versions:

    tensorflow=2.4.0
    ltn = directly from this repo today (24 Jan 2021)
    
    opened by thoth291 2
  • Check of number_of_features_or_feed of ltn.variable

    opened by ivanDonadello 2
  • ltnw.term: evaluating a term after redeclaring its constants, variables or functions

    The implementation of ltnw.term is incompatible with the redeclaration of constants, variables, or functions.

    ltnw.term looks at the result value previously stored in the global dictionary ltnw.TERMS rather than reconstructing the term.

    For instance, the code:

    ltnw.variable('?x',[[3.0,5.0],[2.0,6.0],[3.0,9.0]])
    print('1st call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    
    ltnw.variable('?x',[[3.0,10.0],[1.0,6.0]])
    print('2nd call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    

    outputs:

    1st call
    value of variable:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    2nd call
    value of variable:
    [[ 3. 10.]
     [ 1.  6.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    opened by sbadredd 2
  • Error in the axioms of the clustering example

    Following issues #17 and #20, the commit 578d7bcaa35c797ac1c94cf322f0a6ec524beaa2 updated the axioms in the clustering example.

    It introduced a typo in the masks. In pseudo-code the rules with masks should be:

    for all x,y s.t. close_threshold > distance(x,y): x,y belong to the same cluster
    for all x,y s.t. distance(x,y) > distant_threshold: x,y belong to different cluster
    

    However, the rules have been written:

    for all x,y s.t.  distance(x,y) > close_threshold: x,y belong to the same cluster
    for all x,y s.t. distant_threshold > distance(x,y) : x,y belong to different cluster
    

    Basically, the operands have been mixed. This explains why the latest results were not as good as the previous ones. This is easy to fix; the operands just have to be interchanged again

    bug 
    opened by sbadredd 1
  • Add runtime Type Checking when constructing expressions

    Issue #19 defined classes for Term and Formula following the usual definitions of FOL.

    This can be used to type-check the arguments of various functions:

    • The inputs of predicates and functions are instances of Term,
    • The expressions in connectives and quantifier operations are instances of Formula,
    • The masks in quantifiers are instances of Formula.

    This is already indicated in the type hints. Adding runtime validation would make the API more robust and ensure that the user uses the different LTN classes correctly.
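
    For illustration, such a runtime check could look roughly like this (a hypothetical sketch, not the actual implementation):

    def _assert_all_terms(args):
        """Raise a TypeError if any argument passed to a predicate or function is not a Term."""
        for arg in args:
            if not isinstance(arg, Term):
                raise TypeError(f"expected an ltn Term, got {type(arg).__name__}")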

    enhancement 
    opened by sbadredd 0
  • Parent classes for Terms and Formulas

    Going further than issue #16, we can define classes for Term and Formula.

    • Variable and Constant would be subclasses of Term
    • The output of a Function is a Term
    • Proposition is a subclass of Formula
    • The output of a Predicate is a Formula, and so is the result of connective and quantifiers operations

    This can in turn be used for type checking the arguments of various functions:

    • The inputs of predicates and functions must be instances of Term
    • The inputs of connective and quantifier operations must be instances of Formula

    This could help the user with better error messages and easier debugging.
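
    A minimal sketch of the proposed hierarchy (illustrative only; the tensor/active_doms attributes follow the Constant snippet quoted in the related issue on this page):

    class Term:
        """Base class for constants, variables and outputs of functions."""
        def __init__(self, tensor, active_doms=None):
            self.tensor = tensor
            self.active_doms = active_doms or []

    class Constant(Term): pass
    class Variable(Term): pass

    class Formula:
        """Base class for propositions and outputs of predicates, connectives and quantifiers."""
        def __init__(self, tensor, active_doms=None):
            self.tensor = tensor
            self.active_doms = active_doms or []

    class Proposition(Formula): pass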

    enhancement 
    opened by sbadredd 0
  • Add a constructor for variables made from trainable constants

    A variable can be instantiated using two different types of objects:

    • A value (numpy, python list, ...) that will be fed in a tf.constant (the variable refers to a new object).
    • A tf.Tensor instance that will be used directly as the variable (the variable refers to the same object).

    The latter is useful when the variable denotes a sequence of trainable constants.

    c1 = ltn.constant([2.1,3], trainable=True)
    c2 = ltn.constant([4.5,0.8], trainable=True)
    
    with tf.GradientTape() as tape:
        # Notice that the assignation must be done within a tf.GradientTape.
        # Tensorflow will keep track of the gradients between c1/c2 and x.
        x = ltn.variable("x",tf.stack([c1,c2]))
        res = P2(x)
    tape.gradient(res,c1).numpy() # the tape keeps track of gradients between P2(x), x and c1
    

    The assignment must be done within a tf.GradientTape context. This is explained in the tutorials, but a user could easily miss this information.

    I propose to add a constructor for variables from constants, that must explicitly take the tf.GradientTape instance as an argument. In this way, it will be harder to miss.
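
    For illustration, such a constructor could look roughly like this (a hypothetical sketch; the name and signature are illustrative, not part of the proposal's code):

    import tensorflow as tf
    import logictensornetworks as ltn

    def variable_from_constants(label, constants, tape):
        # Requiring the tape as an argument makes it harder to forget the GradientTape context;
        # the call itself must still happen inside `with tape:`.
        if not isinstance(tape, tf.GradientTape):
            raise TypeError("a tf.GradientTape instance is required")
        return ltn.variable(label, tf.stack(constants))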

    enhancement 
    opened by sbadredd 0
  • Support masks using LTN syntax instead of TensorFlow operations

    To use a guarded quantifier in an LTN sentence, the user must mix lambda functions with the traditional LTN syntax, and the mask itself is written with TensorFlow operations, which adds to the confusion.

    For example, in the MNIST single-digit addition example, we have the following mask:

    exists(...,...,
        mask_vars=[d1,d2,labels_z],
        mask_fn=lambda vars: tf.equal(vars[0]+vars[1],vars[2])
    )
    

    If we wrote the mask in LTN syntax instead, it would give:

    exists(...,...,
        mask= Equal([Add([d1,d2]),labels_z])
    )
    

    I believe the latter is clearer and more coherent within an LTN expression.

    This implies that the user must define extra LTN symbols for Equal and Add. I believe this is worth it, for the sake of clarity. If the user does not want to do that, they can still reuse the lambda function inside a Mask predicate:

    Mask = ltn.Predicate.Lambda(lambda vars: tf.equal(vars[0]+vars[1],vars[2]))
    ...
    exists(...,...,
        mask=Mask([d1,d2,labels_z])
    )
    

    The mask is still written using an LTN symbol and doesn't require changing the code much compared to the original approach

    enhancement 
    opened by sbadredd 0
  • Create classes for Variable, Constant and Proposition

    At the moment, LTN implements most expressions using tf.Tensor objects with some added dynamic attributes.

    For example, for a non-trainable LTN constant, the logic is the following (simplified):

    def constant(value):
        result = tf.constant(value)
        result.active_doms = []
        return result
    

    This makes the system easy to break, and debugging difficult. When copying or operating with the constant, the user might not realize that a new tensor is created and the active_doms attribute is lost.

    I propose to separate the logic of LTN from the logic of TensorFlow and to use distinct types. Something like:

    class Constant:
        def __init__(self, value):
            self.tensor = tf.constant(value)
            self.active_doms = []
    

    This implies that LTN predicates and functions will have to be adapted to work with constant.tensor, variable.tensor, ...

    enhancement 
    opened by sbadredd 0
  • Add a ltn.Predicate constructor that takes in a logits model

    Constructors for ltn.Predicate

    The constructor for ltn.Predicate accepts a model that outputs one truth degree in [0,1].

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]
    
        def call(self, x):
            x = self.dense1(x)
            return self.dense2(x)
    
    model = ModelThatOutputsATruthDegree()
    P1 = ltn.Predicate(model)
    P1(x) # -> call with a ltn Variable
    

    Issue

    Many models output several values simultaneously. For example, a model for the predicate P2 classifying images x into n classes type_1, ..., type_n will likely output n logits using the same hidden layers.

    Eventually, we would expect to call the corresponding predicate using the syntax P2(x,type). This requires two additional steps:

    1. Transforming the logits into values in [0,1],
    2. Indexing the class using the term type.

    Because this is a common use case, we implemented a function ltn.utils.LogitsToPredicateModel for convenience. It is used in some of the examples (cf. the MNIST digit addition example). The syntax is:

    logits_model(x) # how to call `logits_model`
    P2 = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model), single_label=True)
    P2([x,type]) # how to call the predicate
    

    It automatically adds a final argument for class indexing and performs a sigmoid or softmax activation depending on the parameter single_label.

    Proposition

    It would be more elegant to have the functionality of creating a predicate from a logits model as a class constructor for ltn.Predicate.

    A suggested syntax is:

    P2 = ltn.Predicate.FromLogits(logits_model, activation_function="softmax", with_class_indexing=True)
    
    • The functionality comes as a new class constructor,
    • The activation function is more explicit than the single_label parameter in ltn.utils.LogitsToPredicateModel,
    • with_class_indexing=False still allows creating predicates of the form P1(x), as shown above.

    Changes to the rest of the API

    The proposition adds a new constructor but shouldn't change any other method of ltn.Predicate or any framework method in general.

    enhancement 
    opened by sbadredd 1
  • Weighted connective operators

    Hello,

    In my project, I needed to use weighted connective fuzzy logic operators, so I implemented a class that adds weights to classic fuzzy operators, based on this paper: https://www.researchgate.net/publication/2610015_The_Weighting_Issue_in_Fuzzy_Logic

    I think it may be useful to other people, or it could even be added to the ltn operators, so here is my code:

    from typing import Callable

    import logictensornetworks as ltn

    class WeightedConnective:
        """Class to compute a weighted connective fuzzy operator."""
    
        def __init__(self, single_connective: Callable = ltn.fuzzy_ops.And_Prod()):
            """Initialize WeightedConnective.
    
            Parameters
            ----------
            single_connective : Callable
                Function to compute the binary operation
            """
            self.single_connective = single_connective
    
        def __call__(self, *args: float, weights: list[float] | None = None) -> float:
            """Call function of WeightedConnective.
    
            Parameters
            ----------
            *args : float
                Truth values whose operation should be computed
            weights : list[float] | None
                List of weights for the predicates, None if all predicates should be weighted
                equally, default: None
    
            Returns
            -------
            float:
                Truth value of weighted connective operation between predicates
    
            Raises
            ------
            ValueError
                If no predicate was provided
            ValueError
                If the number of predicates and the number of weights are different
            """
            n = len(args)
            if n == 0:
                raise ValueError("No predicate was found")
            if n == 1:
                return args[0]
            if weights is None:
                weights = [1. / n for _ in range(n)]
            if len(weights) != n:
                raise ValueError(
                    f"Numbers of predicates and weights should be equal : {n} predicates and "
                    f"{len(weights)} weights were found")
    
            s = sum(weights)
            if s != 0:
                weights = [elt / s for elt in weights]
    
            w = max(weights)
            res = (weights[0] / w) * args[0]
            for i, x in enumerate(args):
                if i != 0:
                    res = self.single_connective(res, (weights[i] / w) * args[i])
            return res
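
    For illustration, the class above could be used as follows (a hypothetical usage sketch; the truth values and weights are arbitrary):

    weighted_and = WeightedConnective(ltn.fuzzy_ops.And_Prod())
    truth = weighted_and(0.9, 0.4, 0.7, weights=[0.5, 0.2, 0.3])  # weighted conjunction of three truth values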
    
    enhancement 
    opened by maelle101 1
  • Saving LTN model

    Hello,

    I am working on a project using LTN. I train a model with several neural networks (the number varies between executions). Is there an easy way to save and then load an entire LTN model? Or should I call the TensorFlow saving functions several times and store the other information (for example, which Predicate corresponds to each NN) in a custom way?

    Thanks in advance for any answer, and thanks for this great framework.

    opened by maelle101 3
  • Imbalanced classification

    First, thank you for this great framework. My question is: what is the best way to define variables for imbalanced classification (with many categories), where some categories might be empty in a given batch? Thank you!

    opened by mpourvali 3
  • Allow to permanently `diag` variables

    Diagonal quantification

    Given 2 (or more) variables, ltn.diag allows expressing statements about specific pairs (or tuples) of the variables, such that the i-th tuple contains the i-th instances of the variables.

    In simplified pseudo-code, the usual quantification would compute:

    for x_i in x:
        for y_j in y:
            results.append(P(x_i,y_j))
    aggregate(results)
    

    In contrast, diagonal quantification would compute:

    for x_i, y_i in zip(x,y):
        results.append(P(x_i,y_i))
    aggregate(results)
    

    In LTN code, given two variables x1 and x2, we use diagonal quantification as follows:

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    P = ltn.Predicate(...)
    P([x1,x2]) # -> returns 10x10 values
    ltn.diag(x1,x2)
    P([x1,x2]) # -> returns only 10 "zipped" values
    ltn.undiag(x1,x2)
    P([x1,x2]) # -> returns 10x10 values
    

    See also the second tutorial.

    Issue

    At the moment, every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

    Forall(ltn.diag(x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of 10x10 values
    

    However, there are cases where the second (normal) behavior for the two variables x1 and x2 is never useful. Some variables are designed from the start to be used as paired, zipped variables. In that case, forcing the user to re-use the keyword ltn.diag at every quantification is redundant.

    Proposition

    Define a new keyword ltn.diag_lock which can be used once at the instantiation of the variables, and will force the diag behavior in every subsequent quantification. ltn.undiag will not be called after an aggregation.

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    ltn.diag_lock([x1,x2])
    P([x1,x2]) # -> returns only 10 "zipped" values
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> still returns an aggregate of only 10 "zipped values"
    

    Possibly, we can add an ltn.undiag_lock too.

    The implementation details are left to be defined but shouldn't change the rest of the API.

    enhancement 
    opened by sbadredd 0
  • automated translation of tptp problems to ltn axioms

    Hello,

    We're trying to automatically translate TPTP problems into axioms computable by LTNs. Errors occur when applying the gradient tape in the training step because variables are initialized outside of the tape scope, as described in the tutorial notebooks. Is there by any chance already an implementation (or one in the works) that translates a logic problem (written in some intermediate language) into LTN-readable axioms?

    Best, Philip

    opened by phjlip 1
Releases: v2.0