Deep Learning and Logical Reasoning from Data and Knowledge

Overview

Logic Tensor Networks (LTN)

Logic Tensor Network (LTN) is a neurosymbolic framework that supports querying, learning and reasoning with both rich data and rich abstract knowledge about the world. LTN uses a differentiable first-order logic language, called Real Logic, to incorporate data and logic.

[Figure: grounding illustration]

LTN converts Real Logic formulas (e.g. ∀x(cat(x) → ∃y(partOf(x,y)∧tail(y)))) into TensorFlow computational graphs. Such formulas can express complex queries about the data, prior knowledge to satisfy during learning, statements to prove, and so on.

[Figure: computational graph illustration]

With this language, one can represent and effectively compute many of the most important tasks of deep learning, such as classification, regression, clustering, and link prediction. The "Getting Started" section of the README links to tutorials and examples of LTN code.
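As a rough illustration of the idea (plain TensorFlow only, not the LTN API): a predicate can be grounded as a neural network whose outputs lie in [0,1], a variable as a batch of individuals, and a quantifier as a differentiable aggregation of truth degrees.

    # Illustrative sketch in plain TensorFlow (not the LTN API): turn
    # "forall x: P(x)" into a differentiable computation.
    import tensorflow as tf

    P = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="elu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # truth degree in [0,1]
    ])
    x = tf.random.uniform((10, 2))              # 10 individuals in R^2

    truth_degrees = P(x)                        # P(x_i) for each individual
    forall_sat = tf.reduce_mean(truth_degrees)  # a simple fuzzy "forall" aggregator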

Paper: https://arxiv.org/abs/2012.13635

@misc{badreddine2021logic,
      title={Logic Tensor Networks}, 
      author={Samy Badreddine and Artur d'Avila Garcez and Luciano Serafini and Michael Spranger},
      year={2021},
      eprint={2012.13635},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

Installation

Clone the LTN repository and install it using pip install -e <local project path>.

Following are the dependencies we used for development (similar versions should run fine):

  • python 3.8
  • tensorflow >= 2.2 (for running the core system)
  • numpy >= 1.18 (for examples)
  • matplotlib >= 3.2 (for examples)

Repository structure

  • logictensornetworks/core.py -- core system for defining constants, variables, predicates, functions and formulas,
  • logictensornetworks/fuzzy_ops.py -- a collection of fuzzy logic operators defined using TensorFlow primitives,
  • logictensornetworks/utils.py -- a collection of useful functions,
  • tutorials/ -- tutorials to start with LTN,
  • examples/ -- various problems approached using LTN,
  • tests/ -- tests.

Getting Started

Tutorials

tutorials/ contains a walk-through of LTN. In order, the tutorials cover the following topics:

  1. Grounding in LTN part 1: Real Logic, constants, predicates, functions, variables,
  2. Grounding in LTN part 2: connectives and quantifiers (+ complement: choosing appropriate operators for learning),
  3. Learning in LTN: using satisfiability of LTN formulas as a training objective,
  4. Reasoning in LTN: measuring whether a formula is a logical consequence of a knowledge base.

The tutorials are implemented as Jupyter notebooks.
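As a complement to tutorial 2, here is an illustrative, plain-TensorFlow sketch of the kind of fuzzy operators it discusses; the repository ships its own implementations in logictensornetworks/fuzzy_ops.py, and the exact names and default choices may differ.

    # Illustrative fuzzy operators (not the repository's implementation).
    import tensorflow as tf

    def and_prod(a, b):            # product t-norm for conjunction
        return a * b

    def or_prob_sum(a, b):         # probabilistic sum for disjunction
        return a + b - a * b

    def forall_pmean_error(truth_degrees, p=2.0):
        # a smooth approximation of the minimum, often preferred for
        # universal quantification during learning
        return 1.0 - tf.reduce_mean((1.0 - truth_degrees) ** p) ** (1.0 / p)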

Examples

examples/ contains a series of experiments. Their objective is to show how the language of Real Logic can be used to specify a number of tasks that involve learning from data and reasoning about logical knowledge, such as classification, regression, clustering, and link prediction.

  • The binary classification example illustrates in the simplest setting how to ground a binary classifier as a predicate in LTN, and how to feed batches of data during training,
  • The multiclass classification examples (single-label, multi-label) illustrate how to ground predicates that can classify samples in several classes,
  • The MNIST digit addition example showcases the power of a neurosymbolic approach in a classification task that only provides groundtruth for some final labels (result of the addition), where LTN is used to provide prior knowledge about intermediate labels (possible digits used in the addition),
  • The regression example illustrates how to ground a regressor as a function symbol in LTN,
  • The clustering example illustrates how LTN can solve a task using first-order constraints only, without any label being given through supervision,
  • The Smokes Friends Cancer example is a classical link prediction problem of Statistical Relational Learning where LTN learns embeddings for individuals based on fuzzy groundtruths and first-order constraints.

The examples are presented as both Jupyter notebooks and Python scripts.
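For instance, the binary classification example grounds a classifier as a predicate along these lines (a sketch only; ltn.Predicate wrapping a Keras model is the pattern shown in the issues further down, and the complete, batched training loop lives in examples/):

    # Sketch: grounding a binary classifier as an LTN predicate.
    import tensorflow as tf
    import logictensornetworks as ltn

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="elu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # truth degree in [0,1]
    ])
    A = ltn.Predicate(model)  # A(x) yields the degree to which x satisfies A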

Querying with LTN
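As a rough, hypothetical illustration (constructor names follow those used elsewhere on this page and may differ across LTN versions), querying amounts to evaluating the computational graph of a formula and reading off its truth degree:

    # Hypothetical sketch: evaluate the truth degree of the query P(c).
    import tensorflow as tf
    import logictensornetworks as ltn

    c = ltn.constant([2.1, 3.0])   # an individual
    P = ltn.Predicate.Lambda(      # a lambda-defined fuzzy predicate
        lambda x: tf.exp(-tf.reduce_sum(tf.square(x))))
    print(P(c))                    # truth degree of P(c)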

Learning with LTN
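As a rough, hypothetical sketch of the pattern used throughout the tutorials and examples, learning maximizes the satisfaction of a knowledge base; here axioms() stands for a user-defined function that returns the aggregated truth degree of all formulas, and the names are illustrative only.

    # Hypothetical sketch: gradient-based training on the satisfiability of a
    # knowledge base. `axioms` and `trainable_variables` are placeholders.
    import tensorflow as tf

    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

    def train_step(axioms, trainable_variables):
        with tf.GradientTape() as tape:
            sat = axioms()      # aggregated truth degree of the knowledge base, in [0,1]
            loss = 1.0 - sat    # maximizing satisfiability = minimizing (1 - sat)
        grads = tape.gradient(loss, trainable_variables)
        optimizer.apply_gradients(zip(grads, trainable_variables))
        return sat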

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

LTN has been developed thanks to active contributions and discussions with the following people (in alphabetical order):

  • Alessandro Daniele (FBK)
  • Artur d’Avila Garcez (City)
  • Benedikt Wagner (City)
  • Emile van Krieken (VU Amsterdam)
  • Francesco Giannini (UniSiena)
  • Giuseppe Marra (UniSiena)
  • Ivan Donadello (FBK)
  • Lucas Bechberger (UniOsnabruck)
  • Luciano Serafini (FBK)
  • Marco Gori (UniSiena)
  • Michael Spranger (Sony AI)
  • Michelangelo Diligenti (UniSiena)
  • Samy Badreddine (Sony AI)
Comments
  • ValueError: mask cannot be scalar.

    When I try to define an ltn.variable, the following error is returned:

        <ipython-input-11-51fc9a0fab79>:5 axioms *
            bb12_relation = ltn.variable("P",features[labels_position=="P"])
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:600 _slice_helper
            return boolean_mask(tensor=tensor, mask=slice_spec)
        C:\Users\Milena\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py:1365 boolean_mask
            raise ValueError("mask cannot be scalar.")
    
        ValueError: mask cannot be scalar.
    

    Based on the code of multiclass-multilabel.ipynb, I declare the first variable in the axioms function, which returns the above error: ltn.variable("P",features[labels_position=="P"])

    opened by MilenaTenorio 9
  • ltnw: run knowledgebase without training should be possible

    import logging; logging.basicConfig(level=logging.INFO)
    
    import logictensornetworks_wrapper as ltnw
    import tensorflow as tf
    
    ltnw.constant("c",[2.1,3])
    ltnw.constant("d",[3.4,1.5])
    ltnw.function("f",4,2,fun_definition=lambda x,y:x-y)
    mu = tf.constant([2.,3.])
    ltnw.predicate("P",2,pred_definition=lambda x:tf.exp(-tf.reduce_sum(tf.square(x-mu))))
    
    ltnw.formula("P(c)")
    
    ltnw.initialize_knowledgebase()
    
    with tf.Session() as sess:
        print(sess.run(ltnw.ask("P(c)")))
        print(sess.run(ltnw.ask("P(d)")))
        print(sess.run(ltnw.ask("P(f(c,d))")))
    

    Throws ValueError: No variables to optimize.

    bug 
    opened by mspranger 3
  • Lambda for functions needs to be implemented using the Functional API of TF

    Here is what I did:

    import logictensornetworks as ltn
    f1 = ltn.Function.Lambda(lambda args: args[0]-args[1])
    c1 = ltn.constant([2.1,3])
    c2 = ltn.constant([4.5,0.8])
    print(f1([c1,c2])) # multiple arguments are passed as a list
    

    And I get this:

    WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'list'> input: [<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[2.1, 3. ]], dtype=float32)>, <tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[4.5, 0.8]], dtype=float32)>]
    Consider rewriting this model with the Functional API.
    tf.Tensor([-2.4  2.2], shape=(2,), dtype=float32)
    

    Here are the versions:

    tensorflow=2.4.0
    ltn = directly from this repo today (24 Jan 2021)
    
    opened by thoth291 2
  • Check of number_of_features_or_feed of ltn.variable

    opened by ivanDonadello 2
  • ltnw.term: evaluating a term after redeclaring its constants, variables or functions

    The implementation of ltnw.term is incompatible with the redeclaration of constants, variables or functions.

    ltnw.term looks up the result value previously stored in the global dictionary ltnw.TERMS rather than reconstructing the term.

    For instance, the code:

    ltnw.variable('?x',[[3.0,5.0],[2.0,6.0],[3.0,9.0]])
    print('1st call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    
    ltnw.variable('?x',[[3.0,10.0],[1.0,6.0]])
    print('2nd call')
    print('value of variable:\n'+str(ltnw.VARIABLES['var_x'].eval()))
    print('value of term:\n'+str(ltnw.term('?x').eval()))
    

    outputs:

    1st call
    value of variable:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    2nd call
    value of variable:
    [[ 3. 10.]
     [ 1.  6.]]
    value of term:
    [[3. 5.]
     [2. 6.]
     [3. 9.]]
    
    opened by sbadredd 2
  • Error in the axioms of the clustering example

    Following issues #17 and #20, the commit 578d7bcaa35c797ac1c94cf322f0a6ec524beaa2 updated the axioms in the clustering example.

    It introduced a typo in the masks. In pseudo-code the rules with masks should be:

    for all x,y s.t. close_threshold > distance(x,y): x,y belong to the same cluster
    for all x,y s.t. distance(x,y) > distant_threshold: x,y belong to different cluster
    

    However, the rules have been written:

    for all x,y s.t.  distance(x,y) > close_threshold: x,y belong to the same cluster
    for all x,y s.t. distant_threshold > distance(x,y) : x,y belong to different cluster
    

    Basically, the operands have been mixed up. This explains why the latest results were not as good as the previous ones. This is easy to fix; the operands just have to be swapped back.

    bug 
    opened by sbadredd 1
  • Add runtime Type Checking when constructing expressions

    Issue #19 defined classes for Term and Formula, following the usual definitions of FOL.

    This can be used to type-check the arguments of various functions:

    • The inputs of predicates and functions are instances of Term,
    • The expressions in connectives and quantifier operations are instances of Formula,
    • The masks in quantifiers are instances of Formula.

    This is already indicated in the type hints. Adding runtime validation would make the API more robust and ensure that the user uses the different LTN classes correctly.
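    A rough sketch of the kind of check this would add (illustrative only; it assumes the Term class defined in issue #19):

    def check_terms(*args):
        # illustrative runtime validation, not the repository's implementation
        for arg in args:
            if not isinstance(arg, Term):
                raise TypeError(f"Expected a Term, got {type(arg).__name__}.")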

    enhancement 
    opened by sbadredd 0
  • Parent classes for Terms and Formulas

    Going further than issue #16, we can define classes for Term and Formula.

    • Variable and Constant would be subclasses of Term
    • The output of a Function is a Term
    • Proposition is a subclass of Formula
    • The output of a Predicate is a Formula, and so is the result of connective and quantifiers operations

    This can in turn be used for type checking the arguments of various functions:

    • The inputs of predicates and functions must be instances of Term
    • The inputs of connective and quantifier operations must be instances of Formula

    This could be useful for giving the user better error messages and making debugging easier.
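    A minimal sketch of the proposed hierarchy (illustrative only, not the repository's implementation):

    class Term: ...
    class Constant(Term): ...
    class Variable(Term): ...

    class Formula: ...
    class Proposition(Formula): ...
    # A Function would return a Term; a Predicate, connective, or quantifier
    # operation would return a Formula.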

    enhancement 
    opened by sbadredd 0
  • Add a constructor for variables made from trainable constants

    A variable can be instantiated using two different types of objects:

    • A value (numpy, python list, ...) that will be fed in a tf.constant (the variable refers to a new object).
    • A tf.Tensor instance that will be used directly as the variable (the variable refers to the same object).

    The latter is useful when the variable denotes a sequence of trainable constants.

    c1 = ltn.constant([2.1,3], trainable=True)
    c2 = ltn.constant([4.5,0.8], trainable=True)
    
    with tf.GradientTape() as tape:
        # Notice that the assignation must be done within a tf.GradientTape.
        # Tensorflow will keep track of the gradients between c1/c2 and x.
        x = ltn.variable("x",tf.stack([c1,c2]))
        res = P2(x)
    tape.gradient(res,c1).numpy() # the tape keeps track of gradients between P2(x), x and c1
    

    The assignment must be done within a tf.GradientTape context. This is explained in the tutorials, but a user could easily miss this information.

    I propose to add a constructor for variables from constants, that must explicitly take the tf.GradientTape instance as an argument. In this way, it will be harder to miss.

    enhancement 
    opened by sbadredd 0
  • Support masks using LTN syntax instead of TensorFlow operations

    To use a guarded quantifier in an LTN sentence, the user must use lambda functions in the middle of traditional LTN syntax. They can also use TensorFlow syntax to write the mask, which adds to the confusion.

    For example, in the MNIST single-digit addition example, we have the following mask:

    exists(...,...,
        mask_vars=[d1,d2,labels_z],
        mask_fn=lambda vars: tf.equal(vars[0]+vars[1],vars[2])
    )
    

    If we wrote the mask in LTN syntax, it would read:

    exists(...,...,
        mask= Equal([Add([d1,d2]),labels_z])
    )
    

    I believe the latter is clearer and more coherent within an LTN expression.

    This implies that the user must define extra LTN symbols for Equal and Add. I believe this is worth it for the sake of clarity. If the user doesn't want to do that, they can still reuse the lambda function inside a Mask predicate:

    Mask = ltn.Predicate.Lambda(lambda vars: tf.equal(vars[0]+vars[1],vars[2]))
    ...
    exists(...,...,
        mask=Mask([d1,d2,labels_z])
    )
    

    The mask is still written using an LTN symbol and doesn't require changing the code much compared to the original approach.

    enhancement 
    opened by sbadredd 0
  • Create classes for Variable, Constant and Proposition

    At the moment, LTN implements most expressions using tf.Tensor objects with some added dynamic attributes.

    For example, for a non-trainable LTN constant, the logic is the following (simplified):

    def constant(value):
        result = tf.constant(value)
        result.active_doms = []
        return result
    

    This makes the system easy to break and difficult to debug. When copying or operating on the constant, the user might not realize that a new tensor is created and the active_doms attribute is lost.

    I propose to separate the logic of LTN from the logic of TensorFlow and to use distinct types. Something like:

    class Constant:
        def __init__(self, value):
            self.tensor = tf.constant(value)
            self.active_doms = []
    

    This implies that LTN predicates and functions will have to be adapted to work with constant.tensor, variable.tensor, ...

    enhancement 
    opened by sbadredd 0
  • Add a ltn.Predicate constructor that takes in a logits model

    Constructors for ltn.Predicate

    The constructor for ltn.Predicate accepts a model that outputs one truth degree in [0,1].

    class ModelThatOutputsATruthDegree(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.relu)
            self.dense2 = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid) # returns one value in [0,1]
    
        def call(self, x):
            x = self.dense1(x)
            return self.dense2(x)
    
    model = ModelThatOutputsATruthDegree()
    P1 = ltn.Predicate(model)
    P1(x) # -> call with a ltn Variable
    

    Issue

    Many models output several values simultaneously. For example, a model for the predicate P2 classifying images x into n classes type_1, ..., type_n will likely output n logits using the same hidden layers.

    Eventually, we would expect to call the corresponding predicate using the syntax P2(x,type). This requires two additional steps:

    1. Transforming the logits into values in [0,1],
    2. Indexing the class using the term type.

    Because this is a common use-case, we implemented a function ltn.utils.LogitsToPredicateModel for convenience. It is used in some of the examples (cf MNIST digit addition). The syntax is:

    logits_model(x) # how to call `logits_model`
    P2 = ltn.Predicate(ltn.utils.LogitsToPredicateModel(logits_model), single_label=True)
    P2([x,type]) # how to call the predicate
    

    It automatically adds a final argument for class indexing and performs a sigmoid or softmax activation depending on the parameter single_label.

    Proposition

    It would be more elegant to have the functionality of creating a predicate from a logits model as a class constructor for ltn.Predicate.

    A suggested syntax is:

    P2 = ltn.Predicate.FromLogits(logits_model, activation_function="softmax", with_class_indexing=True)
    
    • The functionality comes as a new class constructor,
    • The activation function is more explicit than the single_label parameter in ltn.utils.LogitsToPredicateModel,
    • with_class_indexing=False still allows creating predicates in the form P1(x), as mentioned above.

    Changes to the rest of the API

    The proposition adds a new constructor but shouldn't change any other method of ltn.Predicate or any framework method in general.

    enhancement 
    opened by sbadredd 1
  • Weighted connective operators

    Hello,

    In my project, I needed to use connective fuzzy logic operators, so I implemented a class that makes it possible to add weights to classic fuzzy operators, based on this paper: https://www.researchgate.net/publication/2610015_The_Weighting_Issue_in_Fuzzy_Logic

    I think it may be useful to other people, or could even be added to the ltn operators, so here is my code:

    from typing import Callable
    # assumes `ltn` is already imported, e.g. `import logictensornetworks as ltn`

    class WeightedConnective:
        """Class to compute a weighted connective fuzzy operator."""
    
        def __init__(self, single_connective: Callable = ltn.fuzzy_ops.And_Prod()):
            """Initialize WeightedConnective.
    
            Parameters
            ----------
            single_connective : Callable
                Function to compute the binary operation
            """
            self.single_connective = single_connective
    
        def __call__(self, *args: float, weights: list[float] | None = None) -> float:
            """Call function of WeightedConnective.
    
            Parameters
            ----------
            *args : float
                Truth values whose operation should be computed
            weights : list[float] | None
                List of weights for the predicates, None if all predicates should be weighted
                equally, default: None
    
            Returns
            -------
            float:
                Truth value of weighted connective operation between predicates
    
            Raises
            ------
            ValueError
                If no predicate was provided
            ValueError
                If the number of predicates and the number of weights are different
            """
            n = len(args)
            if n == 0:
                raise ValueError("No predicate was found")
            if n == 1:
                return args[0]
            if weights is None:
                weights = [1. / n for _ in range(n)]
            if len(weights) != n:
                raise ValueError(
                    f"Numbers of predicates and weights should be equal : {n} predicates and "
                    f"{len(weights)} weights were found")
    
            s = sum(weights)
            if s != 0:
                weights = [elt / s for elt in weights]
    
            w = max(weights)
            res = (weights[0] / w) * args[0]
            for i, x in enumerate(args):
                if i != 0:
                    res = self.single_connective(res, (weights[i] / w) * args[i])
            return res
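    A possible usage example (my own illustration, assuming plain float truth values and the imports above):

    # weighted conjunction of three truth values with the product t-norm
    weighted_and = WeightedConnective(ltn.fuzzy_ops.And_Prod())
    result = weighted_and(0.9, 0.6, 0.4, weights=[0.5, 0.3, 0.2])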
    
    enhancement 
    opened by maelle101 1
  • Saving LTN model

    Hello,

    I am working on a project using LTN. I train a model with several neural networks (the number varies between executions). Is there an easy way to save and then load an entire LTN model? Or should I call the TensorFlow saving function several times and store other information (for example, which Predicate corresponds to each NN) in a custom way?

    Thanks in advance for any answer, and thanks for this great framework.

    opened by maelle101 3
  • Imbalanced classification

    First, thank you for this great framework. My question is: what is the best way to define variables for imbalanced classification (with a lot of categories), where some categories might be empty in a given batch? Thank you!

    opened by mpourvali 3
  • Allow to permanently `diag` variables

    Diagonal quantification

    Given 2 (or more) variables, ltn.diag makes it possible to express statements about specific pairs (or tuples) of the variables, such that the i-th tuple contains the i-th instances of the variables.

    In simplified pseudo-code, the usual quantification would compute:

    for x_i in x:
        for y_j in y:
            results.append(P(x_i,y_j))
    aggregate(results)
    

    In contrast, diagonal quantification would compute:

    for x_i, y_i in zip(x,y):
        results.append(P(x_i,y_i))
    aggregate(results)
    

    In LTN code, given two variables x1 and x2, we use diagonal quantification as follows:

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    P = ltn.Predicate(...)
    P([x1,x2]) # -> returns 10x10 values
    ltn.diag(x1,x2)
    P([x1,x2]) # -> returns only 10 "zipped" values
    ltn.undiag(x1,x2)
    P([x1,x2]) # -> returns 10x10 values
    

    See also the second tutorial.

    Issue

    At the moment, every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

    Forall(ltn.diag(x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of 10x10 values
    

    However, there are cases where the second (normal) behavior for the two variables x1 and x2 is never useful. Some variables are designed from the start to be used as paired, zipped variables. In that case, forcing the user to re-use the keyword ltn.diag at every quantification is redundant.

    Proposition

    Define a new keyword ltn.diag_lock which can be used once at the instantiation of the variables, and will force the diag behavior in every subsequent quantification. ltn.undiag will not be called after an aggregation.

    x1 = ltn.Variable("x1",np.random.rand(10,2)) # 10 values in R^2
    x2 = ltn.Variable("x2",np.random.rand(10,2)) # 10 values in R^2
    ltn.diag_lock([x1,x2])
    P([x1,x2]) # -> returns only 10 "zipped" values
    Forall((x1,x2), P([x1,x2])) # -> returns an aggregate of only 10 "zipped values"
    Forall((x1,x2), P([x1,x2])) # -> still returns an aggregate of only 10 "zipped values"
    

    Possibly, we can add an ltn.undiag_lock too.

    The implementation details are left to be defined but shouldn't change the rest of the API.

    enhancement 
    opened by sbadredd 0
  • Automated translation of TPTP problems to LTN axioms

    Hello,

    We're trying to automatically translate TPTP problems into axioms computable by LTN. Errors occur when applying the gradient tape in the training step because variables are initialized outside of the tape scope, as described in the tutorial notebooks. Is there by any chance already an implementation (existing or in the works) that translates a logic problem (written in some intermediate language) into LTN-readable axioms?

    Best, Philip

    opened by phjlip 1
Releases (v2.0)