TensorFlow Similarity is a Python package focused on making similarity learning quick and easy.

Overview

TensorFlow Similarity: Metric Learning for Humans

TensorFlow Similarity is a TensorFlow library for similarity learning, also known as metric learning and contrastive learning.

TensorFlow Similarity is still in beta.

Introduction

TensorFlow Similarity offers state-of-the-art algorithms for metric learning and all the necessary components to research, train, evaluate, and serve similarity-based models.

Example of a nearest neighbors search performed on the embeddings generated by a similarity model trained on the Oxford IIIT Pet Dataset.

With TensorFlow Similarity you can train and serve models that find similar items (such as images) in a large corpus of examples. For example, as shown above, you can train a similarity model to find and cluster similar-looking images of cats and dogs from the Oxford IIIT Pet Dataset while training on only a few of the classes. To train your own similarity model, see this notebook.

Metric learning differs from traditional classification in its objective: the model learns to minimize the distance between similar examples and maximize the distance between dissimilar examples, in a supervised or self-supervised fashion. Either way, TensorFlow Similarity provides the necessary losses, metrics, samplers, visualizers, and indexing sub-system to make this quick and easy.

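For intuition, here is a minimal sketch of a triplet-style objective; this is illustrative only, not the library's implementation:

import tensorflow as tf

# Illustrative triplet-style objective (not the library's implementation):
# pull the positive toward the anchor and push the negative away from it by
# at least `margin`.
def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    d_neg = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.maximum(d_pos - d_neg + margin, 0.0)
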
Currently, TensorFlow Similarity supports supervised training. In future releases, it will support semi-supervised and self-supervised training.

To learn more about the benefits of using similarity training, you can check out the blog post.

What's new

For previous changes, see the release changelog.

Getting Started

Installation

Use pip to install the library:

pip install tensorflow_similarity

Documentation

The detailed and narrated notebooks are a good way to get started with TensorFlow Similarity. There is likely to be one that is similar to your data or your problem (if not, let us know). You can start working with the examples immediately in Google Colab by clicking the Google Colab icon.

For more information about specific functions, you can check the API documentation.

For contributing to the project, please check out the contribution guidelines.

Minimal Example: MNIST similarity

Here is a bare-bones example demonstrating how to train a TensorFlow Similarity model on the MNIST data. This example illustrates some of the main components provided by TensorFlow Similarity and how they fit together. Please refer to the hello_world notebook for a more detailed introduction.

Preparing data

TensorFlow Similarity provides data samplers for various dataset types that balance the batches to ensure smoother training. In this example, we use the multi-shot sampler, which integrates directly with the TensorFlow Datasets catalog.

from tensorflow_similarity.samplers import TFDatasetMultiShotMemorySampler

# Data sampler that generates balanced batches from MNIST dataset
sampler = TFDatasetMultiShotMemorySampler(dataset_name='mnist', classes_per_batch=10)

Building a Similarity model

Building a TensorFlow Similarity model is similar to building a standard Keras model, except that the output layer is usually a MetricEmbedding() layer that enforces L2 normalization, and the model is instantiated as the specialized subclass SimilarityModel(), which supports additional functionality.

from tensorflow.keras import layers
from tensorflow_similarity.layers import MetricEmbedding
from tensorflow_similarity.models import SimilarityModel

# Build a Similarity model using standard Keras layers
inputs = layers.Input(shape=(28, 28, 1))
x = layers.experimental.preprocessing.Rescaling(1/255)(inputs)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation='relu')(x)
outputs = MetricEmbedding(64)(x)

# Build a specialized Similarity model
model = SimilarityModel(inputs, outputs)

Training model via contrastive learning

To output metric embeddings that are searchable via approximate nearest neighbor search, the model needs to be trained using a similarity loss. Here we use MultiSimilarityLoss(), one of the most efficient loss functions.

from tensorflow_similarity.losses import MultiSimilarityLoss

# Train Similarity model using contrastive loss
model.compile('adam', loss=MultiSimilarityLoss())
model.fit(sampler, epochs=5)

Building the image index and querying it

Once the model is trained, reference examples must be indexed via the model index API to make them searchable. After indexing, you can use the model lookup API to search the index for the K most similar items.

from tensorflow_similarity.visualization import viz_neigbors_imgs

# Index 100 embedded MNIST examples to make them searchable
sx, sy = sampler.get_slice(0, 100)
model.index(x=sx, y=sy, data=sx)

# Find the top 5 most similar indexed MNIST examples for a given example
qx, qy = sampler.get_slice(3713, 1)
nns = model.single_lookup(qx[0])

# Visualize the query example and its top 5 neighbors
viz_neigbors_imgs(qx[0], qy[0], nns)

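The index can also be persisted and restored between sessions; a minimal sketch, assuming the save_index/load_index model methods and a placeholder path:

# Persist the index for later serving; 'models/mnist_index' is a placeholder
# path, and save_index/load_index are assumed from the model API
model.save_index('models/mnist_index')

# Restore it in a new session
model.load_index('models/mnist_index')
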
Supported Algorithms

Supervised Losses

  • Triplet Loss
  • PN Loss
  • Multi Sim Loss
  • Circle Loss

Metrics

TensorFlow Similarity offers many of the most common metrics used for classification and retrieval evaluation, including:

Name          Type
Precision     Classification
Recall        Classification
F1 Score      Classification
Recall@k      Retrieval
Binary NDCG   Retrieval

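For intuition, recall@k counts a query as a hit when at least one of its k nearest neighbors shares the query's label. A framework-free sketch (recall_at_k is a hypothetical helper, not a library function):

import numpy as np

# Hypothetical helper, not part of the library: a query is a hit if any of its
# k nearest neighbors carries the same label as the query.
def recall_at_k(query_labels, neighbor_labels, k):
    hits = [q in n[:k] for q, n in zip(query_labels, neighbor_labels)]
    return float(np.mean(hits))
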
Citing

Please cite this reference if you use any part of TensorFlow Similarity in your research:

@article{EBSIM21,
  title={TensorFlow Similarity: A Usable, High-Performance Metric Learning Library},
  author={Elie Bursztein and James Long and Shun Lin and Owen Vallis and Francois Chollet},
  journal={Fixme},
  year={2021}
}

Disclaimer

This is not an official Google product.

Comments
  • SimClr training loss not decreasing for many many epochs in unsupervised_hello_world.ipynb

Just running the example verbatim from the notebook unsupervised_hello_world.ipynb for SimCLR. The train loss seems to get stuck around 6.9287. There is no evidence of collapse, with proj_std around 0.02 for both train and val. The binary accuracy is not great at ~0.5.

Has anyone run the same notebook and gotten the same result? I wonder if there's a bug in the implementation (I haven't looked at the code in detail yet).

    Epoch 728/800
87/87 [==============================] - ETA: 0s - loss: 6.9287 - proj_std: 0.0214 - binary_accuracy: 0.4910
    87/87 [==============================] - 49s 561ms/step - loss: 6.9287 - proj_std: 0.0214 - val_loss: 6.9288 - val_proj_std: 0.0211 - binary_accuracy: 0.4910
    
    
    opened by kechan 17
  • Augmentor for Barlow Twins

    opened by dewball345 11
  • ResNet18 and ResNet50 have different rescaling/preprocessing implementations?

    I noted ResNet18 has this line of code, which rescales pixel values to [0, 1] (or [-1, 1], I don't remember which, but close enough).

    x = imagenet_utils.preprocess_input(
            x, data_format=data_format, mode=preproc_mode
        )
    

    But rescaling (1/255) is probably not done for ResNet50. I instantiated it, used .summary() and get_layer().get_summary(), and don't see any layers.Rescaling(...), while EfficientNet has this.

    What's more confusing is that in the unsupervised_hello_world notebook (https://colab.research.google.com/github/tensorflow/similarity/blob/master/examples/unsupervised_hello_world.ipynb), there is a comment:

    "This augmenter returns values between [0, 1], but our ResNet backbone model expects the values to be between [0, 255] so we also scale the final output values in the augmentation function."

    If what I observed is true, all is well with ResNet18 but not quite for ResNet50.

    In fact, I noticed there may be no rescaling inside ResNet50 when I tried similarity training after swapping out EffNet and noticed a drop in accuracy. After adding in layers.Rescaling, it regained the accuracy difference.

    I also understand ResNet50 rescaling may not have been done in the original Keras implementation, before TF 2.0 provided rescaling layers, so I am not sure if this is in keeping with convention. But I suspect you could have used imagenet_utils.preprocess_input(...) to do the same for ResNet50? Forgetting to do 1/255 is a common mistake (at least for me) whenever a new project gets started. In general, I have seen people do it either in tf.data.Dataset or build it inside the model.

    So I am not sure if the implementations are as intended here. I also wish for a boolean model parameter that decides whether to perform rescaling inside the model.

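    A minimal sketch of the workaround described above, assuming `backbone` stands in for a ResNet50-based model and inputs arrive in [0, 255]:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Hypothetical sketch: put an explicit rescaling step in front of a backbone
    # that does not rescale internally, so [0, 255] inputs reach it as [0, 1].
    # `backbone` is a stand-in for the ResNet50-based similarity model.
    backbone = tf.keras.applications.ResNet50(include_top=False, weights=None, pooling='avg')
    inputs = layers.Input(shape=(224, 224, 3))
    x = layers.experimental.preprocessing.Rescaling(1 / 255)(inputs)
    outputs = backbone(x)
    model = tf.keras.Model(inputs, outputs)
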
    opened by kechan 10
  • Index - Single Input

    Hello,

    The SimilarityModel.index method seems to require a batch size greater than 1, as it uses the add_batch method from the Indexer, which appears to cause some strided slice errors.

    Could we get an update to index to check for a batch size of one and use the add method from Indexer instead of add_batch? Or, given there's already a single_lookup, a new single_index method if that's preferred?

    Also, as a side note, is there a reason that SimilarityModel._index is limited to either the compile or load_index methods? I'm using the sub-classing API, and due to custom training loops accommodating dynamic sizes, I never need to call compile, so I had to manually set _index when running through the first time without a saved one.

    Thank you for your time :)

    (Note: I'm happy to make a small PR to fix it!)

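    Until such a method exists, a possible workaround is to add a batch dimension to the single example before indexing; a sketch, where x_one and y_one are placeholder single-example arrays:

    import numpy as np

    # Hypothetical workaround: wrap the lone example in a batch of size one so
    # the existing batch-oriented index() path can handle it. x_one and y_one
    # are placeholders for a single example and its label.
    xb = np.expand_dims(x_one, axis=0)
    yb = np.expand_dims(y_one, axis=0)
    model.index(x=xb, y=yb, data=xb)
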
    component:model component:indexing 
    opened by phillips96 9
  • Equal val_mean_pos and val_mean_neg during training

    Hi everyone!

    After upgrading tfsim to 0.15.2 and starting to use EfficientNetSim, I get the same mean_pos and mean_neg on the validation set:

    Epoch 2/128
    256/256 [==============================] - 57s 225ms/step - loss: 1.5161 - mean_pos: 0.0641 - mean_neg: 0.1761 - val_loss: 1.0291 - val_mean_pos: 0.0828 - val_mean_neg: 0.0828
    Epoch 3/128
    256/256 [==============================] - 57s 222ms/step - loss: 1.3810 - mean_pos: 0.0529 - mean_neg: 0.1934 - val_loss: 0.8178 - val_mean_pos: 0.0771 - val_mean_neg: 0.0771
    Epoch 4/128
    256/256 [==============================] - 56s 218ms/step - loss: 1.2050 - mean_pos: 0.0454 - mean_neg: 0.2100 - val_loss: 0.7022 - val_mean_pos: 0.0855 - val_mean_neg: 0.0855
    Epoch 5/128
    256/256 [==============================] - 56s 217ms/step - loss: 1.0587 - mean_pos: 0.0417 - mean_neg: 0.2269 - val_loss: 0.6784 - val_mean_pos: 0.0712 - val_mean_neg: 0.0712
    Epoch 6/128
    256/256 [==============================] - 56s 217ms/step - loss: 0.9288 - mean_pos: 0.0385 - mean_neg: 0.2400 - val_loss: 0.5951 - val_mean_pos: 0.0586 - val_mean_neg: 0.0586
    Epoch 7/128
    256/256 [==============================] - 56s 217ms/step - loss: 0.8379 - mean_pos: 0.0359 - mean_neg: 0.2513 - val_loss: 0.4841 - val_mean_pos: 0.0930 - val_mean_neg: 0.0930
    Epoch 8/128
    256/256 [==============================] - 56s 217ms/step - loss: 0.6939 - mean_pos: 0.0347 - mean_neg: 0.2650 - val_loss: 0.5244 - val_mean_pos: 0.0813 - val_mean_neg: 0.0813
    Epoch 9/128
    256/256 [==============================] - 56s 219ms/step - loss: 0.6005 - mean_pos: 0.0333 - mean_neg: 0.2800 - val_loss: 0.4762 - val_mean_pos: 0.0842 - val_mean_neg: 0.0842
    ...
    

    Could you please suggest a possible reason for this behavior?

    I use this model:

    model = tfsim.architectures.EfficientNetSim(
        input_shape=(224, 224, 3),
        embedding_size=128,
        weights='imagenet',
        trainable='partial',
        l2_norm=True,
        include_top=True,
        pooling='max',    # Can change to use `gem` -> GeneralizedMeanPooling2D
        gem_p=3.0,        # Increase the contrast between activations in the feature map.
    )
    
    model.compile(
        optimizer=tf.keras.optimizers.Adam(0.0001),
        loss=tfsim.losses.MultiSimilarityLoss(distance='cosine'),
        metrics=[
            tfsim.training_metrics.avg_pos(distance='cosine'),
            tfsim.training_metrics.avg_neg(distance='cosine'),
        ],
    )
    
    history = model.fit(sampler, epochs=128, validation_data=(x_test, y_test))
    

    The test set contains 60 classes and 24 samples per class.

    opened by DanilZittser 8
  • The supervised_visualization notebook is running very slowly on Google Colab (with GPU)

    I just started exploring this library with my own dataset, and I found training to be very slow. I haven't profiled why it is slow compared with my vanilla classification workflow, but I tried to test with supervised_visualization.ipynb.

    One thing I noticed is that the copy on GitHub seems to say it should take about 70 ms/step. I ran the same notebook without a single change on Google Colab with a GPU (P100), and it was excruciatingly slow at 10 s/step. What should I check?

    So I suspect the slowness I saw with my own dataset (also trained on a Google Colab GPU) is related to what I saw in supervised_visualization.ipynb.

    Please help.

    type:bug component:samplers 
    opened by kechan 8
  • Resolves Error while using projector

    Since the new major release of Bokeh, version 3.0, the plot_width (and plot_height) properties have been removed. They have been replaced by the standard width and height for all layout-able models, according to the Migration Guide. This minor update fixes the error generated while using the projector. A recently closed issue in the official Bokeh repository can be found here. The update was successfully tested in a notebook.

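    For reference, the change amounts to renaming the sizing properties; a sketch assuming a plain Bokeh figure call:

    from bokeh.plotting import figure

    # Bokeh < 3.0 (removed):  p = figure(plot_width=600, plot_height=600)
    # Bokeh >= 3.0:
    p = figure(width=600, height=600)
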
    opened by ZohebAbai 7
  • distributed training support

    When using distributed training, the SimCLR loss should be calculated on all samples across GPUs, as done here: https://github.com/google-research/simclr/blob/master/objective.py. Is it planned to add this functionality?

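    A sketch of the cross-replica gathering this would require, assuming execution inside a replica context under tf.distribute.MirroredStrategy, with local_embeddings as a placeholder for the per-GPU projections:

    import tensorflow as tf

    # Hypothetical sketch: gather projections from every replica so the loss
    # sees negatives from all GPUs, as in the referenced SimCLR implementation.
    # `local_embeddings` is a placeholder for the per-replica projections.
    replica_ctx = tf.distribute.get_replica_context()
    global_embeddings = replica_ctx.all_gather(local_embeddings, axis=0)
    # ...then compute the SimCLR loss against global_embeddings instead of the
    # local batch only.
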
    type:bug component:losses 
    opened by yonigottesman 6
  • SimCLR Contrastive model save gives 'EagerTensor is not JSON serializable'

    Versions: TensorFlow 2.8.0, TensorFlow Similarity 0.15.4

    How to repro:

    Run unsupervised_hello_world.ipynb in Colab, choosing 'simclr' as ALGORITHM. Execute all the cells required to instantiate a contrastive_model. Without training it, just skip to this cell:

    contrastive_model.save(DATA_PATH / 'models' / f'trained_model')

    Errors

    /usr/local/lib/python3.7/dist-packages/tensorflow_similarity/models/contrastive_model.py in save(self, filepath, save_index, compression, overwrite, include_optimizer, save_format, signatures, options, save_traces)
        535             base_config = self.get_config()
        536             config = json.dumps(
    --> 537                 {**metadata, **base_config}, cls=NumpyFloatValuesEncoder
        538             )
        539             json.dump(config, f)

    /usr/local/lib/python3.7/dist-packages/tensorflow_similarity/models/contrastive_model.py in default(self, obj)
         51         if isinstance(obj, np.float32):
         52             return float(obj)
    ---> 53         return json.JSONEncoder.default(self, obj)
         54
         55

    /usr/lib/python3.7/json/encoder.py in default(self, o)
        177
        178         """
    --> 179         raise TypeError(f'Object of type {o.__class__.__name__} '
        180                         f'is not JSON serializable')
        181

    TypeError: Object of type EagerTensor is not JSON serializable
    
    type:bug component:losses 
    opened by kechan 6
  • Google Summer of Code

    This work was part of my Google Summer of Code project. The following are the major contributions of the project.

    • Addition of multiple negatives ranking loss
    • Addition of an example notebook for fine-tuning CLIP using multiple negatives ranking loss
    component:losses 
    opened by abhisharsinha 5
  • Complete benchmarking pipeline - dataset creation, training, evaluation

    Benchmarking includes:

    • generating the dataset - so far I have generated cars196. I had a memory problem loading the entire dataset in RAM, and also had an issue putting 23 GB total in storage, so I rewrote it using TFRecords. However, it supports a sharded npz implementation (where each shard is sub-sharded) and the original full npz.
    • training with the dataset - I have trained CircleLoss, PNLoss, and MultiSimilarityLoss
    • evaluation - get results from model.calibrate as well as model.evaluate_retrieval

    Benchmarking metrics are in JSON form and can be generated yourself by following the README. My test results are in the README as well.

    HOWEVER, there seems to be an issue with the code. I'm guessing that it has to do with the model architectures and not the losses. It almost looks like the model is reset after each epoch, so I'm thinking that somewhere gradients aren't getting passed/updated.

    I'm going to try making a dummy model from scratch (like in the MNIST tutorial) and see if that works.

    opened by dewball345 5
  • Request for Sampler analogue like `tf.keras.utils.image_dataset_from_directory`

    tf.keras.utils.image_dataset_from_directory takes a directory structured like this:

    main_directory/
    ...class_a/
    ......a_image_1.jpg
    ......a_image_2.jpg
    ...class_b/
    ......b_image_1.jpg
    ......b_image_2.jpg
    

    and returns a Dataset object. I have a similar dataset and I'd love to be able to use a Sampler on it.

    I've already got most of my own implementation but I'm working on optimizing some of the code, as I am running into memory usage vs disk read speed tradeoff concerns. So with permission, I'd love to also claim this issue so I can contribute what I've written, and maybe get someone more experienced to look it over.

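    In the meantime, a possible sketch that materializes such a directory into memory and feeds it to the generic MultiShotMemorySampler; paths and sizes are placeholders, and the approach assumes the data fits in RAM:

    import numpy as np
    import tensorflow as tf
    from tensorflow_similarity.samplers import MultiShotMemorySampler

    # Load the class-per-folder directory, then unbatch to individual examples.
    # 'main_directory' and the image size are placeholders.
    ds = tf.keras.utils.image_dataset_from_directory(
        'main_directory', image_size=(224, 224), batch_size=32).unbatch()

    # Materialize the examples into arrays for the in-memory sampler
    x = np.stack([img.numpy() for img, label in ds])
    y = np.array([int(label.numpy()) for img, label in ds])

    sampler = MultiShotMemorySampler(x, y, classes_per_batch=10)
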
    opened by Ejjaffe 0
  • [REQUEST] TFDatasetMultiShotMemorySampler for custom datasets

    Hi, I am testing different dataflows for training. I have tested using the sampler and using a dataset (via tf.keras.utils.image_dataset_from_directory), and I have found that loading the data and feeding it to the sampler ends up with a very different max batch size on the same GPU: 20 in one case, over 30 in the other. The dataset is the one that can take the most, but its data is not divided well per batch. So I want to try the dataset as the input for the memory sampler, but the current function is made to only download a non-custom dataset. I will try modifications to make it work, but I don't think my code will be generic, and I haven't used overloaded functions before, so I leave this as a request that should be simple to implement.

    opened by Lunatik00 0
  • Create Contrastive Sampler / Utilities for constructing datasets.

    Constructing datasets for the contrastive model is currently ad hoc and could benefit from some utility functions. For example:

    # Compute the indices for query, index, val, and train splits
    query_idxs, index_idxs, val_idxs, train_idxs = [], [], [], []
    
    for cid in range(ds_info.features["label"].num_classes):
        idxs = tf.random.shuffle(tf.where(y_raw_train == cid))
        idxs = tf.reshape(idxs, (-1,))
        query_idxs.extend(idxs[:100])  # 100 query examples per class
        index_idxs.extend(idxs[100:200])  # 100 index examples per class
        val_idxs.extend(idxs[200:300])  # 100 validation examples per class
        train_idxs.extend(idxs[300:])  # The remaining are used for training
    
    
    random.shuffle(query_idxs)
    random.shuffle(index_idxs)
    random.shuffle(val_idxs)
    random.shuffle(train_idxs)
    
    
    def create_split(idxs: list) -> tuple:
        x, y = [], []
        for idx in idxs:
            x.append(x_raw_train[int(idx)])
            y.append(y_raw_train[int(idx)])
        return tf.convert_to_tensor(np.array(x), dtype=tf.float32), tf.convert_to_tensor(
            np.array(y), dtype=tf.int64
        )
    
    
    x_query, y_query = create_split(query_idxs)
    x_index, y_index = create_split(index_idxs)
    x_val, y_val = create_split(val_idxs)
    x_train, y_train = create_split(train_idxs)
    
    type:feature component:samplers 
    opened by owenvallis 0
  • Not benefiting from checkpointing

    Hello,

    I save checkpoints with:

    checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
      filepath = checkpoint_path + '/epoch-{epoch:02d}/',
      monitor = 'val_loss',
      save_freq = 'epoch',
      save_weights_only = False,
      save_best_only = False,
      mode = 'auto')
    

    After loading the latest checkpoint and continuing training, I would expect the loss value to be around the loss value in the last checkpoint.

        model = tf.keras.models.load_model(
            model_path,
            custom_objects={"SimilarityModel": tfsim.models.SimilarityModel,
                            'MyOptimizer': tfa.optimizers.RectifiedAdam})
    
        model.load_index(model_path)
    
        model.fit(
            datasampler,
            callbacks = callbacks,
            epochs = args.epochs,
            initial_epoch=initial_epoch_number,
            steps_per_epoch = N_TRAIN_SAMPLES ,
            verbose=2
        )
    
    

    However, the loss value does not continue from where it left off. It looks like training is simply starting from scratch and not benefiting from the checkpoints.

    type:bug component:model component:losses 
    opened by mstfldmr 6
  • Migrate TFA usage to core keras

    SyncBatchNormalization is now a core Keras API.

    We can reference: keras.layers.experimental.SyncBatchNormalization

    https://source.corp.google.com/piper///depot/google3/third_party/py/keras/layers/normalization/batch_normalization.py;l=1089

    opened by LukeWood 0
  • [FEATURE REQUEST] use of dataset in tfsim.callbacks.EvalCallback

    Hi, I have a relatively big dataset considering the available RAM. I currently have access to machines that can handle the dataset, so that is not a problem for me, but since the RAM use is high I checked whether there was an implementation that accepts a dataset (tf.data.Dataset, the same way it can be an input for the model.fit() function), and there wasn't. It could help people with fewer compute resources use this function with their datasets. (I read the dataset using tf.keras.utils.image_dataset_from_directory(); it can be batched or unbatched.)

    opened by Lunatik00 1
Releases(v0.16)
  • v0.16(May 27, 2022)

    Added

    • Cross-batch memory (XBM). Thanks @chjort
    • VicReg Loss - Improvement of Barlow Twins. Thanks @dewball345
    • Add augmenter function for Barlow Twins. Thanks @dewball345

    Changed

    • Simplified MetricEmbedding layer. Function tracing and serialization are better supported now.
    • Refactor image augmentation modules into separate utils modules to help make them more composable. Thanks @dewball345
    • GeneralizedMeanPooling layers' default value for P is now 3.0. This better aligns with the value in the paper.
    • EvalCallback now supports split validation callback. Thanks @abhisharsinha
    • Distance and losses refactor. Refactor distances call signature to now accept query and key inputs instead of a single set of embeddings or labels. Thanks @chjort

    Fixed

    • Fix TFRecordDatasetSampler to ensure the correct number of examples per class per batch. Deterministic is now set to True and we have updated the docstring to call out the requirements for the tf record files.
    • Removed unneeded tf.function and register_keras_serializable decorators.
    • Refactored the model index attribute to raise a more informative AttributeError if the index does not exist.
    • Freeze all BatchNormalization layers in architectures when loading weights.
    • Fix bug in losses.utils.LogSumExp(). tf.math.log(1 + x) should be tf.math.log(tf.math.exp(-my_max) + x). This is needed to properly account for removing the row-wise max before computing the logsumexp (see the sketch after this list).
    • Fix multisim loss offsets. The tfsim version of multisim uses distances instead of the inner product. However, multisim requires that we "center" the pairwise distances around 0. Here we add a new center param, which we set to 1.0 for cosine distance. Additionally, we also flip the lambda (lmda) param to add the threshold to the values instead of subtracting it. These changes will help improve the pos and neg weighting in the log1psumexp.
    • Fix nmslib save and load. nmslib requires a string path and will only read and write to local files. In order to support writing to a remote file path, we first write to a local tmp dir and then write that to the user provided path using tf.io.gfile.GFile.
    • Fix serialization of Simclr params in get_config()
    • Other fixes and improvements...
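    A sketch of the corrected computation described in the LogSumExp fix above, showing log(1 + sum(exp(x))) with the row-wise max removed for numerical stability (log_1p_sum_exp is a hypothetical helper, not the library's code):

    import tensorflow as tf

    # Since log(1 + sum(exp(x))) = max + log(exp(-max) + sum(exp(x - max))),
    # the exp(-max) term accounts for the removed row-wise max.
    def log_1p_sum_exp(x):
        my_max = tf.reduce_max(x, axis=1, keepdims=True)
        summed = tf.reduce_sum(tf.math.exp(x - my_max), axis=1, keepdims=True)
        return my_max + tf.math.log(tf.math.exp(-my_max) + summed)
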
    Source code(tar.gz)
    Source code(zip)
  • v0.15(Jan 21, 2022)

    This release adds self-supervised training support.

    Added

    • Refactored Augmenters to be a class
    • Added SimCLRAugmenter
    • Added SiamSiamLoss()
    • Added SimCLR Loss
    • Added ContrastiveModel() for self-supervised training
    • Added encoder_dev() metrics as suggested in SimSiam to detect collapse
    • Added visualize_views() to see views side by side
    • Added self-supervised hello world narrated notebook

    Changed

    • Remove augmentation argument from architectures as the augmentation arg could lead to issues when saving the model or training on TPU.
    • Removed RandAugment which is not used directly by the package and causes issues with TF 2.8+
    Source code(tar.gz)
    Source code(zip)
  • v0.14(Oct 9, 2021)

  • v0.13(Sep 13, 2021)

Resources complementing the Machine Learning course taught at the Faculty of Mathematics and Informatics, part of Sofia University.

Machine Learning and Data Mining, Summer 2021-2022 How to learn data science and machine learning? Programming. Learn Python. Basic Statistics. Take a

Simeon Hristov 8 Oct 04, 2022
MCMC samplers for Bayesian estimation in Python, including Metropolis-Hastings, NUTS, and Slice

Sampyl May 29, 2018: version 0.3 Sampyl is a package for sampling from probability distributions using MCMC methods. Similar to PyMC3 using theano to

Mat Leonard 304 Dec 25, 2022
Generative Art Using Neural Visual Grammars and Dual Encoders

Generative Art Using Neural Visual Grammars and Dual Encoders Arnheim 1 The original algorithm from the paper Generative Art Using Neural Visual Gramm

DeepMind 231 Jan 05, 2023
Official Pytorch implementation of "Learning Debiased Representation via Disentangled Feature Augmentation (Neurips 2021, Oral)"

Learning Debiased Representation via Disentangled Feature Augmentation (Neurips 2021, Oral): Official Project Webpage This repository provides the off

Kakao Enterprise Corp. 68 Dec 17, 2022
Simple-Neural-Network From Scratch in Python

Simple-Neural-Network From Scratch in Python This is a simple Neural Network created without any Machine Learning Libraries. The only dependencies are

Aum Shah 1 Dec 28, 2021
Extracts essential Mediapipe face landmarks and arranges them in a sequenced order.

simplified_mediapipe_face_landmarks Extracts essential Mediapipe face landmarks and arranges them in a sequenced order. The default 478 Mediapipe face

Irfan 13 Oct 04, 2022
Deep Halftoning with Reversible Binary Pattern

Deep Halftoning with Reversible Binary Pattern ICCV Paper | Project Website | BibTex Overview Existing halftoning algorithms usually drop colors and f

Menghan Xia 17 Nov 22, 2022
Github Traffic Insights as Prometheus metrics.

github-traffic Github Traffic collects your repository's traffic data and exposes it as Prometheus metrics. Grafana dashboard that displays the metric

Grafana Labs 34 Oct 27, 2022
Implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using the PyTorch framework.

YOLOv4-large This is the implementation of "Scaled-YOLOv4: Scaling Cross Stage Partial Network" using the PyTorch framework. YOLOv4-CSP YOLOv4-tiny YOLOv4-

Kin-Yiu, Wong 2k Jan 02, 2023
🧮 Matrix Factorization for Collaborative Filtering is just Solving an Adjoint Latent Dirichlet Allocation Model after All

Accompanying source code to the paper "Matrix Factorization for Collaborative Filtering is just Solving an Adjoint Latent Dirichlet Allocation Model A

Florian Wilhelm 39 Dec 03, 2022
Code to accompany the paper "Finding Bipartite Components in Hypergraphs", which is published in NeurIPS'21.

Finding Bipartite Components in Hypergraphs This repository contains code to accompany the paper "Finding Bipartite Components in Hypergraphs", publis

Peter Macgregor 5 May 06, 2022
Python project to take sound as input and output as RGB + Brightness values suitable for DMX

sound-to-light Python project to take sound as input and output as RGB + Brightness values suitable for DMX Current goals: Get one pixel working: Vary

Bobby Cox 1 Nov 17, 2021
Official implementation of "Implicit Neural Representations with Periodic Activation Functions"

Implicit Neural Representations with Periodic Activation Functions Project Page | Paper | Data Vincent Sitzmann*, Julien N. P. Martel*, Alexander W. B

Vincent Sitzmann 1.4k Jan 06, 2023
Code repository for Semantic Terrain Classification for Off-Road Autonomous Driving

BEVNet Datasets Datasets should be put inside data/. For example, data/semantic_kitti_4class_100x100. Training BEVNet-S Example: cd experiments bash t

(Brian) JoonHo Lee 24 Dec 12, 2022
This is the official source code of "BiCAT: Bi-Chronological Augmentation of Transformer for Sequential Recommendation".

BiCAT This is our TensorFlow implementation for the paper: "BiCAT: Sequential Recommendation with Bidirectional Chronological Augmentation of Transfor

John 15 Dec 06, 2022
The repository contains reproducible PyTorch source code of our paper Generative Modeling with Optimal Transport Maps, ICLR 2022.

Generative Modeling with Optimal Transport Maps The repository contains reproducible PyTorch source code of our paper Generative Modeling with Optimal

Litu Rout 30 Dec 22, 2022
Official pytorch implementation of the paper: "SinGAN: Learning a Generative Model from a Single Natural Image"

SinGAN Project | Arxiv | CVF | Supplementary materials | Talk (ICCV`19) Official pytorch implementation of the paper: "SinGAN: Learning a Generative M

Tamar Rott Shaham 3.2k Dec 25, 2022
Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples

Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples This repository is the official implementation of paper [Qimera: Data-free Q

Kanghyun Choi 21 Nov 03, 2022
Gems & Holiday Package Prediction

Predictive_Modelling Gems & Holiday Package Prediction This project is based on 2 cases studies : Gems Price Prediction and Holiday Package prediction

Avnika Mehta 1 Jan 27, 2022
[NeurIPS 2021] Garment4D: Garment Reconstruction from Point Cloud Sequences

Garment4D [PDF] | [OpenReview] | [Project Page] Overview This is the codebase for our NeurIPS 2021 paper Garment4D: Garment Reconstruction from Point

Fangzhou Hong 112 Dec 23, 2022