The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.

Overview


News

March 3: v0.9.97 has various bug fixes and improvements. See the release notes

January 12: v0.9.96 greatly increases the flexibility of the testers and AccuracyCalculator. See the release notes

December 10: v0.9.95 includes a new tuple miner, BatchEasyHardMiner. See the release notes

Documentation

Google Colab Examples

See the examples folder for notebooks you can download or run on Google Colab.

PyTorch Metric Learning Overview

This library contains 9 modules, each of which can be used independently within your existing codebase, or combined for a complete train/test workflow.

(Figure: high-level overview of the library's modules)

How loss functions work

Using losses and miners in your training loop

Let’s initialize a plain TripletMarginLoss:

from pytorch_metric_learning import losses
loss_func = losses.TripletMarginLoss()

To compute the loss in your training loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size.

# your training loop
for i, (data, labels) in enumerate(dataloader):
	optimizer.zero_grad()
	embeddings = model(data)
	loss = loss_func(embeddings, labels)
	loss.backward()
	optimizer.step()

The TripletMarginLoss computes all possible triplets within the batch, based on the labels you pass into it. Anchor-positive pairs are formed by embeddings that share the same label, and anchor-negative pairs are formed by embeddings that have different labels.
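
For instance, here is a minimal, self-contained sketch; the batch size, embedding size, and labels are arbitrary placeholders:

import torch
from pytorch_metric_learning import losses

loss_func = losses.TripletMarginLoss()

# indices 0 and 1 share label 0, and indices 2 and 3 share label 1,
# so triplets such as (anchor=0, positive=1, negative=2) are formed automatically
embeddings = torch.randn(4, 128)
labels = torch.tensor([0, 0, 1, 1])
loss = loss_func(embeddings, labels)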

Sometimes it can help to add a mining function:

from pytorch_metric_learning import miners, losses
miner = miners.MultiSimilarityMiner()
loss_func = losses.TripletMarginLoss()

# your training loop
for i, (data, labels) in enumerate(dataloader):
	optimizer.zero_grad()
	embeddings = model(data)
	hard_pairs = miner(embeddings, labels)
	loss = loss_func(embeddings, labels, hard_pairs)
	loss.backward()
	optimizer.step()

In the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.

Customizing loss functions

Loss functions can be customized using distances, reducers, and regularizers. In the diagram below, a miner finds the indices of hard pairs within a batch. These are used to index into the distance matrix, computed by the distance object. For this diagram, the loss function is pair-based, so it computes a loss per pair. In addition, a regularizer has been supplied, so a regularization loss is computed for each embedding in the batch. The per-pair and per-element losses are passed to the reducer, which (in this diagram) only keeps losses with a high value. The averages are computed for the high-valued pair and element losses, and are then added together to obtain the final loss.

(Figure: high-level overview of a customized loss function)

Now here's an example of a customized TripletMarginLoss:

from pytorch_metric_learning.distances import CosineSimilarity
from pytorch_metric_learning.reducers import ThresholdReducer
from pytorch_metric_learning.regularizers import LpRegularizer
from pytorch_metric_learning import losses
loss_func = losses.TripletMarginLoss(distance=CosineSimilarity(),
                                     reducer=ThresholdReducer(high=0.3),
                                     embedding_regularizer=LpRegularizer())

This customized triplet loss has the following properties:

  • The loss will be computed using cosine similarity instead of Euclidean distance.
  • All triplet losses that are higher than 0.3 will be discarded.
  • The embeddings will be L2 regularized.

Using loss functions for unsupervised / self-supervised learning

The TripletMarginLoss is an embedding-based or tuple-based loss. This means that internally, there is no real notion of "classes". Tuples (pairs or triplets) are formed at each iteration, based on the labels it receives. The labels don't have to represent classes. They simply need to indicate the positive and negative relationships between the embeddings. Thus, it is easy to use these loss functions for unsupervised or self-supervised learning.

For example, the code below is a simplified version of the augmentation strategy commonly used in self-supervision. The dataset does not come with any labels. Instead, the labels are created in the training loop, solely to indicate which embeddings are positive pairs.

# your training for-loop
for i, data in enumerate(dataloader):
	optimizer.zero_grad()
	embeddings = your_model(data)
	augmented = your_model(your_augmentation(data))
	labels = torch.arange(embeddings.size(0))

	embeddings = torch.cat([embeddings, augmented], dim=0)
	labels = torch.cat([labels, labels], dim=0)

	loss = loss_func(embeddings, labels)
	loss.backward()
	optimizer.step()

If you're interested in MoCo-style self-supervision, take a look at the MoCo on CIFAR10 notebook. It uses CrossBatchMemory to implement the momentum encoder queue, which means you can use any tuple loss, and any tuple miner to extract hard samples from the queue.
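
The gist of the CrossBatchMemory wrapper, as a hedged sketch (the embedding_size and memory_size values are placeholders):

from pytorch_metric_learning.losses import CrossBatchMemory, NTXentLoss

# wrap a tuple loss so that each batch is compared against a queue of
# embeddings from previous batches
loss_fn = CrossBatchMemory(loss=NTXentLoss(), embedding_size=128, memory_size=4096)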

Highlights of the rest of the library

  • For a convenient way to train your model, take a look at the trainers.
  • Want to test your model's accuracy on a dataset? Try the testers.
  • To compute the accuracy of an embedding space directly, use AccuracyCalculator.
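
For example, here is a minimal sketch of AccuracyCalculator with random placeholder embeddings; the argument order follows the documentation of this version, and the query and reference sets are the same here, hence the final True argument:

import torch
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

embeddings = torch.randn(100, 128)
labels = torch.randint(0, 10, (100,))

calculator = AccuracyCalculator(k=5)
accuracies = calculator.get_accuracy(embeddings, embeddings, labels, labels, True)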

If you're short of time and want a complete train/test workflow, check out the example Google Colab notebooks.

To learn more about all of the above, see the documentation.

Installation

Required PyTorch version

  • pytorch-metric-learning >= v0.9.90 requires torch >= 1.6
  • pytorch-metric-learning < v0.9.90 doesn't have a version requirement, but was tested with torch >= 1.2

Pip

pip install pytorch-metric-learning

To get the latest dev version:

pip install pytorch-metric-learning --pre

To install on Windows:

pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install pytorch-metric-learning

To install with evaluation and logging capabilities (This will install the unofficial pypi version of faiss-gpu):

pip install pytorch-metric-learning[with-hooks]

To install with evaluation and logging capabilities (CPU) (This will install the unofficial pypi version of faiss-cpu):

pip install pytorch-metric-learning[with-hooks-cpu]

Conda

conda install pytorch-metric-learning -c metric-learning -c pytorch

To use the testing module, you'll need faiss, which can be installed via conda as well. See the installation instructions for faiss.

Library contents

Distances

Name Reference Papers
CosineSimilarity
DotProductSimilarity
LpDistance
SNRDistance Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning

Losses

Name Reference Papers
AngularLoss Deep Metric Learning with Angular Loss
ArcFaceLoss ArcFace: Additive Angular Margin Loss for Deep Face Recognition
CircleLoss Circle Loss: A Unified Perspective of Pair Similarity Optimization
ContrastiveLoss Dimensionality Reduction by Learning an Invariant Mapping
CosFaceLoss - CosFace: Large Margin Cosine Loss for Deep Face Recognition
- Additive Margin Softmax for Face Verification
FastAPLoss Deep Metric Learning to Rank
GeneralizedLiftedStructureLoss In Defense of the Triplet Loss for Person Re-Identification
IntraPairVarianceLoss Deep Metric Learning with Tuplet Margin Loss
LargeMarginSoftmaxLoss Large-Margin Softmax Loss for Convolutional Neural Networks
LiftedStructureLoss Deep Metric Learning via Lifted Structured Feature Embedding
MarginLoss Sampling Matters in Deep Embedding Learning
MultiSimilarityLoss Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning
NCALoss Neighbourhood Components Analysis
NormalizedSoftmaxLoss - NormFace: L2 Hypersphere Embedding for Face Verification
- Classification is a Strong Baseline for Deep Metric Learning
NPairsLoss Improved Deep Metric Learning with Multi-class N-pair Loss Objective
NTXentLoss - Representation Learning with Contrastive Predictive Coding
- Momentum Contrast for Unsupervised Visual Representation Learning
- A Simple Framework for Contrastive Learning of Visual Representations
ProxyAnchorLoss Proxy Anchor Loss for Deep Metric Learning
ProxyNCALoss No Fuss Distance Metric Learning using Proxies
SignalToNoiseRatioContrastiveLoss Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning
SoftTripleLoss SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
SphereFaceLoss SphereFace: Deep Hypersphere Embedding for Face Recognition
TripletMarginLoss Distance Metric Learning for Large Margin Nearest Neighbor Classification
TupletMarginLoss Deep Metric Learning with Tuplet Margin Loss

Miners

Name Reference Papers
AngularMiner
BatchEasyHardMiner Improved Embeddings with Easy Positive Triplet Mining
BatchHardMiner In Defense of the Triplet Loss for Person Re-Identification
DistanceWeightedMiner Sampling Matters in Deep Embedding Learning
EmbeddingsAlreadyPackagedAsTriplets
HDCMiner Hard-Aware Deeply Cascaded Embedding
MaximumLossMiner
MultiSimilarityMiner Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning
PairMarginMiner
TripletMarginMiner FaceNet: A Unified Embedding for Face Recognition and Clustering
UniformHistogramMiner

Reducers

Name Reference Papers
AvgNonZeroReducer
ClassWeightedReducer
DivisorReducer
DoNothingReducer
MeanReducer
PerAnchorReducer
ThresholdReducer

Regularizers

Name Reference Papers
CenterInvariantRegularizer Deep Face Recognition with Center Invariant Loss
LpRegularizer
RegularFaceRegularizer RegularFace: Deep Face Recognition via Exclusive Regularization
SparseCentersRegularizer SoftTriple Loss: Deep Metric Learning Without Triplet Sampling
ZeroMeanRegularizer Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning

Samplers

Name Reference Papers
MPerClassSampler
TuplesToWeightsSampler
FixedSetOfTriplets

Trainers

Name Reference Papers
MetricLossOnly
TrainWithClassifier
CascadedEmbeddings Hard-Aware Deeply Cascaded Embedding
DeepAdversarialMetricLearning Deep Adversarial Metric Learning
UnsupervisedEmbeddingsUsingAugmentations
TwoStreamMetricLoss

Testers

Name Reference Papers
GlobalEmbeddingSpaceTester
WithSameParentLabelTester
GlobalTwoStreamEmbeddingSpaceTester

Utils

Name Reference Papers
AccuracyCalculator
HookContainer
InferenceModel
TorchInitWrapper
DistributedLossWrapper
DistributedMinerWrapper
LogitGetter

Base Classes, Mixins, and Wrappers

Name Reference Papers
CrossBatchMemory Cross-Batch Memory for Embedding Learning
GenericPairLoss
MultipleLosses
MultipleReducers
EmbeddingRegularizerMixin
WeightMixin
WeightRegularizerMixin
BaseDistance
BaseMetricLossFunction
BaseMiner
BaseTupleMiner
BaseSubsetBatchMiner
BaseReducer
BaseRegularizer
BaseTrainer
BaseTester

Benchmark results

See powerful-benchmarker to view benchmark results and to use the benchmarking tool.

Development

Unit tests can be run with the default unittest library:

python -m unittest discover

You can specify the test datatypes and test device as environment variables. For example, to test using float32 and float64 on the CPU:

TEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover

To run a single test file instead of the entire test suite, specify the file name:

python -m unittest tests/losses/test_angular_loss.py

Code is formatted using black and isort:

pip install black isort
./format_code.sh

Acknowledgements

Contributors

Thanks to the contributors who made pull requests!

Algorithm implementations + useful features

Example notebooks

General improvements and bug fixes

Facebook AI

Thank you to Ser-Nam Lim at Facebook AI, and my research advisor, Professor Serge Belongie. This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to Ashish Shah and Austin Reiter for reviewing my code during its early stages of development.

Open-source repos

This library contains code that has been adapted and modified from the following great open-source repos:

Logo

Thanks to Jeff Musgrave for designing the logo.

Citing this library

If you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:

@misc{musgrave2020pytorch,
    title={PyTorch Metric Learning},
    author={Kevin Musgrave and Serge Belongie and Ser-Nam Lim},
    year={2020},
    eprint={2008.09164},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
Comments
  • Conda UnsatisfiableError: The following specifications were found to be incompatible with your CUDA driver


    Fix the following conda errors (not sure if they are reproducible errors):

    Windows:

    UnsatisfiableError: The following specifications were found to be incompatible with each other:
    
    Output in format: Requested package -> Available versions
    
    The following specifications were found to be incompatible with your CUDA driver:
    
      - feature:/win-64::__cuda==10.2=0
      - feature:|@/win-64::__cuda==10.2=0
    
    Your installed CUDA driver is: 10.2
    

    Linux:

    UnsatisfiableError: The following specifications were found to be incompatible with each other:
    
    Output in format: Requested package -> Available versions
    
    The following specifications were found to be incompatible with your CUDA driver:
    
      - feature:/linux-64::__cuda==10.1=0
      - feature:|@/linux-64::__cuda==10.1=0
    
    Your installed CUDA driver is: 10.1
    
    help wanted pip/conda 
    opened by KevinMusgrave 32
  • Using a custom collate_fn with testers and logging_presets


    Hello, it is not clear how I can use an RNN architecture with this package. Basically my problem is that I have to pad the sequences in each batch, but I don't have access to the DataLoader. How should I approach this problem?

    documentation 
    opened by levtelyatnikov 29
  • How to use NTXentLoss as in CPC?


    Hello! Thanks for this incredible contribution.

    I want to know how to use the NTXentLoss as in CPC model. I mean, I have a positive sample and N-1 negative samples.

    Thank you for your help in this matter.

    Frequently Asked Questions question 
    opened by vgaraujov 26
  • [Question]: Contrastive learning with multiple modalities


    Hi there,

    Thanks for putting together such a great library. I am interested in multimodal applications where contrastive learning is used to learn a shared embedding space where two embeddings that belong to different modalities (e.g., text, image) are close together. A common loss is the InfoNCE loss where in-batch negatives are used. One possible implementation is the one used in CLIP: https://github.com/mlfoundations/open_clip/blob/main/src/training/train.py#L23

    Any suggestions as to how to use this library to reproduce this kind of loss together with a CrossBatchMemory? I have the impression that the current implementation assumes you have a single embedding matrix. However, I was thinking that by concatenating image and text embeddings, and then using labels to say which ones belong to the same "class", I should be able to achieve the same behaviour. Am I wrong?

    Also, as far as I can see, the CrossBatchMemory reuses the labels from the previous iteration and doesn't compute new ones. For the self-supervised setup where the labels are just indexes, this won't work. Do I have to extend the class myself to take this into account?

    Thanks, Alessandro

    question 
    opened by aleSuglia 23
  • 100G GPU Memory occupation of AngularLoss


    I'm training a resnet18 using AngularLoss on a dataset containing 100000 images with size (256, 512) including 4922 classes, but the error shows: RuntimeError: CUDA out of memory. Tried to allocate 100.11 GiB (GPU 0; 15.90 GiB total capacity; 5.50 GiB already allocated; 9.43 GiB free; 5.65 GiB reserved in total by PyTorch)

    Other loss functions are working well. Here's my main code:

    loss_func = losses.AngularLoss()
    mining_func = miners.AngularMiner()

    ...
            for key in output.keys():
                if key == "embedding":
                    indices_tuple = self.hard_case_miner(output["embedding"], batch["labels"])
                    loss = self.criterion[key](output["embedding"], batch["labels"], indices_tuple)
    ....
            loss.backward()
            self.optimizer.step()

    Please show me what's wrong, thanks.

    possible bug? 
    opened by Mactarvish 23
  • Plotting training and validation loss


    I would like to plot training and validation loss over the training iterations. I'm using hooks.get_loss_history() and working with record-keeper to visualize the loss. It's working, but I'm not able to plot the training and validation loss in the same plot, and I'm not sure which loss I am plotting with hooks.get_loss_history() in the first place. I would be grateful for any advice, thanks!

    Frequently Asked Questions question 
    opened by simonasantamaria 23
  • How to use ArcFaceLoss with trainer?


    I am starting to use ArcFaceLoss, but I don't quite understand how to use it. Following the sample code, I think it should be like this:

    # efnet will output embeddings_num
    efnet = models.create_model("efficient_net", embeddings_num).to(device)
    optimizer = optim.Adam(efnet.parameters(), lr=0.00001, weight_decay=0.001)
    
    # Set the arcface loss function
    loss_func = losses.ArcFaceLoss(num_classes=9, embedding_size=embeddings_num).to(device)
    loss_optimizer = optim.Adam(loss_func.parameters(), lr=0.00001) 
    
    # Set the mining function
    miner = miners.MultiSimilarityMiner(epsilon=0.1)
    # Set the dataloader sampler
    sampler = samplers.MPerClassSampler(train_data.targets, m=4, length_before_new_iter=len(train_data))
    

    But then, I am stuck on defining the models and loss_funcs dictionary.

    # Package the above stuff into dictionaries.
    models = {"trunk": efnet} ???
    optimizers = {"trunk_optimizer": optimizer, "arc_optimizer": loss_optimizer}
    loss_funcs = ??
    mining_funcs = {"tuple_miner": miner}
    

    and the rest the same as your MetricLossOnly code sample?

    from pytorch_metric_learning import losses, miners, samplers, trainers, testers
    # batch_size=32
    trainer = trainers.MetricLossOnly(models,
                                      optimizers,
                                      batch_size,
                                      loss_funcs,
                                      mining_funcs,
                                      train_data,
                                      sampler=sampler,
                                      dataloader_num_workers = 24,
                                      end_of_iteration_hook = hooks.end_of_iteration_hook,
                                      end_of_epoch_hook = end_of_epoch_hook)
    
    question 
    opened by KennyTC 21
  • Error in computing similarity with multiple GPUs


    Hi Kevin. Thank you for providing this wonderful code for metric learning. I am facing a weird issue for which I seek your guidance. When I use a simple contrastive loss to train my network on multiple GPUs, I get the following error.

    File "../pytorch_metric_learning/distances/base_distance.py", line 26, in forward
        query_emb, ref_emb, query_emb_normalized, ref_emb_normalized
      File "../pytorch_metric_learning/distances/base_distance.py", line 74, in set_default_stats
        self.get_norm(query_emb)
    RuntimeError: CUDA error: device-side assert triggered
    

    However, this error does not occur when I train my network on a single GPU. Could you please let me know what might cause this issue and its possible fix? It's not a pytorch version issue, because my torch==1.7.1 and torchvision==0.8.2. I have used your latest code for metric learning.
    Thanks.

    possible bug? 
    opened by shashankvkt 21
  • compatibility with pytorch lightning


    Hi there,

    I have some boilerplate code using metric learning, something along the lines of:

    model = SomeModel()
    loss_func = losses.LargeMarginSoftmaxLoss(...).to(torch.device('cuda'))
    params = list(model.parameters()) + list(loss_func.parameters())
    loss_optimizer = torch.optim.Adam(params, lr=0.01)
    ## then during training:
    loss_optimizer.step()
    

    This seems perfectly fine with vanilla pytorch and metric learning, but when I refactor the code to fit into the PyTorch lightning framework, I get dramatically different results.

    Are there any known issues or should the setup be changed to tailor to lightning?

    The refactored lightning code:

    class ModelLoss(torch.nn.Module):
        def __init__(self, loss_func=None):
            super().__init__()
            self.loss_func = loss_func
    
        def forward(self, pred, target):
            loss = self.loss_func(pred, target)
            return loss
    
    class LMSNet(LightningModule):
        def __init__(self, some_test_loader=test_loader):
                super().__init__()
            ## initialise do_something
            R = regularizers.RegularFaceRegularizer()
            loss_func = losses.LargeMarginSoftmaxLoss(..., weight_regularizer=R)
            self.model_loss = ModelLoss(loss_func=loss_func)
            self.test_loader = some_test_loader
    
        def forward(self, data=None, get_encodings=False):
            x = self.do_something(data)
           ## do something
            return x
    
        def training_step(self, batch, batch_idx):
            data, target, paths= batch
            output = self(data)
            loss = self.model_loss(output, target)
            self.log('loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
            return loss
    
    question 
    opened by annongitter 21
  • How to use ContrastiveLoss


    Hi,

    I have a Siamese NN architecture that holds a Bert Transformer for each of the sibling sub-networks. So I have sentence pairs and I want to encode each of the sentences in order to get their embeddings.

    def forward(set1, set2):
        embeddings1 = BERT(set1)  # here we have batch_size x 768 after avg pooling
        embeddings2 = BERT(set2)  # here we have batch_size x 768 after avg pooling
        return embeddings1, embeddings2

    I am wondering how to pass those two embedding tensors to ContrastiveLoss. Should I concatenate them along dimension 0 in order to have 2 x batch_size x 768? For example, if batch size equals 16, then after concat I will have 32x768. After that I must repeat the tensor of labels, I suppose: Labels.repeat(2).

    I cannot understand how the loss between embeddings is calculated. Does it first calculate the distance for every possible pair in the batch of 32, based on labels?

    Could you please provide me an example?

    Thanks in advance

    question 
    opened by icsd13152 20
  • Metric learning loss for Multi label learning


    Hi

    In reference to the following discussion https://github.com/KevinMusgrave/pytorch-metric-learning/issues/178, may I know if there are any more methods to implement metric learning in multi-label settings?

    'from pytorch_metric_learning.trainers import MetricLossOnly' is not working for me. My dataset is multilabel and imbalanced.

    Thanks

    question 
    opened by priyarana 17
  • Clarification in CircleLoss Documentation


    After cross-referencing the paper, s_n and s_p are irrelevant to the loss function and should be removed from the documentation to avoid confusion (as it did to me for much longer than it should have).

    EDIT: Also it appears the correct formula should be equation (6) in the paper, instead of equation (4).

    opened by ayhyap 0
  • 3D embedding tensor


    Hello, I'm getting the ValueError: embeddings must be a 2D tensor of shape (batch_size, embedding_size) message because, indeed, my embedding is a 3D tensor. However, I've provided the distance function. So, is there a workaround for this shape verification?

    enhancement 
    opened by celsofranssa 6
  • Make VICRegLoss use BaseMetricLossFunction


    With the self-supervised wrapper planned for version 2, the loss function can be used like in the paper by simply using the wrapper. So from a usability point of view, there's no reason to not have it extend BaseMetricLossFunction.

    enhancement 
    opened by KevinMusgrave 1
  • SupCR: Supervised Contrastive Regression


    Thanks to everyone contributing to this repo. I'd like to suggest the addition of the new regression-focused contrastive loss from Dina Katabi's group at MIT. SupCR is focused on regression tasks by making sure embeddings are close when the regression values (labels) are close numerically.


    Full paper here.

    new algorithm request 
    opened by TKassis 2
  • Modified arcface loss to keep cost function monotonically decreasing


    As mentioned in section 2.8 of the old arcface paper under target logit analysis, we can see that whenever the angle between the feature vector and the target center is too large, the cost function's behaviour can change and it won't be monotonically decreasing.

    Specifically, when the angle between the feature vector and target center is more obtuse than (180 - margin), the cos function can start to increase.

    For instance, consider that my margin is 30 degrees.

    If the normed feature vector makes an angle of 130 degrees with the target center, then the logit = cos(130 + 30) = cos(160) = -0.93.

    Now consider that the normed feature vector makes an angle of 170 degrees with the target center; then the logit = cos(170 + 30) = cos(200) = -0.93.

    This doesn't help in our training as we want to penalize highly obtuse angle even more. Hence we make the following change in our loss function as mentioned here to keep this function monotonically decreasing.

    Thanks, Vinayak.

    opened by ElisonSherton 4
Releases (v1.6.3)
  • v1.6.3(Nov 1, 2022)

  • v1.6.2(Sep 20, 2022)

    Additional fix to v1.6.1

    To be consistent with the common definition of mean average precision, the divisor has been changed again:

    • v1.6.1 divisor was min(k, num_relevant)
    • v1.6.2 divisor is num_relevant

    Again, this has no effect on mean_average_precision_at_r.

  • v1.6.1(Sep 20, 2022)

    Bug Fixes

    Fixed a bug in mean_average_precision in AccuracyCalculator. Previously, the divisor for each sample was the number of correctly retrieved samples. In the new version, the divisor for each sample is min(k, num_relevant).

    For example, if class "A" has 11 samples, then num_relevant is 11 for every sample with the label "A".

    • If k = 5, meaning that 5 nearest neighbors are retrieved for each sample, then the divisor will be 5.
    • If k = 100, meaning that 100 nearest neighbors are retrieved for each sample, then the divisor will be 11.

    The bug in previous versions did not affect mean_average_precision_at_r.

    Other minor changes

    Added additional shape checks to AccuracyCalculator.get_accuracy.

  • v1.6.0(Sep 3, 2022)

    Features

    DistributedLossWrapper and DistributedMinerWrapper now support ref_emb and ref_labels:

    from pytorch_metric_learning import losses
    from pytorch_metric_learning.utils import distributed as pml_dist
    
    loss_func = losses.ContrastiveLoss()
    loss_func = pml_dist.DistributedLossWrapper(loss_func)
    
    loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
    

    Thanks @NoTody for PR #503

  • v1.5.2(Aug 3, 2022)

    Bug fixes

    In previous versions, when embeddings_come_from_same_source == True, the first nearest-neighbor of each query embedding was discarded, with the assumption that it must be the query embedding itself.

    While this is usually the case, it's not always the case. It is possible for two different embeddings to be exactly equal to each other, and discarding the first nearest-neighbor in this case can be incorrect.

    This release fixes this bug by excluding each embedding's index from the k-nn results.

    Sort-of breaking changes

    In order for the above bug fix to work, AccuracyCalculator now requires that reference[:len(query)] == query when embeddings_come_from_same_source == True. For example, the following will raise an error:

    query = torch.randn(100, 10)
    ref = torch.randn(100, 10)
    ref = torch.cat([ref, query], dim=0)
    AC.get_accuracy(query, ref, labels1, labels2, True)
    # ValueError
    

    To fix this, move query to the beginning of ref:

    query = torch.randn(100, 10)
    ref = torch.randn(100, 10)
    ref = torch.cat([query, ref], dim=0)
    AC.get_accuracy(query, ref, labels1, labels2, True)
    

    Note that this change doesn't affect the case where query is ref.

  • v1.5.1(Jul 16, 2022)

  • v1.5.0(Jun 29, 2022)

    Features

    For some loss functions, labels are now optional if indices_tuple is provided:

    loss = loss_func(embeddings, indices_tuple=pairs)
    

    The losses for which you can do this are (see the sketch after this list):

    • CircleLoss
    • ContrastiveLoss
    • IntraPairVarianceLoss
    • GeneralizedLiftedStructureLoss
    • LiftedStructureLoss
    • MarginLoss
    • MultiSimilarityLoss
    • NTXentLoss
    • SignalToNoiseRatioContrastiveLoss
    • SupConLoss
    • TripletMarginLoss
    • TupletMarginLoss
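
    For example, here is a hedged sketch with hand-built pairs. In this library, pair tuples have the form (anchor1, positive, anchor2, negative); the index values below are placeholders:

    import torch
    from pytorch_metric_learning.losses import ContrastiveLoss

    embeddings = torch.randn(4, 128)
    # (0, 1) is a positive pair and (0, 2) is a negative pair
    pairs = (torch.tensor([0]), torch.tensor([1]), torch.tensor([0]), torch.tensor([2]))
    loss_func = ContrastiveLoss()
    loss = loss_func(embeddings, indices_tuple=pairs)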

    This issue has come up several times:

    #412 #490 #482 #473 #179 #263

  • v1.4.0(Jun 9, 2022)

  • v1.3.2(May 29, 2022)

    Bug fixes

    • Fixed a bug in BatchEasyHardMiner where get_max_per_row was not always returning correct values, resulting in invalid pairs and triplets. #476
  • v1.3.1(May 27, 2022)

    Bug fixes

    • Fixed ThresholdReducer being incompatible with older versions of PyTorch (#465)
    • Fixed VICRegLoss being incompatible with older versions of PyTorch, and missing a division by 2 (#467 and #470 by @cwkeam)

    Other

    • Made CustomKNN more memory efficient by removing torch.cat call.
  • v1.3.0(Mar 30, 2022)

  • v1.2.1(Mar 17, 2022)

  • v1.2.0(Mar 1, 2022)

  • v1.1.2(Feb 16, 2022)

  • v1.1.1(Feb 12, 2022)

  • v1.1.0(Dec 28, 2021)

    New features

    CentroidTripletLoss

    Implementation of On the Unreasonable Effectiveness of Centroids in Image Retrieval

    VICRegLoss

    Implementation of VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning

    AccuracyCalculator

    • Added mean reciprocal rank as an accuracy metric. Available as "mean_reciprocal_rank".
    • Added return_per_class argument for AccuracyCalculator. This is like avg_of_avgs but returns the accuracy per class, instead of averaging them for you.
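
    As a hedged sketch of how these options can be combined:

    from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

    # request only the new metric, reported per class rather than averaged
    calculator = AccuracyCalculator(include=("mean_reciprocal_rank",), return_per_class=True)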

    Related issues

    #369 #372 #374 #394

    Contributors

    Thanks to @cwkeam and @mlw214!

  • v1.0.0(Nov 28, 2021)

    Reference embeddings for tuple losses

    You can separate the source of anchors and positive/negatives. In the example below, anchors will be selected from embeddings and positives/negatives will be selected from ref_emb.

    loss_fn = TripletMarginLoss()
    loss = loss_fn(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
    

    Efficient mode for DistributedLossWrapper

    • efficient=True: each process uses its own embeddings for anchors, and the gathered embeddings for positives/negatives. Gradients will not be equal to those in non-distributed code, but the benefit is reduced memory and faster training.
    • efficient=False: each process uses gathered embeddings for both anchors and positives/negatives. Gradients will be equal to those in non-distributed code, but at the cost of doing unnecessary operations (i.e. doing computations where both anchors and positives/negatives have no gradient).

    The default is False. You can set it to True like this:

    from pytorch_metric_learning import losses
    from pytorch_metric_learning.utils import distributed as pml_dist
    
    loss_func = losses.ContrastiveLoss()
    loss_func = pml_dist.DistributedLossWrapper(loss_func, efficient=True)
    

    Documentation: https://kevinmusgrave.github.io/pytorch-metric-learning/distributed/

    Customizing k-nearest-neighbors for AccuracyCalculator

    You can use a different type of faiss index:

    import faiss
    from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
    from pytorch_metric_learning.utils.inference import FaissKNN
    
    knn_func = FaissKNN(index_init_fn=faiss.IndexFlatIP, gpus=[0,1,2])
    ac = AccuracyCalculator(knn_func=knn_func)
    

    You can also use a custom distance function:

    from pytorch_metric_learning.distances import SNRDistance
    from pytorch_metric_learning.utils.inference import CustomKNN
    
    knn_func = CustomKNN(SNRDistance())
    ac = AccuracyCalculator(knn_func=knn_func)
    

    Relevant docs:

    Issues resolved

    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/204
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/251
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/256
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/292
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/330
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/337
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/345
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/347
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/349
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/353
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/359
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/361
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/362
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/363
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/368
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/376
    https://github.com/KevinMusgrave/pytorch-metric-learning/issues/380

    Contributors

    Thanks to @yutanakamura-tky and @KinglittleQ for pull requests, and @mensaochun for providing helpful code in #380

  • v0.9.99(May 10, 2021)

    Bug fixes

    • Accuracy Calculation bug in GlobalTwoStreamEmbeddingSpaceTester (#301)
    • Mixed precision bug in convert_to_weights (#300)

    Features

    • HierarchicalSampler
    • Improved functionality for InferenceModel (#296 and #304)
      • train_indexer now accepts a dataset
      • also added functions save_index, load_index, and add_to_indexer
    • Added power argument to LpRegularizer (#299)
    • Raise an exception if labels has more than 1 dimension (#307)
    • Added a global flag for turning on/off collect_stats (#311)
    • TripletMarginLoss smooth variant uses the input margin now (#315)
    • Use package-specific logger, "PML", instead of root logger (#318)
    • Cleaner key verification in the trainers (#102)

    Thanks to @elias-ramzi, @gkouros, @vltanh, and @Hummer12007

  • v0.9.98(Apr 3, 2021)

    AccuracyCalculator breaking change (issue #290)

    The k parameter in AccuracyCalculator has a new behavior. The allowed values are:

    • None. This means k will be set to the total number of reference embeddings.
    • An integer greater than 0. This means k will be set to the input integer.
    • "max_bin_count". This means k will be set to max(bincount(reference_labels)) - self_count where self_count == 1 if the query and reference embeddings come from the same source.

    The old behavior is described here.

    If your dataset is large, you might find the k-nn search is now very slow. This is because the new default behavior is to set k to len(reference_embeddings). To avoid this, you can set k to a number, like k = 1000 or try k = "max_bin_count" to get behavior similar (though not identical) to the old default.
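
    For example, hedged sketches of each allowed value:

    from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

    ac_default = AccuracyCalculator()                  # k = total number of reference embeddings
    ac_fixed = AccuracyCalculator(k=1000)              # cap the k-nn search at 1000 neighbors
    ac_binned = AccuracyCalculator(k="max_bin_count")  # similar (not identical) to the old default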

    Apologies for the drastic change. I'm hoping to have things stable and following semantic versioning when v1.0 arrives.

    Bug fixes

    • lmu.convert_to_triplets has been fixed (#291)
    • Losses and miners should now be compatible with autocast (#293)

    New features / improvements

  • v0.9.97(Mar 4, 2021)

    Bug fixes

    • Small fix for NTXentLoss with no negative pairs #272
    • Fixed .detach() bug in NTXentLoss #282
    • Fixed parameter override bug in MatchFinder.get_matching_pairs() #286 by @joaqo

    New features and improvements

    AccuracyCalculator now uses torch instead of numpy

    • All the calculations (except for NMI and AMI) are done with torch. Calculations will be done on the same device and dtype as the input query tensor.
    • You can still pass numpy arrays into AccuracyCalculator.get_accuracy, but the arrays will be immediately converted to torch tensors.

    Faster custom label comparisons in AccuracyCalculator

    • See #264 by @mlopezantequera

    Numerical stability improvement for DistanceWeightedMiner

    See #278 by @z1w

    UniformHistogramMiner

    This is like DistanceWeightedMiner, except that it works well with high dimension embeddings, and works with any distance metric (not just L2 normalized distance). Documentation
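
    A hedged sketch; the constructor arguments shown are assumptions based on the documentation of this version:

    from pytorch_metric_learning.miners import UniformHistogramMiner
    from pytorch_metric_learning.distances import SNRDistance

    # mine pairs spread uniformly across the distance histogram,
    # here with a non-L2 distance measure
    miner = UniformHistogramMiner(num_bins=100, pos_per_bin=10, neg_per_bin=10,
                                  distance=SNRDistance())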

    PerAnchorReducer

    This converts unreduced pairs to unreduced elements. For example, NTXentLoss returns losses per positive pair. If you used PerAnchorReducer with NTXentLoss, then the losses per pair would first be converted to losses per batch element, before being passed to the inner reducer. See the documentation
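
    A hedged sketch of the pairing described above; wrapping an inner reducer with PerAnchorReducer is an assumption based on the documentation:

    from pytorch_metric_learning.losses import NTXentLoss
    from pytorch_metric_learning.reducers import PerAnchorReducer, AvgNonZeroReducer

    # per-pair losses become per-anchor losses before the inner reducer runs
    loss_func = NTXentLoss(reducer=PerAnchorReducer(AvgNonZeroReducer()))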

    BaseTester no longer converts embeddings from torch to numpy

    This includes the get_all_embeddings function. If you want get_all_embeddings to return numpy arrays, you can set the return_as_numpy flag to True:

    embeddings, labels = tester.get_all_embeddings(dataset, model, return_as_numpy=True)
    

    The embeddings are converted to numpy only for the visualizer and visualizer_hook, if specified.

    Reduced usage of .to(device) and .type(dtype)

    Tensors are initialized on device and with the necessary dtype, and they are moved to device and cast to dtypes only when necessary. See this code snippet for details.

    Simplified DivisorReducer

    Replaced "divisor_summands" with "divisor".

  • v0.9.96(Jan 12, 2021)

    New Features

    Thanks to @mlopezantequera for adding the following features!

    Testers: allow any combination of query and reference sets (#250)

    To evaluate different combinations of query and reference sets, use the splits_to_eval argument for tester.test().

    For example, let's say your dataset_dict has two keys: "dataset_a" and "train".

    • The default splits_to_eval = None is equivalent to:
    splits_to_eval = [('dataset_a', ['dataset_a']), ('train', ['train'])]
    
    • dataset_a as the query, and train as the reference:
    splits_to_eval = [('dataset_a', ['train'])]
    
    • dataset_a as the query, and dataset_a + train as the reference:
    splits_to_eval = [('dataset_a', ['dataset_a', 'train'])]
    

    Then pass splits_to_eval to tester.test:

    tester.test(dataset_dict, epoch, model, splits_to_eval = splits_to_eval)
    

    Note that this new feature makes the old reference_set init argument obsolete, so reference_set has been removed.

    AccuracyCalculator: allow arbitrary label comparison functions (#254)

    AccuracyCalculator now has an optional init argument, label_comparison_fn, which is a function that compares two numpy arrays of labels and returns a boolean array. The default is numpy.equal. If a custom function is used, then you must exclude clustering based metrics ("NMI" and "AMI"). The following is an example of a custom function for two-dimensional labels. It returns True if the 0th column matches, and the 1st column does not match:

    def example_label_comparison_fn(x, y):
        return (x[:, 0] == y[:, 0]) & (x[:, 1] != y[:, 1])

    AccuracyCalculator(exclude=("NMI", "AMI"),
                       label_comparison_fn=example_label_comparison_fn)
    

    Other Changes

    • BaseTrainer and BaseTester now take in an optional dtype argument. This is the type that the dataset output will be converted to, e.g. torch.float16. If set to the default value of None, then no type casting will be done.
    • Removed self.dim_reduced_embeddings from BaseTester and the associated code in HookContainer, due to lack of use.
    • tester.test() now returns all_accuracies, whereas before, it returned nothing and you'd have to access all_accuracies either through the end_of_testing_hook or by accessing tester.all_accuracies.
    • tester.embeddings_and_labels is deleted at the end of tester.test() to free up memory.
  • v0.9.95(Dec 11, 2020)

    New

    BatchEasyHardMiner

    This new miner is an implementation of Improved Embeddings with Easy Positive Triplet Mining. See the documentation. Thanks @marijnl!

    New metric added to AccuracyCalculator

    The new metric is mean_average_precision, which is the commonly used k-nn based mAP in information retrieval. Note that this differs from the already existing metric, mean_average_precision_at_r.
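
    A hedged sketch of requesting the new metric by name:

    from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

    calculator = AccuracyCalculator(include=("mean_average_precision",))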

    Bug fixes

    • dtype casting in MultiSimilarityMiner changed to work with autocast. See #233 by @thinline72
    • Added logic for dealing with zero rows in the weight matrix in DistanceWeightedMiner by ignoring them. For example, if the entire weight matrix is 0, then no triplets will be returned. Previously, the zero rows would cause a RuntimeError. See #230 by @tpanum
  • v0.9.94(Nov 6, 2020)

    Various bug fixes and improvements

    • A list or dictionary of miners can be passed into MultipleLosses. #212
    • Fixed bug where MultipleLosses failed in list mode. #213
    • Fixed bug where IntraPairVarianceLoss and MarginLoss were overriding sub_loss_names instead of _sub_loss_names. This likely caused embedding regularizers to have no effect for these two losses. #215
    • ModuleWithRecordsAndReducer now creates copies of the input reducer when necessary. #216
    • Moved cos.clone() inside torch.no_grad() in RegularFaceRegularizer. Should be more efficient? #219
    • In utils.inference, moved faiss import inside of FaissIndexer since that is the only class that requires it. #222
    • Added a copy_weights init argument to LogitGetter, to make copying optional #223
  • v0.9.93(Oct 6, 2020)

    Small update

    • Optimized get_random_triplet_indices, so if you were using DistanceWeightedMiner, or if you ever set the triplets_per_anchor argument to something other than "all" anywhere in your code, it should run a lot faster now. Thanks @AlexSchuy
  • v0.9.92(Sep 14, 2020)

    New Features

    DistributedLossWrapper and DistributedMinerWrapper

    Added DistributedLossWrapper and DistributedMinerWrapper. Wrap a loss or miner with these when using PyTorch's DistributedDataParallel (i.e. multiprocessing). Most of the code is by @JohnGiorgi (https://github.com/JohnGiorgi/DeCLUTR).

    from pytorch_metric_learning import losses, miners
    from pytorch_metric_learning.utils import distributed as pml_dist
    loss_func = pml_dist.DistributedLossWrapper(loss = losses.ContrastiveLoss())
    miner = pml_dist.DistributedMinerWrapper(miner = miners.MultiSimilarityMiner())
    

    For a working example, see the "Multiprocessing with DistributedDataParallel" notebook.

    Added enqueue_idx to CrossBatchMemory

    Now you can make CrossBatchMemory work with MoCo. This adds a great deal of flexibility to the MoCo framework, because you can use any tuple loss and tuple miner in CrossBatchMemory.

    Previously this wasn't possible because all embeddings passed into CrossBatchMemory would go into the memory queue. In contrast, MoCo only queues the momentum encoder's embeddings.

    The new enqueue_idx argument lets you do this, by specifying which embeddings should be added to memory. Here's a modified snippet from the MoCo on CIFAR10 notebook:

    from pytorch_metric_learning.losses import CrossBatchMemory, NTXentLoss
    
    loss_fn = CrossBatchMemory(loss = NTXentLoss(), embedding_size = 64, memory_size = 16384)
    
    ### snippet from the training loop ###
    for images, _ in train_loader:
      ...
      previous_max_label = torch.max(loss_fn.label_memory)
      num_pos_pairs = encQ_out.size(0)
      labels = torch.arange(0, num_pos_pairs)
      labels = torch.cat((labels, labels)).to(device)
    
      ### add an offset so that the labels do not overlap with any labels in the memory queue ###
      labels += previous_max_label + 1
    
      ### we want to enqueue the output of encK, which is the 2nd half of the batch ###
      enqueue_idx = torch.arange(num_pos_pairs, num_pos_pairs*2)
    
      all_enc = torch.cat([encQ_out, encK_out], dim=0)
    
      ### now only encK_out will be added to the memory queue ###
      loss = loss_fn(all_enc, labels, enqueue_idx = enqueue_idx)
      ...
    

    Check out the MoCo on CIFAR10 notebook to see the entire script.

    TuplesToWeightsSampler

    This is a simple offline miner. It does the following:

    1. Take a random subset of your dataset, if you provide subset_size
    2. Use a specified miner to mine tuples from the subset dataset.
    3. Compute weights based on how often an element appears in the mined tuples.
    4. Randomly sample, using the weights as probabilities.

    from pytorch_metric_learning.samplers import TuplesToWeightsSampler
    from pytorch_metric_learning.miners import MultiSimilarityMiner
    
    miner = MultiSimilarityMiner(epsilon=-0.2)
    sampler = TuplesToWeightsSampler(model, miner, dataset, subset_size = 5000)
    # then pass the sampler into your Dataloader
    

    LogitGetter

    Added utils.inference.LogitGetter to make it easier to compute logits of classifier loss functions.

    from pytorch_metric_learning.losses import ArcFaceLoss
    from pytorch_metric_learning.utils.inference import LogitGetter
    
    loss_fn = ArcFaceLoss(num_classes = 100, embedding_size = 512)
    LG = LogitGetter(loss_fn)
    logits = LG(embeddings)
    

    Other

    • Added optional batch_size argument to MPerClassSampler. If you pass in this argument, then each batch is guaranteed to have m samples per class. Otherwise, most batches will have m samples per class, but it's not guaranteed for every batch. Note that there are restrictions on the values of m and batch_size. For example, batch_size must be a multiple of m. For all the restrictions, see the documentation. A hedged sketch follows this list.

    • Added trainable_attributes to BaseTrainer and standardized the set_to_train and set_to_eval functions.

    • Added save_models init argument to HookContainer. If set to False then models will not be saved.

    • Added losses_sizes as a stat for BaseReducer

    • Added a type check and conversion in common_functions.labels_to_indices to go from torch tensor to numpy
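
    Returning to the MPerClassSampler batch_size argument, a hedged sketch; train_labels below is a stand-in for your dataset's labels:

    from pytorch_metric_learning.samplers import MPerClassSampler

    train_labels = [i for i in range(8) for _ in range(10)]  # 8 placeholder classes, 10 samples each

    # guarantee m=4 samples per class in every batch of 32
    # (batch_size must be a multiple of m)
    sampler = MPerClassSampler(train_labels, m=4, batch_size=32,
                               length_before_new_iter=100000)
    # then pass the sampler into your DataLoader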

  • v0.9.91(Aug 31, 2020)

    Bug Fixes and Improvements

    • Fixed CircleLoss bug, by improving the logsumexp keep_mask implementation. See https://github.com/KevinMusgrave/pytorch-metric-learning/issues/173
    • Fixed convert_to_weights bug, which caused a runtime error when an empty indices_tuple was passed in. See https://github.com/KevinMusgrave/pytorch-metric-learning/issues/174
    • ProxyAnchorLoss now adds miner weights to the exponents which are fed to logsumexp. This is equivalent to scaling each loss component by e^(miner_weight). The previous behavior was to scale each loss component by just miner_weight.

    Other updates

  • v0.9.90(Aug 8, 2020)

    ********** Summary **********

    The main update is the new distances module, which adds an extra level of modularity to loss functions. It is a pretty big design change, which is why so many arguments have become obsolete. See the documentation for a description of the new module.

    Other updates include support for half-precision, new regularizers and mixins, improved documentation, and default values for most initialization parameters.

    ********** Breaking Changes **********

    Dependencies

    This library now requires PyTorch >= 1.6.0. Previously there was no explicit version requirement.

    Losses and Miners

    All loss functions

    normalize_embeddings has been removed

    • If you never used this argument, nothing needs to be done.
    • normalize_embeddings = True: just remove the argument.
    • normalize_embeddings = False: remove the argument and instead pass it into a distance object. For example:
    from pytorch_metric_learning.distances import LpDistance
    loss_func = TripletMarginLoss(distance=LpDistance(normalize_embeddings=False))
    

    ContrastiveLoss, GenericPairLoss, BatchHardMiner, HDCMiner, PairMarginMiner

    use_similarity has been removed

    • If you never used this argument, nothing needs to be done.
    • use_similarity = True: remove the argument and:
    ### if you had set normalize_embeddings = False ###
    from pytorch_metric_learning.distances import DotProductSimilarity
    loss_func = ContrastiveLoss(distance=DotProductSimilarity(normalize_embeddings=False))
    
    #### otherwise ###
    from pytorch_metric_learning.distances import CosineSimilarity
    loss_func = ContrastiveLoss(distance=CosineSimilarity())
    

    squared_distances has been removed

    • If you never used this argument, nothing needs to be done.
    • squared_distances = True: remove the argument and instead pass power=2 into a distance object. For example:
    from pytorch_metric_learning.distances import LpDistance
    loss_func = ContrastiveLoss(distance=LpDistance(power=2))
    
    • squared_distances = False: just remove the argument.

    ContrastiveLoss, TripletMarginLoss

    power has been removed

    • If you never used this argument, nothing needs to be done.
    • power = 1: just remove the argument
    • power = X, where X != 1: remove the argument and instead pass it into a distance object. For example:
    from pytorch_metric_learning.distances import LpDistance
    loss_func = TripletMarginLoss(distance=LpDistance(power=2))
    

    TripletMarginLoss

    distance_norm has been removed

    • If you never used this argument, nothing needs to be done.
    • distance_norm = 2: just remove the argument
    • distance_norm = X, where X != 2: remove the argument and instead pass it as p into a distance object. For example:
    from pytorch_metric_learning.distances import LpDistance
    loss_func = TripletMarginLoss(distance=LpDistance(p=1))
    

    NPairsLoss

    l2_reg_weight has been removed

    • If you never used this argument, nothing needs to be done.
    • l2_reg_weight = 0: just remove the argument
    • l2_reg_weight = X, where X > 0: remove the argument and instead pass in an LpRegularizer and weight:
    from pytorch_metric_learning.regularizers import LpRegularizer
    loss_func = NPairsLoss(embedding_regularizer=LpRegularizer(), embedding_reg_weight=0.123)
    

    SignalToNoiseRatioContrastiveLoss

    regularizer_weight has been removed

    • If you never used this argument, nothing needs to be done.
    • regularizer_weight = 0: just remove the argument
    • regularizer_weight = X, where X > 0: remove the argument and instead pass in a ZeroMeanRegularizer and weight:
    from pytorch_metric_learning.regularizers import ZeroMeanRegularizer
    loss_func = SignalToNoiseRatioContrastiveLoss(embedding_regularizer=ZeroMeanRegularizer(), embedding_reg_weight=0.123)
    

    SoftTripleLoss

    reg_weight has been removed

    • If you never used this argument, do the following to obtain the same default behavior:
    from pytorch_metric_learning.regularizers import SparseCentersRegularizer
    weight_regularizer = SparseCentersRegularizer(num_classes, centers_per_class)
    SoftTripleLoss(..., weight_regularizer=weight_regularizer, weight_reg_weight=0.2)
    
    • reg_weight = X: remove the argument, and use the SparseCentersRegularizer as shown above.

    WeightRegularizerMixin and all classification loss functions

    • If you never specified regularizer or reg_weight, nothing needs to be done.
    • regularizer = X: replace with weight_regularizer = X
    • reg_weight = X: replace with weight_reg_weight = X
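
    For example, a hedged sketch of the renamed arguments, using ArcFaceLoss (the hyperparameter values are placeholders):

    from pytorch_metric_learning.losses import ArcFaceLoss
    from pytorch_metric_learning.regularizers import RegularFaceRegularizer

    # old: ArcFaceLoss(..., regularizer=..., reg_weight=...)
    loss_func = ArcFaceLoss(num_classes=100, embedding_size=128,
                            weight_regularizer=RegularFaceRegularizer(),
                            weight_reg_weight=0.1)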

    Classification losses

    • For all losses and miners, default values have been set for as many arguments as possible. This has caused a change in ordering in positional arguments for several of the classification losses. The typical form is now:
    loss_func = SomeClassificationLoss(num_classes, embedding_size, <keyword arguments>)
    

    See the documentation for specifics

    Reducers

    ThresholdReducer

    threshold has been replaced by low and high

    • Replace threshold = X with low = X
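
    For example, a hedged sketch of the migration:

    from pytorch_metric_learning.reducers import ThresholdReducer

    # old: ThresholdReducer(threshold=0.1)
    reducer = ThresholdReducer(low=0.1)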

    Regularizers

    All regularizers

    normalize_weights has been removed

    • If you never used this argument, nothing needs to be done.
    • normalize_weights = True: just remove the argument.
    • normalize_weights = False: remove the argument and instead pass normalize_embeddings = False into a distance object. For example:
    from pytorch_metric_learning.distances import DotProductSimilarity
    loss_func = RegularFaceRegularizer(distance=DotProductSimilarity(normalize_embeddings=False))
    

    Inference

    MatchFinder

    mode has been removed

    • Replace mode="sim" with either distance=CosineSimilarity() or distance=DotProductSimilarity()
    • Replace mode="dist" with distance=LpDistance()
    • Replace mode="squared_dist" with distance=LpDistance(power=2)

    ********** New Features **********

    Distances

    Distances bring an additional level of modularity to building loss functions. Here's an example of how they work.

    Consider the TripletMarginLoss in its default form:

    from pytorch_metric_learning.losses import TripletMarginLoss
    loss_func = TripletMarginLoss(margin=0.2)
    

    This loss function attempts to minimize [d_ap - d_an + margin]+.

    In other words, it tries to make the anchor-positive distances (d_ap) smaller than the anchor-negative distances (d_an).

    Typically, d_ap and d_an represent Euclidean or L2 distances. But what if we want to use a squared L2 distance, or an unnormalized L1 distance, or a completely different distance measure like signal-to-noise ratio? With the distances module, you can try out these ideas easily:

    ### TripletMarginLoss with squared L2 distance ###
    from pytorch_metric_learning.distances import LpDistance
    loss_func = TripletMarginLoss(margin=0.2, distance=LpDistance(power=2))
    
    ### TripletMarginLoss with unnormalized L1 distance ###
    loss_func = TripletMarginLoss(margin=0.2, distance=LpDistance(normalize_embeddings=False, p=1))
    
    ### TripletMarginLoss with signal-to-noise ratio ###
    from pytorch_metric_learning.distances import SNRDistance
    loss_func = TripletMarginLoss(margin=0.2, distance=SNRDistance())
    

    You can also use similarity measures rather than distances, and the loss function will make the necessary adjustments:

    ### TripletMarginLoss with cosine similarity ###
    from pytorch_metric_learning.distances import CosineSimilarity
    loss_func = TripletMarginLoss(margin=0.2, distance=CosineSimilarity())
    

    With a similarity measure, the TripletMarginLoss internally swaps the anchor-positive and anchor-negative terms: [s_an - s_ap + margin]+. In other words, it will try to make the anchor-negative similarities smaller than the anchor-positive similarities.

    All losses, miners, and regularizers accept a distance argument. So you can try out the MultiSimilarityMiner using SNRDistance, or the NTXentLoss using LpDistance(p=1), and so on. Note that some losses/miners/regularizers have restrictions on the type of distances they can accept. For example, some classification losses only allow CosineSimilarity or DotProductSimilarity as the distance measure between embeddings and weights. To view restrictions for specific loss functions, see the documentation.

    There are four distances implemented (LpDistance, SNRDistance, CosineSimilarity, DotProductSimilarity), but of course you can extend the BaseDistance class and write a custom distance measure if you want. See the documentation for more.
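    As a rough illustration of what extending BaseDistance looks like, here is a hand-written Manhattan distance (a sketch following the compute_mat/pairwise_distance pattern of the built-in distances; LpDistance(p=1) already covers this case, and the documentation describes the exact requirements):

    import torch
    from pytorch_metric_learning.distances import BaseDistance

    class ManhattanDistance(BaseDistance):
        def compute_mat(self, query_emb, ref_emb):
            # (num_query, num_ref) matrix of pairwise L1 distances
            return torch.cdist(query_emb, ref_emb, p=1)

        def pairwise_distance(self, query_emb, ref_emb):
            # L1 distances between row-aligned pairs
            return torch.sum(torch.abs(query_emb - ref_emb), dim=1)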

    EmbeddingRegularizerMixin

    All loss functions now extend EmbeddingRegularizerMixin, so you can optionally pass an embedding regularizer and its weight into any loss function. The embedding regularizer will compute a loss based on the embeddings alone, ignoring labels and tuples. For example:

    from pytorch_metric_learning.losses import MultiSimilarityLoss
    from pytorch_metric_learning.regularizers import LpRegularizer
    loss_func = MultiSimilarityLoss(embedding_regularizer=LpRegularizer(), embedding_reg_weight=0.123)
    

    WeightRegularizerMixin is now a subclass of WeightMixin

    As in previous versions, classification losses extend WeightRegularizerMixin, which means you can optionally pass in a weight matrix regularizer. Now that WeightRegularizerMixin extends WeightMixin, you can also specify the weight initialization function in object form:

    from pytorch_metric_learning.utils import common_functions as c_f
    import torch
    
    # use kaiming_uniform_, with a=1 and mode='fan_out'
    weight_init_func = c_f.TorchInitWrapper(torch.nn.init.kaiming_uniform_, a=1, mode='fan_out')
    loss_func = SomeClassificationLoss(..., weight_init_func=weight_init_func)
    

    New Regularizers

    For increased modularity, the regularizers hard-coded in several loss functions were separated into their own classes. The new regularizers are:

    • LpRegularizer
    • SparseCentersRegularizer
    • ZeroMeanRegularizer

    Support for half-precision

    In previous versions, various functions would break in half-precision (float16) mode. Now all distances, losses, miners, regularizers, and reducers work with half-precision, float32, and double (float64).
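    For instance, something like the following should now run without dtype errors (a minimal sketch; it assumes a CUDA device, since float16 support on CPU is limited):

    import torch
    from pytorch_metric_learning.losses import TripletMarginLoss

    loss_func = TripletMarginLoss()
    embeddings = torch.randn(32, 128).half().cuda()  # float16 embeddings
    labels = torch.randint(0, 10, (32,)).cuda()
    loss = loss_func(embeddings, labels)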

    New collect_stats argument

    All distances, losses, miners, regularizers, and reducers now have a collect_stats argument, which is True by default. This means that various statistics are collected in each forward pass, and these statistics can be useful to look at during experiments. However, if you don't care about collecting stats, you can set collect_stats=False, and the stat computations will be skipped.
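    For example:

    from pytorch_metric_learning.losses import ContrastiveLoss
    loss_func = ContrastiveLoss(collect_stats=False)  # skip the per-forward-pass statistics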

    Other updates

    • You no longer have to explicitly call .to(device) on classification losses, because their weight matrices will be moved to the correct device during the forward pass if necessary. See issue https://github.com/KevinMusgrave/pytorch-metric-learning/issues/139

    • Reasonable default values have been set for all losses and miners, to make these classes easier to try out. In addition, equations have been added to many of the class descriptions in the documentation. See issue https://github.com/KevinMusgrave/pytorch-metric-learning/issues/140

    • Calls to torch.nonzero have been replaced by torch.where.

    • The documentation for ArcFaceLoss and CosFaceLoss has been fixed to reflect the actual usage. (The documentation previously indicated that some arguments were positional, when they are actually keyword arguments.)

    • The tensorboard_folder argument for utils.logging_presets.get_record_keeper is now optional. If you don't specify it, then there will be no tensorboard logs, which can be useful if speed is a concern.

    • The loss dictionary in BaseTrainer is now cleared at the end of each epoch, to free up GPU memory. See issue https://github.com/KevinMusgrave/pytorch-metric-learning/issues/171

  • v0.9.89 (Jul 25, 2020)

    CrossBatchMemory

    • Fixed bug where CrossBatchMemory would use self-comparisons as positive pairs. This was uniquely a CrossBatchMemory problem because of the nature of adding each current batch to the queue.
    • Fixed bug where DistanceWeightedMiner would not work with CrossBatchMemory due to missing ref_label
    • Changed 3rd keyword argument of forward() from input_indices_tuple to indices_tuple to be consistent with all other losses.
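    A minimal sketch of the renamed keyword (embedding_size and memory_size values are illustrative, and embeddings, labels, and pairs come from your training loop):

    from pytorch_metric_learning.losses import ContrastiveLoss, CrossBatchMemory

    loss_func = CrossBatchMemory(loss=ContrastiveLoss(), embedding_size=128, memory_size=1024)
    # old: loss = loss_func(embeddings, labels, input_indices_tuple=pairs)
    loss = loss_func(embeddings, labels, indices_tuple=pairs)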

    AccuracyCalculator

    • Fixed bug in AccuracyCalculator where it would return NaN if the reference set contained none of the query set's labels. Now it logs a warning and returns 0.

    BaseTester

    • Fixed bug where the "compared_to_training_set" mode of BaseTester would fail due to a list(None) error.

    InferenceModel

    • The new get_nearest_neighbors function returns the nearest neighbors of a query. By @btseytlin

    Loss and miner utils

    • Switched to fill_diagonal_ in the get_all_pairs_indices and get_all_triplets_indices code, instead of creating torch.eye.
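    The change amounts to this pattern (a generic illustration, not the library's actual code):

    import torch

    n = 4
    mask = torch.ones(n, n)
    # old approach: mask = mask - torch.eye(n), which allocates an extra identity matrix
    mask.fill_diagonal_(0)  # in-place, with no extra allocation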
  • v0.9.88 (Jun 20, 2020)

  • v0.9.87 (Jun 20, 2020)

    v0.9.87 comes with some major changes that may cause your existing code to break.

    BREAKING CHANGES

    Losses

    • The avg_non_zero_only init argument has been removed from ContrastiveLoss, TripletMarginLoss, and SignalToNoiseRatioContrastiveLoss. Here's how to translate from old to new code:
      • avg_non_zero_only=True: Just remove this input parameter. Nothing else needs to be done as this is the default behavior.
      • avg_non_zero_only=False: Remove this input parameter and replace it with reducer=reducers.MeanReducer(). You'll need to add this to your imports: from pytorch_metric_learning import reducers
    • learnable_param_names and num_class_per_param have been removed from BaseMetricLossFunction due to lack of use.
      • MarginLoss is the only built-in loss function that is affected by this. Here's how to translate from old to new code:
        • learnable_param_names=["beta"]: Remove this input parameter and instead pass in learn_beta=True.
        • num_class_per_param=N: Remove this input parameter and instead pass in num_classes=N.
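    For example (the num_classes value is illustrative):

    from pytorch_metric_learning.losses import MarginLoss
    # old: MarginLoss(learnable_param_names=["beta"], num_class_per_param=100)
    loss_func = MarginLoss(learn_beta=True, num_classes=100)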

    AccuracyCalculator

    • The average_per_class init argument is now avg_of_avgs. The new name better reflects the functionality.
    • The old way to import was: from pytorch_metric_learning.utils import AccuracyCalculator. This will no longer work. The new way is: from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator. The reason for this change is to avoid an unnecessary import of the Faiss library, especially when this library is used in other packages.

    New feature: Reducers

    Reducers specify how to go from many loss values to a single loss value. For example, the ContrastiveLoss computes a loss for every positive and negative pair in a batch. A reducer will take all these per-pair losses, and reduce them to a single value. Here's where reducers fit in this library's flow of filters and computations:

    Your Data --> Sampler --> Miner --> Loss --> Reducer --> Final loss value

    Reducers are passed into loss functions like this:

    from pytorch_metric_learning import losses, reducers
    reducer = reducers.SomeReducer()
    loss_func = losses.SomeLoss(reducer=reducer)
    loss = loss_func(embeddings, labels) # in your training for-loop
    

    Internally, the loss function creates a dictionary that contains the losses and other information. The reducer takes this dictionary, performs the reduction, and returns a single value on which .backward() can be called. Most reducers are written such that they can be passed into any loss function.

    See the documentation for details.
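    As a rough sketch of what a custom reducer can look like (the method names follow the pattern of built-in reducers such as MeanReducer; check the documentation for the exact interface):

    import torch
    from pytorch_metric_learning.reducers import BaseReducer

    class MedianReducer(BaseReducer):
        # reduce per-element losses with a median instead of a mean
        def element_reduction(self, losses, loss_indices, embeddings, labels):
            return torch.median(losses)

        # reuse the same reduction for pair- and triplet-based losses
        def pos_pair_reduction(self, losses, *args):
            return self.element_reduction(losses, *args)

        def neg_pair_reduction(self, losses, *args):
            return self.element_reduction(losses, *args)

        def triplet_reduction(self, losses, *args):
            return self.element_reduction(losses, *args)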

    Other updates

    Utils

    Inference

    • InferenceModel has been added to the library. It is a model wrapper that makes it convenient to find matching pairs within a batch, or from a set of pairs. Take a look at this notebook to see example usage.

    AccuracyCalculator

    • The k value for k-nearest neighbors can optionally be specified as an init argument (see the sketch below).
    • k-nn based metrics now receive knn distances in their kwargs. See #118 by @marijnl
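    For example (the k value is arbitrary):

    from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
    calculator = AccuracyCalculator(k=10)  # use 10 nearest neighbors for the k-nn based metrics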

    Other stuff

    Unit tests were added for almost all losses, miners, regularizers, and reducers.

    Bug fixes

    Trainers

    • Fixed a labels related bug in TwoStreamMetricLoss. See #112 by @marijnl

    Loss and miner utils

    • Fixed bug where convert_to_triplets could encounter a RuntimeError. See #95