Implementation of Supervised Contrastive Learning with AMP, EMA, SWA, and many other tricks

Overview

SupCon-Framework

This repo is an implementation of Supervised Contrastive Learning. It's based on another implementation, but with several differences:

  • Fixed bugs (incorrect ResNet implementations, which led to a very small maximum batch size),
  • Offers a lot of additional functionality (first of all, rich validation).

To be more precise, in this implementation you will find:

  • Augmentations with albumentations
  • Hyperparameters are moved to .yml configs
  • t-SNE visualizations
  • 2-step validation (for features before and after the projection head) using metrics like AMI, NMI, mAP, precision_at_1, etc., with PyTorch Metric Learning.
  • Exponential Moving Average (EMA) for more stable training, and Stochastic Weight Averaging (SWA) for better generalization and overall performance.
  • Automatic Mixed Precision (native torch version) training, which allows training with a bigger batch size (roughly by a factor of 2); see the sketch after this list.
  • LabelSmoothing loss and LRFinder for the second stage of training (the FC head).
  • TensorBoard logs, checkpoints
  • Support for timm models and pytorch-optimizer
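
As a rough illustration of the AMP piece, here is a minimal sketch of a mixed-precision training step with torch.cuda.amp. The model, loss, and optimizer are illustrative stand-ins, not the repo's actual code (the real loop lives in train.py):

    import torch
    from torch.cuda.amp import GradScaler, autocast

    model = torch.nn.Linear(512, 128).cuda()    # stand-in for the encoder
    criterion = torch.nn.CrossEntropyLoss()     # stand-in for the actual loss
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = GradScaler()

    def train_step(images, labels):
        optimizer.zero_grad()
        with autocast():                        # forward pass in mixed precision
            loss = criterion(model(images), labels)
        scaler.scale(loss).backward()           # scale loss to avoid fp16 underflow
        scaler.step(optimizer)                  # unscale gradients, then step
        scaler.update()                         # adjust the scale factor
        return loss.item()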

Install

  1. Clone the repo:
git clone https://github.com/ivanpanshin/SupCon-Framework && cd SupCon-Framework/
  2. Create a clean virtual environment:
python3 -m venv venv
source venv/bin/activate
  3. Install dependencies:
python -m pip install --upgrade pip
pip install -r requirements.txt

Training

To run CIFAR10 training, execute:

python train.py --config_name configs/train/train_supcon_resnet18_cifar10_stage1.yml
python swa.py --config_name configs/train/swa_supcon_resnet18_cifar10_stage1.yml
python train.py --config_name configs/train/train_supcon_resnet18_cifar10_stage2.yml
python swa.py --config_name configs/train/swa_supcon_resnet18_cifar10_stage2.yml

To run LRFinder on the second stage of the training:

python learning_rate_finder.py --config_name configs/train/lr_finder_supcon_resnet18_cifar10_stage2.yml

The process of training CIFAR100 is exactly the same; just change the config names from cifar10 to cifar100.

After that, you can check the results of the training in the logs or runs directories. For example, to check the TensorBoard logs for the first stage of CIFAR10 training, run:

tensorboard --logdir runs/supcon_first_stage_cifar10

Visualizations

This repo is supplied with t-SNE visualizations so that you can inspect the embeddings you get after training. Check t-SNE.ipynb for details.
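
If you want a starting point outside the notebook, here is a minimal sketch using scikit-learn's t-SNE (the random arrays are placeholders for real embeddings and labels):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    embeddings = np.random.rand(500, 128)        # placeholder for real embeddings
    labels = np.random.randint(0, 10, size=500)  # placeholder for real labels

    # project the high-dimensional embeddings to 2D and color by class
    points = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
    plt.scatter(points[:, 0], points[:, 1], c=labels, cmap='tab10', s=5)
    plt.title('t-SNE of embeddings')
    plt.show()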

These are the t-SNE visualizations for CIFAR10: validation and train with SupCon (top), and validation and train with CE (bottom).

These are the t-SNE visualizations for CIFAR100: validation and train with SupCon (top), and validation and train with CE (bottom).

Results

Model      Stage    Dataset    Accuracy
ResNet18   First    CIFAR10    95.9
ResNet18   Second   CIFAR10    94.9
ResNet18   First    CIFAR100   79.0
ResNet18   Second   CIFAR100   77.9

Note that even though the second-stage accuracy is lower here, that is not always the case. In my experience, the difference between stages is usually around 1 percent, and it sometimes favors the second stage.

Training time for the whole pipeline (without any early stopping) on CIFAR10 or CIFAR100 is around 4 hours (single 2080Ti with AMP). However, with reasonable early stopping that value goes down to around 2.5-3 hours.

Custom datasets

It's fairly easy to adapt this pipeline to custom datasets. First, check tools/datasets.py. Second, add a new class for your dataset. The only guideline here is to follow the same augmentation logic, that is:

        if self.second_stage:
            # albumentations-style call: keyword argument, dict output
            image = self.transform(image=image)['image']
        else:
            # first-stage transform is called directly on the image
            image = self.transform(image)

Third, add your dataset to the DATASETS dict (still inside tools/datasets.py), and you're good to go.
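
For illustration, a hypothetical dataset class following that convention might look like this (the class name and constructor arguments are assumptions, not code from the repo):

    import cv2
    from torch.utils.data import Dataset

    class MyDataset(Dataset):  # hypothetical name, not part of the repo
        def __init__(self, image_paths, labels, transform, second_stage):
            self.image_paths = image_paths
            self.labels = labels
            self.transform = transform
            self.second_stage = second_stage

        def __len__(self):
            return len(self.image_paths)

        def __getitem__(self, idx):
            image = cv2.imread(self.image_paths[idx])
            image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            if self.second_stage:
                # albumentations-style call: keyword argument, dict output
                image = self.transform(image=image)['image']
            else:
                # first-stage transform is called directly on the image
                image = self.transform(image)
            return image, self.labels[idx]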

FAQ

  • Q: Which hyperparameters should I try changing?

    A: First of all, the learning rate. Second, try changing the augmentation policy. SupCon is built around a "cropping + color jittering" scheme, so you can try changing the crop size or the intensity of the jittering. Check tools.utils.build_transforms for that.

  • Q: What backbone and batch size should I use?

    A: This is quite simple. Take the biggest backbone you can, and then take the largest batch size your GPU can handle. The reason: SupCon benefits more from stronger backbones than regular classification training with CE/LabelSmoothing/etc. does. Moreover, it performs explicit hard positive and negative mining, so the larger the batch size, the more difficult and helpful the samples you supply to your model.

  • Q: Do I need the second stage of the training?

    A: Not necessarily. You can do classification based only on embeddings. To do that, compute embeddings for the train set; at inference time, take a sample, compute its embedding, find the closest training embedding, and take its class. To make this fast and efficient, use something like faiss for similarity search (see the sketch after this FAQ). Note that this is actually how validation is done in this repo. Moreover, during training you will see the metric precision_at_1, which is just accuracy based solely on embeddings.

  • Q: Should I use AMP?

    A: If your GPU has tensor cores (like a 2080Ti), yes. If it doesn't (like a 1080Ti), check the speed with AMP and without. If the speed drops only slightly (or even increases a bit), use it, since SupCon works better with bigger batch sizes.

  • Q: How should I use EMA?

    A: You only need to choose the ema_decay_per_epoch parameter in the config (see the per-step conversion sketch after this FAQ). The heuristic is fairly simple: if your dataset is big, something as small as 0.3 will do just fine, and as your dataset gets smaller, you can increase ema_decay_per_epoch. Thanks to bonlime for this idea. I advise you to check out his great pytorch-tools repo; it's a hidden gem.

  • Q: Is it better than training with Cross Entropy/Label Smoothing/etc?

    A: Unfortunately, in my experience, it's much easier to get good results with something like CE. It's more stable, faster to train, and simply produces better or equal results. For instance, on CIFAR10/100 it's trivial to train a ResNet18 up to 96/81 percent respectively. Of course, I've seen cases where SupCon performs better, but it takes quite a bit of work to make it outperform CE.

  • Q: How long should I train with SupCon?

    A: The answer is tricky. On one hand, the authors of the original paper claim that the longer you train with SupCon, the better it gets. However, I did not observe such behavior in my tests. So the only recommendation I can give is the following: start with 100 epochs for easy datasets (like CIFAR10/100), and 1000 for more industrial ones. Then monitor the training process: if the validation metric (such as precision_at_1) doesn't improve for several dozen epochs, you can stop training. You might want to incorporate early stopping into the pipeline for this reason.
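
As mentioned in the embeddings FAQ above, here is a minimal sketch of nearest-neighbor classification on embeddings with faiss. The arrays are illustrative stand-ins, not the repo's actual validation code:

    import faiss
    import numpy as np

    train_embeddings = np.random.rand(5000, 128).astype('float32')  # stand-in
    train_labels = np.random.randint(0, 10, size=5000)              # stand-in
    query_embeddings = np.random.rand(16, 128).astype('float32')    # stand-in

    # L2-normalize so that inner product equals cosine similarity
    faiss.normalize_L2(train_embeddings)
    faiss.normalize_L2(query_embeddings)

    index = faiss.IndexFlatIP(train_embeddings.shape[1])
    index.add(train_embeddings)

    _, neighbors = index.search(query_embeddings, 1)  # closest train sample per query
    predictions = train_labels[neighbors[:, 0]]       # its class is the prediction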
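
And on the EMA FAQ: the natural way to turn ema_decay_per_epoch into the per-step decay an EMA update actually uses is to take the root over the number of steps per epoch, so that applying the per-step decay for a whole epoch yields the per-epoch value. This conversion is my assumption about how the config parameter is applied, not verified against the code:

    # Converting a per-epoch EMA decay into a per-step decay (assumed conversion)
    steps_per_epoch = 390                  # e.g. CIFAR10 with batch size 128
    ema_decay_per_epoch = 0.3              # config value suggested for big datasets
    per_step_decay = ema_decay_per_epoch ** (1.0 / steps_per_epoch)
    print(per_step_decay)                  # ~0.997: shadow weights move slowly per step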
