An essential implementation of BYOL in PyTorch + PyTorch Lightning

Overview

Essential BYOL

A simple and complete implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning in PyTorch + PyTorch Lightning.
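
For reference, the core of the method fits in a few lines: an online network (encoder, projector, predictor) learns to match the projections produced by a target network, which is kept as an exponential moving average of the online weights. The sketch below is only illustrative and does not mirror the exact module or function names used in this repo.

import torch
import torch.nn.functional as F

def byol_loss(p_online, z_target):
    # Negative cosine similarity as in the BYOL paper, i.e. the MSE between
    # L2-normalized vectors: 2 - 2 * <p, z> / (||p|| * ||z||).
    p = F.normalize(p_online, dim=-1)
    z = F.normalize(z_target, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

@torch.no_grad()
def update_target(online, target, tau=0.996):
    # Exponential moving average update of the target network parameters.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data = tau * p_t.data + (1 - tau) * p_o.data

In practice the loss is symmetrized over the two augmented views, and tau is usually annealed towards 1 during training.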

Good stuff:

  • good performance (~67% linear eval accuracy on CIFAR100)
  • minimal code, easy to use and extend
  • multi-GPU / TPU and AMP support provided by PyTorch Lightning
  • ImageNet support (needs testing)
  • linear evaluation is performed during training without any additional forward pass (see the sketch after this list)
  • logging with Wandb
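
Concretely, the online linear evaluation mentioned above reuses the backbone features already computed for the BYOL forward pass: they are detached and fed to a small linear classifier trained with cross-entropy alongside the self-supervised objective. A minimal sketch, with names that are illustrative rather than the ones used in main.py:

import torch.nn as nn
import torch.nn.functional as F

class OnlineLinearEval(nn.Module):
    # Linear probe trained on detached backbone features during BYOL training.
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats, labels):
        # Detach so the probe's gradients never reach the backbone.
        logits = self.classifier(feats.detach())
        return F.cross_entropy(logits, labels)

The probe loss is simply added to the BYOL loss, so no additional pass through the backbone is needed.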

Performance

Linear Evaluation Accuracy

Here is the accuracy after training for 1000 epochs:

Dataset    Acc@1   Acc@5
CIFAR10    91.1%   99.8%
CIFAR100   67.0%   90.5%

Training and Validation Curves

(Plots of training and validation curves for CIFAR10 and CIFAR100.)

Environment

conda create --name essential-byol python=3.8
conda activate essential-byol
conda install pytorch=1.7.1 torchvision=0.8.2 cudatoolkit=XX.X -c pytorch
pip install pytorch-lightning==1.1.6 pytorch-lightning-bolts==0.3 wandb opencv-python

The code has been tested with these package versions, but it will probably work with slightly different environments as well. When you run the code (see below for commands), PyTorch Lightning may throw a warning advising you to install additional packages such as gym, sklearn, and matplotlib. They are not needed for this implementation to work, but you can install them to get rid of the warnings.

Datasets

Three datasets are supported:

  • CIFAR10
  • CIFAR100
  • ImageNet

For ImageNet you need to pass the appropriate --data_dir, while for CIFAR you can simply pass --download to download the dataset.
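
Under the hood, the two CIFAR variants map naturally onto the torchvision dataset classes, while ImageNet is typically read from a standard folder layout. The following is a hedged sketch of how the selection could look; the repo's own data loading code may be organized differently:

from torchvision import datasets

def build_train_dataset(name, data_dir, transform, download=False):
    # Illustrative mapping only; not the repo's actual data module.
    if name == "CIFAR10":
        return datasets.CIFAR10(data_dir, train=True, transform=transform, download=download)
    if name == "CIFAR100":
        return datasets.CIFAR100(data_dir, train=True, transform=transform, download=download)
    if name == "ImageNet":
        # Expects the usual ImageFolder layout under --data_dir.
        return datasets.ImageFolder(f"{data_dir}/train", transform=transform)
    raise ValueError(f"Unsupported dataset: {name}")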

Commands

The repo comes with minimal model-specific arguments; check main.py for details. All arguments of the PyTorch Lightning Trainer are also supported. Default parameters are optimized for CIFAR100 but can also be used for CIFAR10.

Sample command for training on CIFAR100 with a single GPU:

python main.py \
    --gpus 1 \
    --dataset CIFAR100 \
    --batch_size 256 \
    --max_epochs 1000 \
    --arch resnet18 \
    --precision 16 \
    --comment wandb-comment

and with multiple GPUs:

python main.py \
    --gpus 2 \
    --distributed_backend ddp \
    --sync_batchnorm \
    --dataset CIFAR100 \
    --batch_size 256 \
    --max_epochs 1000 \
    --arch resnet18 \
    --precision 16 \
    --comment wandb-comment

Logging

Logging is performed with Wandb: create an account and follow the configuration steps in the terminal. You can pass your username with --entity. Training and validation stats are logged at every epoch. To disable logging entirely, use --offline.
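
A minimal sketch of how --entity, --offline, and --comment might be forwarded to PyTorch Lightning's WandbLogger; the wiring shown here is an assumption, check main.py for the actual code:

from pytorch_lightning.loggers import WandbLogger

def build_logger(args):
    # "args" is assumed to carry the parsed --comment, --entity and --offline flags.
    return WandbLogger(
        project="essential-byol",   # assumed project name
        name=args.comment,          # run name shown in the Wandb UI
        entity=args.entity,         # your Wandb username or team
        offline=args.offline,       # disables syncing to wandb.ai
    )

# The logger is then handed to the Trainer, e.g.:
# trainer = pl.Trainer.from_argparse_args(args, logger=build_logger(args))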

Contribute

Help is appreciated. Stuff that needs work:

  • test ImageNet performance
  • exclude bias and bn from LARS adaptation (see comments in the code and the sketch below)
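
One common way to implement this is to split the parameters into two optimizer groups, disabling weight decay and the LARS trust-ratio scaling for biases and batch-norm parameters. A sketch under the assumption that the LARS implementation honours a per-group exclusion flag (the key name below, lars_exclude, is hypothetical):

import torch.nn as nn

def split_params(model: nn.Module, weight_decay=1e-6):
    # Biases and 1-D (BatchNorm) parameters are excluded from weight decay
    # and from the LARS adaptation; everything else is handled normally.
    regular, excluded = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        if p.ndim == 1 or name.endswith(".bias"):
            excluded.append(p)
        else:
            regular.append(p)
    return [
        {"params": regular, "weight_decay": weight_decay, "lars_exclude": False},
        {"params": excluded, "weight_decay": 0.0, "lars_exclude": True},
    ]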