A python library for self-supervised learning on images.

Overview

Lightly Logo


Lightly is a computer vision framework for self-supervised learning.

We, at Lightly, are passionate engineers who want to make deep learning more efficient. We want to help popularize the use of self-supervised methods to understand and filter raw image data. Our solution can be applied before any data annotation step and the learned representations can be used to analyze and visualize datasets as well as for selecting a core set of samples.

Tutorials

Want to jump to the tutorials and see lightly in action?

Benchmarks

Currently implemented models and their accuracy on CIFAR-10. All models have been evaluated using kNN. We report the maximum test accuracy over the epochs as well as the peak GPU memory consumption. All models in this benchmark use the same augmentations and the same ResNet-18 backbone. Training precision is set to FP32 and the optimizer is SGD with a cosine learning rate schedule.

Model     Epochs   Batch Size   Test Accuracy   Peak GPU usage
MoCo      200      128          0.83            2.1 GBytes
SimCLR    200      128          0.78            2.0 GBytes
SimSiam   200      128          0.73            3.0 GBytes
MoCo      200      512          0.85            7.4 GBytes
SimCLR    200      512          0.83            7.8 GBytes
SimSiam   200      512          0.81            7.0 GBytes
MoCo      800      512          0.90            7.2 GBytes
SimCLR    800      512          0.89            7.7 GBytes
SimSiam   800      512          0.91            6.9 GBytes
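
The kNN evaluation protocol itself is small enough to sketch. The snippet below is a generic illustration of the idea, not the exact benchmark script: embed the train and test splits with the frozen backbone, then classify each test embedding by a majority vote over its k nearest training neighbors under cosine similarity.

import torch
import torch.nn.functional as F

# Generic sketch of kNN evaluation on precomputed embeddings (assumption:
# train_emb/test_emb are (N, D) float tensors, labels are (N,) long tensors).
def knn_accuracy(train_emb, train_labels, test_emb, test_labels, k=200):
    train_emb = F.normalize(train_emb, dim=1)
    test_emb = F.normalize(test_emb, dim=1)
    sim = test_emb @ train_emb.t()                 # cosine similarities (num_test, num_train)
    _, idx = sim.topk(k, dim=1)                    # k nearest training neighbors
    neighbor_labels = train_labels[idx]            # labels of the neighbors (num_test, k)
    preds = neighbor_labels.mode(dim=1).values     # majority vote per test sample
    return (preds == test_labels).float().mean().item()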

Terminology

Below you can see a schematic overview of the different concepts present in the lightly Python package. The terms in bold are explained in more detail in our documentation.

Overview of the lightly pip package

Quick Start

Lightly requires Python 3.6+. We recommend installing Lightly in a Linux or OSX environment.

Requirements

  • hydra-core>=1.0.0
  • numpy>=1.18.1
  • pytorch_lightning>=0.10.0
  • requests>=2.23.0
  • torchvision
  • tqdm

Installation

You can install Lightly and its dependencies from PyPI with:

pip3 install lightly

We strongly recommend that you install Lightly in a dedicated virtualenv, to avoid conflicting with your system packages.
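
If you prefer the Python API over the CLI, a minimal SimCLR training loop might look roughly like the sketch below. Class names and arguments reflect the Lightly API at the time of writing and may differ between versions; the input directory is a placeholder.

import torch
import torchvision
from lightly.data import LightlyDataset, SimCLRCollateFunction
from lightly.loss import NTXentLoss
from lightly.models.modules import SimCLRProjectionHead

# ResNet-18 backbone without the classification head
resnet = torchvision.models.resnet18()
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
projection_head = SimCLRProjectionHead(512, 512, 128)

dataset = LightlyDataset(input_dir="/mydataset")
collate_fn = SimCLRCollateFunction(input_size=32)  # produces two augmented views per image
dataloader = torch.utils.data.DataLoader(
    dataset, batch_size=128, collate_fn=collate_fn, shuffle=True
)

criterion = NTXentLoss()
params = list(backbone.parameters()) + list(projection_head.parameters())
optimizer = torch.optim.SGD(params, lr=0.06)

for (x0, x1), _, _ in dataloader:
    z0 = projection_head(backbone(x0).flatten(start_dim=1))
    z1 = projection_head(backbone(x1).flatten(start_dim=1))
    loss = criterion(z0, z1)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()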

Command-Line Interface

Lightly is also accessible through a command-line interface (CLI). To train a SimCLR model on a folder of images, you can simply run the following command:

lightly-train input_dir=/mydataset

To create an embedding of a dataset you can use:

lightly-embed input_dir=/mydataset checkpoint=/mycheckpoint

The embeddings, along with the corresponding filenames, are stored in a human-readable .csv file.
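
For downstream analysis you can load this file with any CSV reader. The sketch below uses pandas; the exact column names depend on the Lightly version, so treat them as assumptions.

import pandas as pd

# Load the output of lightly-embed; column names are assumptions and may differ.
df = pd.read_csv("embeddings.csv")
filenames = df["filenames"]
embeddings = df.filter(regex="embedding").to_numpy()  # numeric embedding columns
print(embeddings.shape)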

Next Steps

Head to the documentation and see the things you can achieve with Lightly!

Development

To install dev dependencies (for example to contribute to the framework) you can use the following command:

pip3 install -e ".[dev]"

For more information about how to contribute have a look here.

Running Tests

Unit tests are in the tests folder and we recommend running them with pytest. There are two test configurations available. By default, only a subset of the tests is run; this is faster and should take less than a minute. You can run it using

python -m pytest -s -v

To run all tests (including the slow ones) you can use the following command.

python -m pytest -s -v --runslow

Code Linting

We provide a Pylint config following the Google Python Style Guide.

You can run the linter from your terminal either on a folder

pylint lightly/

or on a specific file

pylint lightly/core.py

Further Reading

Self-supervised Learning:

FAQ

  • Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

    • Self-supervised learning has become increasingly popular among scientists over the last year because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on your dataset, you can make sure that the representations have all the necessary information about your images.
  • How can I contribute?

    • Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute with code are in our contribution guide.
  • Is this framework for free?

    • Yes, this framework is completely free to use and we provide the code. We believe that we need to make training deep learning models more data-efficient to achieve widespread adoption. One step towards this goal is leveraging self-supervised learning. The company behind lightly is committed to keeping this framework open-source.
  • If this framework is free, how is the company behind lightly making money?

    • Training self-supervised models is only part of the solution. The company behind lightly focuses on processing and analyzing embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. This framework acts as an interface for our platform to easily upload and download datasets, embeddings, and models. While the platform will charge for additional features, this framework will always remain free of charge (even for commercial use).
Comments
  • Add Masked Autoencoder implementation

    Add Masked Autoencoder implementation

    The paper Masked Autoencoders Are Scalable Vision Learners (https://arxiv.org/abs/2111.06377) suggests that a masked autoencoder (similar to masked pre-training in NLP) works very well as a pretext task for self-supervised learning. Let's add it to Lightly.

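    As a rough illustration of the pretext task (a generic sketch, not the Lightly API): the encoder only sees a random subset of image patches and the decoder reconstructs the masked ones.

    import torch

    # Generic sketch of MAE-style random patch masking (not the Lightly API).
    images = torch.randn(8, 3, 224, 224)                     # batch of images
    patch_size, mask_ratio = 16, 0.75

    # split into non-overlapping patches: (batch, num_patches, patch_dim)
    patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(8, -1, 3 * patch_size * patch_size)

    num_patches = patches.shape[1]                           # 196 patches for 224x224 images
    num_keep = int(num_patches * (1 - mask_ratio))
    idx = torch.rand(8, num_patches).argsort(dim=1)[:, :num_keep]
    visible = torch.gather(patches, 1, idx.unsqueeze(-1).expand(-1, -1, patches.shape[-1]))
    # `visible` is what the encoder sees; the decoder predicts the masked patches.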

    type: idea 
    opened by IgorSusmelj 19
  • Feature proposals

    Feature proposals

    Hi; first of all thanks for making this library; it looks very useful.

    There are two papers that I'd like to try to implement in your repo:

    Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere

    This paper introduces a contrastive loss function with some quite different properties from the more cited ones. Out of all the loss functions I've tried for my projects, this is the one I had the most success with, and also the one that a priori seems the most sensible to me of what's currently been published. The paper provides source code and it's really a trivially implemented additional loss function. I've got experience implementing it, plus some generalizations I came up with that I could provide. It should be an easy addition to this package. My motivation for contributing is a selfish one, as having it included here in the benchmarks would help me better understand its relative strengths and weaknesses on different types of datasets.
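
    For reference, the two loss terms from that paper are small enough to sketch directly (based on the paper, not existing Lightly code); x and y are L2-normalized embeddings of the two views of each sample.

    import torch
    import torch.nn.functional as F

    # Rough sketch of the alignment and uniformity losses (Wang & Isola, 2020).
    def align_loss(x, y, alpha=2):
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

    x = F.normalize(torch.randn(128, 64), dim=1)
    y = F.normalize(torch.randn(128, 64), dim=1)
    loss = align_loss(x, y) + uniform_loss(x)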

    Scaling Deep Contrastive Learning Batch Size with Almost Constant Peak Memory Usage

    I also recently came across this paper. It also provides (rather ugly, IMO) torch code. Implementing it in this package first would be easier for me than implementing it in my private repos, since it'd be easier to properly test the implementation using the infrastructure provided here. The title is quite descriptive; given the importance of batch size for within-batch mining techniques, and the fact that many of us are working with single GPUs, being able to scale batch sizes arbitrarily is super useful, and I think this paper has the right idea of how to go about it.

    The contribution guide mentions discussing such additions first, so tell me what you think!

    type: idea 
    opened by EelcoHoogendoorn 14
  • Adding DINO to lightly

    Adding DINO to lightly

    First of all Kudos on creating this amazing library ❤️.

    I think all of us in the self-supervised learning community have heard of DINO. For the past couple of weeks, I have been trying to port Facebook's DINO implementation to PL. I have implemented it, at least for my use case, which is a Kaggle competition. I initially looked at lightly for an implementation but did not see one, so I borrowed and adapted code from the original implementation and converted it into Lightning.

    Honestly it was a tedious task. I was wondering if you guys would be interested in adding DINO to lightly.

    Here's how I think we should structure the implementation:

    1. Augmentations: DINO relies heavily on a multi-crop strategy. Since lightly already has a collate_fn for implementing augmentations, the DINO augmentations can be implemented in the same way.
    2. Model forward pass and model heads: The forward pass for the model is unusual since we have to deal with multi-crop and global crops, so this needs to be implemented as an nn.Module like the other heads in lightly.
    3. Loss functions: I am not sure how this should be implemented for lightly, although FB has a custom class for it (a rough sketch is included below).
    4. Utility functions: FB has used some tricks for stable training and results, so these need to be included as well.
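
    For point 3, a rough sketch of the DINO loss (cross-entropy between the sharpened, centered teacher output and the student output, based on the paper rather than existing Lightly code) could look like this:

    import torch
    import torch.nn.functional as F

    # Rough sketch of the DINO loss; temperatures and momentum follow the paper's defaults.
    def dino_loss(student_out, teacher_out, center, student_temp=0.1, teacher_temp=0.04):
        student_log_probs = F.log_softmax(student_out / student_temp, dim=-1)
        teacher_probs = F.softmax((teacher_out - center) / teacher_temp, dim=-1).detach()
        return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

    # the center is an exponential moving average of the teacher outputs
    def update_center(center, teacher_out, momentum=0.9):
        return momentum * center + (1 - momentum) * teacher_out.mean(dim=0)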

    I have used Lightning to implement all of this, and so far, at least for my use case, I was able to train vit-base-16 given my hardware constraints.

    This goes without saying I would personally like to work on PR :heart:

    Please let me know.

    opened by Atharva-Phatak 11
  • Queue for SwaV

    Queue for SwaV

    Aims to solve https://github.com/lightly-ai/lightly/issues/1006

    Implements a queue for SwAV with minimal code changes in the training loop. Implemented as per the details from the paper:

    Rationale


    A trick for small batch sizes - start using queue prototypes at a later epoch

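    A minimal sketch of such a feature queue (register a buffer, drop the oldest entries and enqueue the newest batch; not necessarily how this PR implements it):

    import torch

    # Minimal sketch of a FIFO feature queue for SwAV-style methods.
    class FeatureQueue(torch.nn.Module):
        def __init__(self, size=3840, dim=128):
            super().__init__()
            self.register_buffer("queue", torch.zeros(size, dim))

        @torch.no_grad()
        def update(self, features):
            batch_size = features.shape[0]
            # newest features go to the front, oldest entries fall off the end
            self.queue = torch.cat([features.detach(), self.queue[:-batch_size]], dim=0)
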
    opened by ibro45 10
  • Malte lig 1262 advanced selection external docs

    Malte lig 1262 advanced selection external docs

    Description

    • adds docs for selection using the new config
    • adapts existing docs (getting started, docker active learning)
    • already included: DIVERSIFY->DIVERSITY

    Tabs vs. Dropdowns

    Here, tabs are used for the SelectionStrategy and dropdowns for the configuration examples.

    opened by MalteEbner 8
  • MSN: Masked Siamese Networks for Label-Efficient Learning

    MSN: Masked Siamese Networks for Label-Efficient Learning

    Hello, I would like to share this new interesting SSL method proposed by Meta that achieves SOTA results and shows interesting generalization properties.

    It is based on ViT and I think it could be worth integrating into this codebase. Also, the official implementation is available :)

    MSN Paper, MSN GitHub code

    opened by masc-it 8
  • recipe for using the Lightly Docker as API worker

    recipe for using the Lightly Docker as API worker

    opened by MalteEbner 8
  • Guarin lig 364 add download urls function to lightlyapi

    Guarin lig 364 add download urls function to lightlyapi

    Draft for downloading samples with readurl. I included an iterable dataset for completeness.

    The following code takes 20s to load 1k samples:

    import torch
    from tqdm import tqdm
    
    from lightly.data.iterable_dataset import ImageIterableDataset
    from lightly.api import ApiWorkflowClient
    
    # connect to the Lightly API (placeholder credentials)
    client = ApiWorkflowClient(
        token="token",
        dataset_id="dataset id"
    )
    
    # stream the raw samples as an iterable dataset
    samples = client.download_raw_samples(client.dataset_id)
    dataset = ImageIterableDataset(samples)
    dataloader = torch.utils.data.DataLoader(
        dataset,
        num_workers=8,
        batch_size=8
    )
    
    # iterate once over all samples to measure loading speed
    for image, filename, frame_idx in tqdm(dataloader):
        pass
    

    The same code for videos runs in 10s for 5 videos with 300 frames each using only 2 workers.

    opened by guarin 8
  • enhance the upload speed

    enhance the upload speed

    Using lightly-upload and lightly-magic is not very fast and does not utilise all the cores. There seems to be a bottleneck somewhere.

    Tasks

    • [ ] figure out where most of the time goes when processing
      • [ ] split the images/s number into images/s for processing and images/s for uploading
    • [ ] ensure that when num_workers is set, we really process with that number of workers
    type: enhancement 
    opened by japrescott 8
  • [Active Learning] Compute Active learning scores for object detection

    [Active Learning] Compute Active learning scores for object detection

    Create a scorer for object detection that can compute scores like prediction entropy, ...

    Input of predictions: For each of the N samples, there are B bounding boxes giving the probability of one of the C classes.
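
    As a rough illustration of one such score (an assumption about how it could be computed, not the Lightly implementation): take the entropy of each box's class distribution and aggregate over the boxes of a sample, for example with the maximum.

    import torch

    # predictions: list of N tensors, each of shape (B_i, C) with per-box class probabilities
    def detection_entropy_scores(predictions, eps=1e-8):
        scores = []
        for probs in predictions:
            entropy = -(probs * (probs + eps).log()).sum(dim=1)            # entropy per box
            scores.append(entropy.max().item() if len(entropy) else 0.0)   # aggregate per sample
        return scores

    scores = detection_entropy_scores([torch.softmax(torch.randn(5, 10), dim=1)])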

    • [x] Find a good way to represent the model predictions of an object detection model
    • [x] Decide which kind of models to support (Yolo, SSD, ...), see also https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3
    • [x] Decide which kinds of scores to compute and find a good way to compute them, e.g. see https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3 or https://arxiv.org/pdf/2004.04699.pdf
    • [x] Implement it
    type: enhancement 
    opened by MalteEbner 8
  • NNMemoryBank not working with DataParallel

    NNMemoryBank not working with DataParallel

    I have been using the NNMemoryBank as a component in my module and noticed that at each forward pass NNMemoryBank.bank is equal to None. This only occurs when my module is wrapped in DataParallel. As a result, throughout training my NN pairs are always random noise (surprisingly, this only hurt the contrastive learning performance by a few percentage points on linear probing??).

    Here is a simple test case that highlights the issue:

    import torch
    from lightly.models.modules import NNMemoryBankModule

    # without DataParallel: the bank is populated after the first forward pass
    memory_bank = NNMemoryBankModule(size=1000)
    print(memory_bank.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.bank)

    # with DataParallel: the bank stays None
    memory_bank = NNMemoryBankModule(size=1000)
    memory_bank = torch.nn.DataParallel(memory_bank, device_ids=[0, 1])
    print(memory_bank.module.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.module.bank)
    

    The output of the first is None and a torch.Tensor, as expected. The output for the second is None for both.

    opened by kfallah 7
  • Malte lig 2288 download corrupted files as artifact pip

    Malte lig 2288 download corrupted files as artifact pip

    Description

    • Generated new openapi code
    • Adds the download_compute_worker_run_corruptness_check_information method to download the new corruptness check artifacts

    How was it tested?

    Manually running the example code.

    Why no unittests?

    The other artifact download functions are not tested either, because the code is really tiny. Thus I kept it consistent.

    ONLY MERGE AFTER https://github.com/lightly-ai/lightly-core/pull/2212

    opened by MalteEbner 1
  • MemoryBankModule - no distributed all gather prior to queueing a batch

    MemoryBankModule - no distributed all gather prior to queueing a batch

    If I'm not mistaken, the batch is not all_gathered before the queue is updated. I was just comparing the code against the one it was based on (MoCo's memory bank) and noticed this: https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L55

    Was this done on purpose?

    Is it because different processes would use the same queued batches, unlike the DistributedDataLoader batches, which are indexed in a DDP-aware way?
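
    For reference, the MoCo reference implementation gathers the batch from all processes before enqueueing it, roughly like the sketch below (not existing Lightly code):

    import torch
    import torch.distributed as dist

    # Gather a tensor from all processes and concatenate along the batch dimension.
    @torch.no_grad()
    def concat_all_gather(tensor):
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.cat(gathered, dim=0)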

    opened by ibro45 0
  • how can I reconstruct images (MAE)

    how can I reconstruct images (MAE)

    Hi and thank you for writing this great library.

    I'm trying to figure out if my MAE training worked beyond looking at MSE numbers. I input an image of shape 256x256 (RGB) and then I get predictions of shape b x 38 x 3072. I'm using the default torchvision.models.vit_b_32.

    I am not sure how to reshape this back into an image. It also just doesn't make sense based on the numbers: shouldn't there be 64 patches of 3072?
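
    For what it's worth, reshaping per-patch predictions back into an image usually looks roughly like the sketch below, assuming a square grid of patches and predictions of shape (batch, num_patches, patch_size * patch_size * 3); for a 256x256 image with 32x32 patches that would be 64 patches of 3072 values.

    import torch

    # Generic sketch of turning per-patch predictions back into images.
    def unpatchify(pred, patch_size=32):
        batch, num_patches, _ = pred.shape
        grid = int(num_patches ** 0.5)                       # patches per side
        pred = pred.reshape(batch, grid, grid, patch_size, patch_size, 3)
        pred = pred.permute(0, 5, 1, 3, 2, 4)                # (batch, 3, grid, patch, grid, patch)
        return pred.reshape(batch, 3, grid * patch_size, grid * patch_size)

    images = unpatchify(torch.randn(2, 64, 3072))            # -> (2, 3, 256, 256)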

    opened by DanTaranis 7
  • SSL Online Evaluator Callback with Custom Data (i.e., different than pre-training data)

    SSL Online Evaluator Callback with Custom Data (i.e., different than pre-training data)

    Hi,

    Is it possible to use online evaluation of the SSL encoder with a linear classifier on data different from the one used for pre-training?

    opened by aqibsaeed 2
Releases (v1.2.40)