ISL

Overview

This is the official PyTorch implementation of the paper Instance Similarity Learning for Unsupervised Feature Representation, accepted at ICCV 2021. The code covers training and testing several network architectures (ResNet18, ResNet50 and AlexNet) on four datasets (SVHN, CIFAR10, CIFAR100 and ImageNet) using our proposed ISL method.

Quick Start

Prerequisites

  • python 3.5+
  • pytorch 1.0.0+
  • torchvision 0.2.1 (not compatible with higher versions)
  • other packages like numpy and PIL

Dataset Preparation

Please follow the official instructions to download the ImageNet dataset. For small datasets like SVHN, you can either download them manually or set the download parameter of torchvision.datasets to True to download them automatically.

After downloading, place the datasets in data/. An SSD is highly recommended for training on ImageNet.
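
For example, the small datasets can be fetched automatically with a few lines of torchvision (a minimal sketch; the splits shown are assumptions, and the training script applies its own transforms and data loaders):

from torchvision import datasets

root = 'data/'
datasets.SVHN(root, split='train', download=True)
datasets.SVHN(root, split='test', download=True)
datasets.CIFAR10(root, train=True, download=True)
datasets.CIFAR10(root, train=False, download=True)
datasets.CIFAR100(root, train=True, download=True)
datasets.CIFAR100(root, train=False, download=True)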

Training and Testing

Small Datasets

For training on SVHN, CIFAR10 or CIFAR100, please run:

python small_main.py --data='data/' --arch='resnet18/resnet50/alexnet' --dataset='svhn/cifar10/cifar100'
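
For example, to train ResNet18 on CIFAR10 (the slash-separated values denote the available choices for each flag):

python small_main.py --data='data/' --arch='resnet18' --dataset='cifar10'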

The training code evaluates a weighted $k$NN classifier ($k=200$) on the learned features every 5 epochs. To test an existing weight file, just run:

python small_main.py --data='data/' --arch='resnet18/resnet50/alexnet' --dataset='svhn/cifar10/cifar100' --test-only=True --recompute=True --resume='weight_file'
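
The weighted $k$NN monitor can be sketched roughly as follows (a minimal sketch assuming L2-normalized features, a memory bank of training features with labels, and exponential similarity weighting with an assumed temperature; not the repository's exact implementation):

import torch

def weighted_knn_predict(test_feats, bank_feats, bank_labels,
                         num_classes, k=200, temperature=0.07):
    # test_feats: (B, D), bank_feats: (N, D); both are assumed L2-normalized,
    # so the dot product below gives cosine similarities.
    sims = test_feats @ bank_feats.t()              # (B, N) similarities
    topk_sims, topk_idx = sims.topk(k, dim=1)       # k nearest neighbors per sample
    topk_labels = bank_labels[topk_idx]             # (B, k) neighbor labels
    weights = (topk_sims / temperature).exp()       # similarity-based vote weights
    votes = torch.zeros(test_feats.size(0), num_classes, device=test_feats.device)
    votes.scatter_add_(1, topk_labels, weights)     # accumulate weighted class votes
    return votes.argmax(dim=1)

Accuracy is then the fraction of predicted labels that match the ground-truth test labels.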

ImageNet

For training on ImageNet, just run:

python imagenet_main.py --data='data/' --arch='resnet18/resnet50/alexnet'

During training, we monitor the weighted $k$NN accuracy with $k=1$ every two epochs, because evaluating with $k=200$ is slow on a large dataset like ImageNet.

For testing using $k$NN with $k=200$, you can run:

python imagenet_main.py --data='data/' --arch='resnet18/resnet50/alexnet' --test-only=True --recompute=True --resume='weight_file'

To reproduce the ResNet ImageNet results in our paper, you need a GPU with 16 GB of memory, such as an NVIDIA Tesla V100 (AlexNet fits on an 11 GB GPU such as an RTX 2080Ti). In our experiments, performance dropped slightly when training on two GPUs. You may also need to switch the training stage manually, because the program sometimes fails to detect the end of GAN training and then cannot use the best generator for neighborhood mining. Training takes around 4 days on a single GPU with a batch size of 256.

Owner
IVG Lab, Department of Automation, Tsinghua University