Reduce end to end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using coresets and data selection.

Overview


            

COResets and Data Subset selection



What is CORDS?

CORDS is a COReset and Data Selection library for making machine learning time-, energy-, cost-, and compute-efficient. CORDS is built on top of PyTorch. Deep learning systems today are extremely compute-intensive, with large turnaround times, energy inefficiencies, high costs, and heavy resource requirements [1,2]. CORDS is an effort to make deep learning more energy-, cost-, resource-, and time-efficient without sacrificing accuracy. CORDS aims to achieve the following goals:

Data Efficiency

Reducing End to End Training Time

Reducing Energy Requirement

Faster Hyper-parameter tuning

Reducing Resource (GPU) Requirement and Costs

The primary purpose of CORDS is to iteratively select the right representative data subsets from massive datasets. CORDS uses recent advances in data subset selection, in particular ideas from coresets and submodularity, to select such subsets, and implements a number of state-of-the-art data subset selection and coreset algorithms.
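To give a flavour of the submodularity-based selection these strategies build on, here is a minimal, self-contained sketch (illustrative only, not CORDS code) of naive greedy maximization of a facility-location function over pairwise similarities; the actual CORDS strategies use optimized solvers and operate on quantities such as per-sample gradients.

    # Illustrative only: naive greedy facility-location selection, not CORDS code.
    import numpy as np

    def greedy_facility_location(similarity, budget):
        """Greedily maximize F(S) = sum_i max_{j in S} similarity[i, j].
        similarity: (n, n) pairwise similarity matrix; budget: subset size."""
        n = similarity.shape[0]
        selected = []
        coverage = np.zeros(n)              # best similarity of each point to S
        for _ in range(budget):
            # marginal gain of adding each candidate column j
            gains = np.maximum(similarity, coverage[:, None]).sum(axis=0) - coverage.sum()
            gains[selected] = -np.inf       # never re-select
            j = int(np.argmax(gains))
            selected.append(j)
            coverage = np.maximum(coverage, similarity[:, j])
        return selected

Here, similarity could be, for example, the cosine similarity between per-sample feature embeddings, and the returned indices form the representative subset.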

We are continuously incorporating newer and better algorithms into CORDS. Some of the features of CORDS include:

  • Reproducibility of SOTA in Data Selection and Coresets: Enables easy reproducibility of the state-of-the-art results described above. We are also working to add more algorithms, so if there is an algorithm you would like us to include, please let us know.
  • Benchmarking: We have benchmarked CORDS (and the algorithms currently present) on several datasets, including CIFAR-10, CIFAR-100, MNIST, SVHN, and ImageNet.
  • Ease of Use: One of the main goals of CORDS is to be easy to use and easy to extend. Feel free to contribute to CORDS!
  • Modular design: The data selection algorithms are separate from the training loop, enabling a modular design and a variety of usage scenarios.
  • Broad range of use cases: CORDS currently supports simple image classification tasks and hyperparameter tuning, but we are working on integrating a number of additional use cases such as object detection, speech recognition, semi-supervised learning, AutoML, etc.

Installation

  1. To install the latest version of the CORDS package via pip:

    pip install -i https://test.pypi.org/simple/ cords
  2. To install from source:

    git clone https://github.com/decile-team/cords.git
    cd cords
    pip install -r requirements/requirements.txt
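
Once installed, the typical pattern is to wrap your existing PyTorch train loader in one of the CORDS subset-selection data loaders and train on the weighted subset it yields. The sketch below loosely follows the supervised-learning CIFAR-10 example notebook referenced in the issues further down; the import path, the dss_args fields, and the GLISTERDataLoader arguments are taken from that notebook as assumptions and may differ between versions, so treat the documentation and tutorials as authoritative.

    # Rough usage sketch; argument names follow the SL CIFAR-10 example notebook
    # and are assumptions, not a stable API reference.
    import torch
    from dotmap import DotMap
    from cords.utils.data.dataloader.SL.adaptive import GLISTERDataLoader

    # model, trainloader, valloader, optimizer and logger are assumed to be
    # set up beforehand, exactly as in the example notebook.
    criterion_nored = torch.nn.CrossEntropyLoss(reduction='none')  # per-sample losses

    dss_args = DotMap(dict(model=model, loss=criterion_nored, eta=0.01,
                           num_classes=10, num_epochs=300, device='cuda',
                           fraction=0.1,       # train on 10% of the data
                           select_every=20,    # re-select the subset every 20 epochs
                           kappa=0, linear_layer=False,
                           selection_type='SL', greedy='Stochastic'))
    dataloader = GLISTERDataLoader(trainloader, valloader, dss_args, logger,
                                   batch_size=20, shuffle=True, pin_memory=False)

    for epoch in range(dss_args.num_epochs):
        for inputs, targets, weights in dataloader:   # note the extra per-sample weights
            inputs, targets = inputs.to('cuda'), targets.to('cuda', non_blocking=True)
            weights = weights.to('cuda')
            optimizer.zero_grad()
            losses = criterion_nored(model(inputs), targets)
            loss = torch.dot(losses, weights / weights.sum())  # weighted loss
            loss.backward()
            optimizer.step()

The other strategies discussed in the issues below (GradMatch, CRAIG, etc.) are used through analogous data loaders.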

Next Steps

Tutorials

Documentation

The documentation for the latest version of CORDS can always be found here.

Comments
  • Logistic Regression support for Gradmatch

    The Logistic Regression model throws errors during backpropagation. A possible fix is to set freeze=False in the forward function of utils/models/logreg_net.py.
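
    For context, here is a hypothetical sketch (not the actual utils/models/logreg_net.py) of the freeze pattern being discussed: if the forward pass runs under torch.no_grad() because freeze stays True, no autograd graph is built and the subsequent backward call fails.

    # Hypothetical illustration of the freeze flag discussed above;
    # names and structure are assumptions, not the real logreg_net.py.
    import torch
    import torch.nn as nn

    class LogReg(nn.Module):
        def __init__(self, in_dim, num_classes):
            super().__init__()
            self.linear = nn.Linear(in_dim, num_classes)

        def forward(self, x, freeze=False):
            if freeze:
                # no autograd graph is built here, so loss.backward()
                # later raises an error / yields no gradients
                with torch.no_grad():
                    return self.linear(x)
            return self.linear(x)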

    opened by nlokeshiisc 4
  • [Bug] Got weight with same value when running examples.

    Hi, I tested the supervised learning example with the GLISTER strategy (https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb). But when I print the weights from the train loader, they are all 1.0. I believe that with the GLISTER strategy we should get different weights.

    tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
            1., 1.], device='cuda:0')
    

    Is that a bug or something special? Thanks.

    opened by HaoKang-Timmy 3
  • Segmentation fault (core dumped)

    Hi,

    I was trying to deploy CORDS selection in my training, but this error popped up: Segmentation fault (core dumped).

    I imitated code from https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb.

    So basically I put my training and testing loaders into GLISTERDataLoader, and switched this part of my code to:

    for _, (inputs, targets, weights) in enumerate(dataloader):
        inputs = inputs.to(device)
        targets = targets.to(device, non_blocking=True)
        weights = weights.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        losses = criterion_nored(outputs, targets)
        loss = torch.dot(losses, weights / weights.sum())
        loss.backward()

    Before this modification my code was running fine, so I believe there is an error inside CORDS. My dataset is CIFAR10.

    Thanks

    opened by chengwuxinlin 2
  • Replace apricot with submodlib

    Fixes #16
    submodlib is now used for the CRAIG strategy/dataloader as well as the submodular strategy/dataloader. Please let me know if you have any feedback!

    Notes:

    • I am not sure if sum redundancy (a submodular function implemented in apricot) has an analogue in submodlib, so it is disabled as an option for now.
    • It doesn't seem like submodularselectionstrategy.py is used in the corresponding dataloader. This may be a good opportunity to refactor, so that behavior is consistent between the two.
    • Any existing code that specifies the "optimizer" (greedy algorithm) used by apricot will break, since the names used by submodlib differ from those used by apricot (e.g. 'LazyGreedy' instead of 'Lazy'). This includes configs that use this option.
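
    As a purely hypothetical illustration of the last point (the key names here are made up for the example and may not match the real configs), a config that pinned the apricot optimizer name would need updating to the submodlib name:

    # Hypothetical config fragment; key names are illustrative only.
    dss_args = dict(
        type='CRAIG',
        fraction=0.1,
        # optimizer='Lazy',        # apricot-era name: breaks after the switch
        optimizer='LazyGreedy',    # corresponding submodlib name
    )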
    opened by ghost 1
  • Typo in cords_cifar10_glister_train.ipynb

    There is a typo in the cords_cifar10_glister_train.ipynb notebook: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/cords_cifar10_glister_train.ipynb

    glister_trn.configdata.train_args.print_every = 1
    glister_trn.configdata.train_args.device = 'cuda'
    glister_trn.configdata.dss_args.fraction = fraction
    

    instead of

    glister_trn.cfg.train_args.print_every = 1
    glister_trn.cfg.train_args.device = 'cuda'
    glister_trn.cfg.dss_args.fraction = fraction
    
    opened by eendee 1
  • Evaluation on ImageNet

    Hello, thanks for a very interesting and useful project.

    Would you mind providing an evaluation method for ImageNet? I tried adding a loader for ImageNet to custom_dataset.py, but failed due to a GPU memory issue during subset selection.

    Many thanks!

    opened by Hayoung93 1
  • For GRAD_MATCH method, the weights associated with each data point in X(subset of training set)

    1. For the GRAD-MATCH method, there are weights associated with each data point in X (the subset of the training set). Do the weights have a physical significance? For example, if the value of a weight is higher, does the corresponding selected data point make a greater contribution to the residual?
    2. During the iteration, the selected index is already in the selected indices, so the iteration breaks. Why does this happen?
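
    For reference, a rough statement of the objective behind those weights (a paraphrase of the GRAD-MATCH idea, not code or text from this repo): the subset S and non-negative weights w are chosen by OMP so that the weighted sum of per-sample gradients matches the full gradient, so a larger w_i does mean sample i contributes more to reducing that residual:

        \min_{w \ge 0,\; |S| \le k} \left\lVert \sum_{i \in S} w_i \, \nabla_\theta \ell_i(\theta) \;-\; \nabla_\theta L(\theta) \right\rVert

    where \ell_i is the loss on sample i and L is the full training (or validation) loss.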
    opened by lishaguo 1
  • Questions about accuracy logging

    Hello! Thanks for your great work.

    I'm currently working on this code and I want to ask a question about accuracy logging.

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L530-L541

    In line 541 of train.py, val_acc contains cumulative accuracies over the input batches. For example, if the loader contains 4500 examples and the batch size is 1000, then tst_acc holds 5 accuracies per evaluation (the first element of tst_acc will be the accuracy over the first 1000 examples).

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L631-L633

    In line 633, it prints the best value in tst_acc. In this case, the resulting best accuracies for different algorithms and seeds might be values evaluated on different test samples.

    Is this what you intended? In my experience, evaluating algorithms on an identical test dataset is the convention. In addition, are the test accuracies reported in the GRAD-MATCH paper the best values as above, or the final test accuracy?
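
    For reference, a minimal sketch of the aggregation this comment argues for, i.e. accumulating correct counts over the whole loader and reporting a single accuracy per evaluation (generic PyTorch, not a patch against train.py):

    import torch

    @torch.no_grad()
    def evaluate(model, loader, device='cuda'):
        model.eval()
        correct, total = 0, 0
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.size(0)
        return 100.0 * correct / total      # one number per evaluation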

    Best, Jang-Hyun

    opened by Janghyun1230 1
  • CORDS gradient calculations for different loss functions

    a) Implement gradient calculations for squared loss, negative logistic loss, hinge loss, and a general loss-function gradient computation.

    b) Integrate the new gradient calculations with the different selection strategies.
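
    As a rough illustration of what (a) involves (not CORDS code), the per-sample gradient of each loss with respect to the model's output score has a simple closed form that can then be pushed through the final layer; for a binary label y in {-1, +1} and score z:

    # Illustrative closed-form per-sample gradients w.r.t. the score z;
    # binary labels y in {-1, +1}. Not the CORDS implementation.
    import torch

    def squared_loss_grad(z, y):
        # L = (z - y)^2          ->  dL/dz = 2 * (z - y)
        return 2.0 * (z - y)

    def logistic_loss_grad(z, y):
        # L = log(1 + exp(-y*z)) ->  dL/dz = -y * sigmoid(-y*z)
        return -y * torch.sigmoid(-y * z)

    def hinge_loss_grad(z, y):
        # L = max(0, 1 - y*z)    ->  dL/dz = -y where 1 - y*z > 0, else 0
        return torch.where(1.0 - y * z > 0, -y, torch.zeros_like(z))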

    enhancement 
    opened by krishnatejakk 1
  • Refactor the folders in the repo

    • Add a folder called benchmarks which holds all the results/benchmarks for the various cases. We should remove the results from the main README and point to that folder. Also, add the notebooks needed to reproduce the benchmark results.
    • Rename notebooks to tutorials. Add different tutorials based on use cases (NLP, vision, SSL, hyper-parameter tuning, NAS, etc.).
    opened by rishabhk108 0
  • Inquiry about performance of gradmatch

    Hello, I ran some experiments with gradmatch and randomonline, and found that the two actually reach similar performance after 300 epochs (around 93). Is there something important to note for reproducing the reported results? Thanks for your help!

    opened by pipilurj 0
  • Implement faster version of OMP

    Implement the following versions of OMP:

    1. FNNOMP (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012095)
    2. SNNOMP (https://hal.univ-lorraine.fr/hal-01585253/document)
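
    For context, a minimal reference implementation of plain OMP (the FNNOMP/SNNOMP variants requested above add non-negativity constraints and faster update schemes on top of this baseline):

    # Minimal vanilla Orthogonal Matching Pursuit, for reference only.
    import numpy as np

    def omp(A, b, k):
        """Greedily pick at most k columns of A whose span best reconstructs b.
        Returns (support indices, least-squares coefficients)."""
        residual = b.copy()
        support, x = [], np.zeros(0)
        for _ in range(k):
            # column most correlated with the current residual
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j in support:                # already selected: stop early
                break
            support.append(j)
            # re-fit b on all selected columns, then update the residual
            x, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ x
        return support, x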
    high priority in progress 
    opened by krishnatejakk 0
  • Gradmatch Data subset selection method making training slow

    I tried to run some experiments as follows:

    • Ran full CIFAR-10 without any subset selection method to train a ResNet-50, which took around 32m 31s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.1 fraction, which took longer than full CIFAR-10, i.e. 22h 48m 40s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.3 fraction, which took longer than the 0.1-fraction GradMatch run.

    I am using scaled-resolution CIFAR-10 images, i.e. 224x224, with a correspondingly defined ResNet-50 architecture. Can you let me know how to speed up experiments 2 and 3? In general, a subset selection method should speed up the whole training process, right?

    opened by animesh-007 9
  • Implement CRUST Algorithm

    1. Implement the CRUST strategy in the supervised learning setting.
    2. Create the CRUST data loader class, building it on top of the adaptive_dataloader class.
    enhancement 
    opened by krishnatejakk 0
Releases(v0.0.1)
  • v0.0.1(Mar 24, 2022)

    What's Changed

    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/73
    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/74

    New Contributors

    • @sahasrarjn made their first contribution in https://github.com/decile-team/cords/pull/73

    Full Changelog: https://github.com/decile-team/cords/compare/v0.0.0...v0.0.1

  • v0.0.0(Mar 4, 2022)

    Pre-release of CORDS

    What's Changed

    • Dev by @krishnatejakk in https://github.com/decile-team/cords/pull/9
    • CONFIG Files Pull by @krishnatejakk in https://github.com/decile-team/cords/pull/10
    • New Gradient Computation Code by @krishnatejakk in https://github.com/decile-team/cords/pull/11
    • Feature: add support for hyperparameter tuning with subset selection by @savan77 in https://github.com/decile-team/cords/pull/12
    • Added checkpoints to save the model and updated documentation by @dheerajnbhat in https://github.com/decile-team/cords/pull/15
    • test CI and dual tests by @noilreed in https://github.com/decile-team/cords/pull/29
    • Dual CI flow merge to main by @noilreed in https://github.com/decile-team/cords/pull/30
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/36
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/40
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/66

    New Contributors

    • @krishnatejakk made their first contribution in https://github.com/decile-team/cords/pull/9
    • @savan77 made their first contribution in https://github.com/decile-team/cords/pull/12
    • @dheerajnbhat made their first contribution in https://github.com/decile-team/cords/pull/15
    • @noilreed made their first contribution in https://github.com/decile-team/cords/pull/29

    Full Changelog: https://github.com/decile-team/cords/commits/v0.0.0

Owner
decile-team (DECILE: Data EffiCient machIne LEarning)