Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude, using coresets and data selection.

Overview



COResets and Data Subset selection




What is CORDS?

CORDS is a COReset and Data Selection library for making machine learning time-, energy-, cost-, and compute-efficient. CORDS is built on top of PyTorch. Deep learning systems today are extremely compute-intensive, with large turnaround times, energy inefficiencies, high costs, and heavy resource requirements [1,2]. CORDS is an effort to make deep learning more energy-, cost-, resource-, and time-efficient without sacrificing accuracy. CORDS aims to achieve the following goals:

Data Efficiency

Reducing End to End Training Time

Reducing Energy Requirement

Faster Hyper-parameter tuning

Reducing Resource (GPU) Requirement and Costs

The primary purpose of CORDS is to select the right representative data subsets from massive datasets, and it does so iteratively. CORDS uses recent advances in data subset selection, particularly ideas from coresets and submodularity, to select such subsets. CORDS implements a number of state-of-the-art data subset selection and coreset algorithms, currently including:

  • GLISTER
  • GRAD-MATCH
  • CRAIG
  • Submodular selection strategies (built on submodlib)
  • Random (online) selection

We are continuously incorporating newer and better algorithms into CORDS. Some of the features of CORDS include:

  • Reproducibility of SOTA in data selection and coresets: CORDS enables easy reproducibility of the state-of-the-art results described above. We are also continually adding more algorithms, so if there is an algorithm you would like us to include, please let us know.
  • Benchmarking: We have benchmarked CORDS (and the algorithms present right now) on several datasets including CIFAR-10, CIFAR-100, MNIST, SVHN and ImageNet.
  • Ease of use: One of the main goals of CORDS is to be easy to use and easy to extend. Feel free to contribute to CORDS!
  • Modular design: The data selection algorithms are decoupled from the training loop, enabling a modular design and a variety of usage scenarios (see the training-loop sketch after this list).
  • Broad range of use cases: CORDS is currently implemented for simple image classification tasks and hyperparameter tuning, but we are working on integrating a number of additional use cases such as object detection, speech recognition, semi-supervised learning, AutoML, etc.
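
To make the modular design concrete, here is a minimal sketch of the weighted training loop used with CORDS data loaders. The pattern mirrors the CIFAR-10 notebook quoted in the comments below; construction of the CORDS data loader itself is omitted, and the model, optimizer, and data loader are assumed to already exist.

    import torch

    def train_one_epoch(model, dataloader, optimizer, device="cuda"):
        # Per-sample losses are needed so each example can be re-weighted.
        criterion_nored = torch.nn.CrossEntropyLoss(reduction="none")
        model.train()
        # CORDS data loaders yield a per-sample weight alongside each batch.
        for inputs, targets, weights in dataloader:
            inputs = inputs.to(device)
            targets = targets.to(device, non_blocking=True)
            weights = weights.to(device)

            optimizer.zero_grad()
            outputs = model(inputs)
            losses = criterion_nored(outputs, targets)
            # Weighted mean: the coreset weights compensate for training
            # on a subset rather than the full dataset.
            loss = torch.dot(losses, weights / weights.sum())
            loss.backward()
            optimizer.step()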

Installation

  1. To install the latest version of the CORDS package from PyPI:

    pip install -i https://test.pypi.org/simple/ cords
  2. To install from source:

    git clone https://github.com/decile-team/cords.git
    cd cords
    pip install -r requirements/requirements.txt

Next Steps

Tutorials

Documentation

The documentation for the latest version of CORDS can always be found here.

Comments
  • Logistic Regression support for Gradmatch


    The logistic regression model throws errors when we do backpropagation. The fix for this is perhaps setting freeze=False in the forward function of utils/models/logreg_net.py.
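
    For context, a freeze flag of this kind typically wraps the forward computation in torch.no_grad(), which detaches the computation graph and makes a later backward pass fail. The following is an illustrative sketch of that pattern (an assumption for illustration, not the verbatim contents of logreg_net.py):

    import torch
    import torch.nn as nn

    class LogisticRegNet(nn.Module):
        def __init__(self, input_dim, num_classes):
            super().__init__()
            self.linear = nn.Linear(input_dim, num_classes)

        def forward(self, x, freeze=False):
            if freeze:
                # no_grad() detaches the graph, so a later loss.backward()
                # raises "element 0 of tensors does not require grad".
                with torch.no_grad():
                    scores = self.linear(x)
            else:
                scores = self.linear(x)
            return scores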

    opened by nlokeshiisc 4
  • [Bug] Got weight with same value when running examples.


    Hi, I tested the example with supervised learning and the GLISTER strategy: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb. But when I print the weights from the train loader, they are all 1.0. I believe that with the GLISTER strategy we should get different weights.

    tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
            1., 1.], device='cuda:0')
    

    Is that a bug or something special? Thanks.

    opened by HaoKang-Timmy 3
  • Segmentation fault (core dumped)


    Hi,

    I was trying to deploy CORDS selection in my training, but this error popped up: Segmentation fault (core dumped).

    I imitated code from https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb.

    So basically I put my training and testing loaders into GLISTERDataLoader, and swapped this part into my code:

    for _, (inputs, targets, weights) in enumerate(dataloader):
        inputs = inputs.to(device)
        targets = targets.to(device, non_blocking=True)
        weights = weights.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        losses = criterion_nored(outputs, targets)
        loss = torch.dot(losses, weights / (weights.sum()))
        loss.backward()

    Before modifying it, my code was running fine, so I believe there is an error inside CORDS. My dataset is CIFAR-10.

    Thanks

    opened by chengwuxinlin 2
  • Replace apricot with submodlib


    Fixes #16
    submodlib is now used for the CRAIG strategy/dataloader as well as the submodular strategy/dataloader. Please let me know if you have any feedback!

    Notes:

    • I am not sure if sum redundancy (a submodular function implemented in apricot) has an analogue in submodlib, so it is disabled as an option for now.
    • It doesn't seem like submodularselectionstrategy.py is used in the corresponding dataloader. This may be a good opportunity to refactor, so that behavior is consistent between the two.
    • Any existing code that specifies the "optimizer" (greedy algorithm) used by apricot will break, since the names used by submodlib are different from those used by apricot (e.g., 'LazyGreedy' instead of 'Lazy'). This includes configs that use this option; see the illustrative fragment below.
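
    For illustration, a config fragment written after this change might specify the submodlib-style optimizer name. The field names and values below are assumptions based on the note above, not a verbatim CORDS config:

    # Hypothetical dss_args fragment; optimizer names now follow submodlib.
    dss_args = dict(
        type="CRAIG",
        fraction=0.1,
        optimizer="LazyGreedy",  # previously 'Lazy' under apricot
    )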
    opened by ghost 1
  • Typo in cords_cifar10_glister_train.ipynb


    There is a typo in the cords_cifar10_glister_train.ipynb notebook: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/cords_cifar10_glister_train.ipynb

    glister_trn.configdata.train_args.print_every = 1
    glister_trn.configdata.train_args.device = 'cuda'
    glister_trn.configdata.dss_args.fraction = fraction
    

    instead of

    glister_trn.cfg.train_args.print_every = 1
    glister_trn.cfg.train_args.device = 'cuda'
    glister_trn.cfg.dss_args.fraction = fraction
    
    opened by eendee 1
  • Evaluation on ImageNet


    Hello, thanks for a very interesting and useful project.

    Would you mind providing an evaluation method for ImageNet? I tried adding a loader for ImageNet to custom_dataset.py, but failed due to a GPU memory issue during subset selection.

    Many thanks!

    opened by Hayoung93 1
  • For GRAD_MATCH method, the weights associated with each data point in X (subset of training set)


    1. For the GRAD-MATCH method, there are weights associated with each data point in X (a subset of the training set). Do the weights have physical significance? For example, if the value of a weight is higher, does the corresponding selected data point make a greater contribution to the residual? (See the sketch after this list.)
    2. During the iteration, the selected index is already among the selected indices, so the iteration breaks. Why does this happen? [email protected]
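
    For context, GRAD-MATCH chooses the subset and its weights so that the weighted sum of per-sample gradients matches the full-training-set gradient, so a higher weight does mean a larger contribution to the matched gradient. A rough sketch of the objective (illustrative names, not the library's implementation):

    import torch

    def matching_error(subset_grads, weights, full_grad):
        # subset_grads: (k, d) per-sample gradients of the selected points
        # weights:      (k,)  non-negative coefficients found via OMP
        # full_grad:    (d,)  gradient over the full training set
        approx = subset_grads.T @ weights
        # GRAD-MATCH picks the subset and weights to minimize this residual.
        return torch.norm(approx - full_grad)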
    opened by lishaguo 1
  • Questions about accuracy logging


    Hello! Thanks for your great work.

    I'm currently working on this code and I want to ask a question about accuracy logging.

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L530-L541

    In line 541 of train.py, val_acc contains cumulative accuracies over input batches. For example, if the loader contains 4500 examples and the batch size is 1000, then tst_acc has 5 accuracies per evaluation (the first element of tst_acc will be the accuracy over the first 1000 examples).

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L631-L633

    In line 633, it prints the best value in tst_acc. In this case, the resulting best accuracies over different algorithms and seeds might be values evaluated on different test samples.

    Is this what you intended? In my experience, evaluating algorithms on an identical test dataset is the convention. In addition, are the reported test accuracies in the GRAD-MATCH paper the best values as above, or the final test accuracy?

    Best, Jang-Hyun
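
    For reference, the convention the commenter describes, a single accuracy aggregated over the entire test set rather than the best per-batch value, would look like the following sketch (the model and loader are assumed to exist):

    import torch

    @torch.no_grad()
    def dataset_accuracy(model, loader, device="cuda"):
        # One accuracy for the whole test set, not a list of per-batch values.
        model.eval()
        correct = total = 0
        for inputs, targets in loader:
            preds = model(inputs.to(device)).argmax(dim=1)
            correct += (preds == targets.to(device)).sum().item()
            total += targets.size(0)
        return correct / total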

    opened by Janghyun1230 1
  • CORDS gradient calculations for different loss functions


    a) Implement gradient calculation for squared loss, negative logistic loss, hinge loss, and general loss functions.

    b) Integrate the new gradient calculation with different selection strategies

    enhancement 
    opened by krishnatejakk 1
  • Refactor the folders in the repo


    • Add a folder called benchmarks which has all the results/benchmarks for the various cases. We should remove the results from the main README and point to that folder. Also, add the notebooks to reproduce the benchmark results.
    • Rename notebooks to tutorials. Add different tutorials based on use cases (NLP, vision, SSL, hyper-parameter tuning, NAS, etc.).
    opened by rishabhk108 0
  • Inquiry about performance of gradmatch


    Hello, I ran some experiments with gradmatch and randomonline, and found that the two actually reach similar performance after 300 epochs (around 93). Is there something important to note for reproducing the results? Thanks for your help!

    opened by pipilurj 0
  • Implement faster version of OMP


    Implement the following versions of OMP:

    1. FNNOMP (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012095)
    2. SNNOMP (https://hal.univ-lorraine.fr/hal-01585253/document)
    high priority in progress 
    opened by krishnatejakk 0
  • Gradmatch Data subset selection method making training slow


    I tried to run some experiments as follows:

    • Ran full CIFAR-10 without any subset selection method to train ResNet-50, which took around 32m 31s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.1 fraction, which took longer than full CIFAR-10, i.e., 22h 48m 40s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.3 fraction, which took longer than the 0.1 GradMatch selection.

    I am using scaled-resolution CIFAR-10 images, i.e., 224x224 resolution, and defined the ResNet-50 architecture accordingly. Can you let me know how to speed up experiments 2 and 3? In general, a subset selection method should speed up the whole training process, right?

    opened by animesh-007 9
  • Implement CRUST Algorithm


    1. Implement the CRUST strategy in the supervised learning setting.
    2. Create the CRUST data loader class, building it on top of the adaptive_dataloader class.
    enhancement 
    opened by krishnatejakk 0
Releases(v0.0.1)
  • v0.0.1(Mar 24, 2022)

    What's Changed

    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/73
    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/74

    New Contributors

    • @sahasrarjn made their first contribution in https://github.com/decile-team/cords/pull/73

    Full Changelog: https://github.com/decile-team/cords/compare/v0.0.0...v0.0.1

  • v0.0.0(Mar 4, 2022)

    Pre-release of CORDS

    What's Changed

    • Dev by @krishnatejakk in https://github.com/decile-team/cords/pull/9
    • CONFIG Files Pull by @krishnatejakk in https://github.com/decile-team/cords/pull/10
    • New Gradient Computation Code by @krishnatejakk in https://github.com/decile-team/cords/pull/11
    • Feature: add support for hyperparameter tuning with subset selection by @savan77 in https://github.com/decile-team/cords/pull/12
    • Added checkpoints to save the model and updated documentation by @dheerajnbhat in https://github.com/decile-team/cords/pull/15
    • test CI and dual tests by @noilreed in https://github.com/decile-team/cords/pull/29
    • Dual CI flow merge to main by @noilreed in https://github.com/decile-team/cords/pull/30
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/36
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/40
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/66

    New Contributors

    • @krishnatejakk made their first contribution in https://github.com/decile-team/cords/pull/9
    • @savan77 made their first contribution in https://github.com/decile-team/cords/pull/12
    • @dheerajnbhat made their first contribution in https://github.com/decile-team/cords/pull/15
    • @noilreed made their first contribution in https://github.com/decile-team/cords/pull/29

    Full Changelog: https://github.com/decile-team/cords/commits/v0.0.0

Owner
decile-team (DECILE: Data EffiCient machIne LEarning)