An end-to-end machine learning library for directly optimizing the AUC loss

Overview

LibAUC

An end-to-end machine learning library for AUC optimization.

Why LibAUC?

Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. Maximizing the AUC score has several benefits over minimizing standard losses such as cross-entropy:

  • In many domains, AUC is the default metric for evaluating and comparing methods, so directly maximizing the AUC score can potentially yield the largest improvement in a model's performance.
  • Many real-world datasets are imbalanced. AUC is better suited to imbalanced data distributions, because maximizing AUC amounts to ranking the prediction score of every positive example above that of every negative example (see the sketch below).
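
To make the ranking view concrete, here is a minimal, self-contained sketch (plain NumPy, not part of LibAUC) that computes the empirical AUC as the fraction of positive/negative pairs that the model ranks correctly:

    import numpy as np

    def pairwise_auc(scores, labels):
        # Empirical AUC: the fraction of (positive, negative) pairs in which
        # the positive example receives the higher score; ties count as half.
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels)
        pos = scores[labels == 1]
        neg = scores[labels == 0]
        correct = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (correct + 0.5 * ties) / (len(pos) * len(neg))

    print(pairwise_auc([0.9, 0.6, 0.4, 0.7], [1, 1, 0, 0]))  # 0.75: 3 of 4 pairs ranked correctly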

Installation

$ pip install libauc

Usage

Official Tutorials:

  • 01. Creating Imbalanced Benchmark Datasets [Notebook][Script]
  • 02. Training ResNet20 with Imbalanced CIFAR10 [Notebook][Script]
  • 03. Training with PyTorch Learning Rate Scheduling [Notebook][Script]
  • 04. Training with Imbalanced Datasets in a Distributed Setting [Coming soon]

Quickstart for beginners:

>>> # import library
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
...
>>> # define loss and optimizer (model, trainloader, testloader are defined elsewhere)
>>> Loss = AUCMLoss(imratio=0.1)
>>> optimizer = PESG(imratio=0.1)
...
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward(retain_graph=True)
...     optimizer.step()
...
>>> # restart stage
>>> optimizer.update_regularizer()
...
>>> # evaluation
>>> model.eval()
>>> for data, targets in testloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
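
To turn the evaluation loop above into an AUC number, one option is to collect the scores and labels and pass them to scikit-learn's roc_auc_score (a sketch; scikit-learn is an assumption here, not a LibAUC requirement):

    import torch
    from sklearn.metrics import roc_auc_score

    # model and testloader as in the quickstart above
    model.eval()
    all_preds, all_targets = [], []
    with torch.no_grad():  # no gradients needed during evaluation
        for data, targets in testloader:
            preds = model(data.cuda())
            all_preds.append(preds.cpu())
            all_targets.append(targets)
    auc = roc_auc_score(torch.cat(all_targets).numpy(),
                        torch.cat(all_preds).squeeze().numpy())
    print(f'test AUC: {auc:.4f}')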

Please visit our website or GitHub for more examples.

Citation

If you find LibAUC useful in your work, please cite the following paper:

@article{yuan2020robust,
title={Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
journal={arXiv preprint arXiv:2012.03173},
year={2020}
}

Contact

If you have any questions, please contact Zhuoning Yuan [[email protected]] and Tianbao Yang [[email protected]], or open a new issue on GitHub.

Comments
  • Only compatible with Nvidia GPU

    I tried running the example tutorial, but I got the following error:

        AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

    opened by Beckham45 2
  • Extend to Multi-class Classification Task and Be compatible with PyTorch scheduler

    Hi Zhuoning,

    This is interesting work! I am wondering whether the DAM method can be extended to a multi-class classification task with long-tailed imbalanced data. Intuitively, this should be possible, since sklearn provides an AUC score for the multi-class setting via the one-versus-rest or one-versus-one technique.

    Besides, it seems that optimizer.update_regularizer() is called only when the learning rate is reduced, so it would be more elegant to fold this function call into a PyTorch LR scheduler. E.g.,

    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    scheduler.step()    # override the step to fulfill: optimizer.update_regularizer()
    
    

    In the current libauc version, the PESG optimizer is not compatible with the schedulers in torch.optim.lr_scheduler. It would be great if this feature could be supported in the future.
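
    Until then, a minimal user-side sketch of the same idea (the epoch loop and evaluate helper are hypothetical; only optimizer.update_regularizer() from the quickstart is assumed from libauc) is to detect the plateau manually:

        best_val, patience, bad_epochs = float('inf'), 3, 0
        for epoch in range(num_epochs):
            val_loss = evaluate(model, valloader)  # hypothetical validation helper
            if val_loss < best_val:
                best_val, bad_epochs = val_loss, 0
            else:
                bad_epochs += 1
            if bad_epochs >= patience:  # plateau detected: restart the inner state
                optimizer.update_regularizer()
                bad_epochs = 0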

    Thanks for your work!

    opened by Cogito2012 2
  • When to use retain_graph=True?

    Hi,

    When to use retain_graph=True in the loss backward function?

    In two of the examples (2 and 4) it is set to True, but not in the others.

    I appreciate your time.

    opened by dfrahman 1
  • Using AUCMLoss with imratio>1

    I'm not very familiar with the maths in the paper, so please forgive me if I'm asking something obvious.

    The AUCMLoss uses the "imbalance ratio" between positive and negative samples. The ratio is defined as

    the ratio of # of positive examples to the # of negative examples

    Or imratio=#pos/#neg

    When #pos < #neg, imratio is a value between 0 and 1, but when #pos > #neg, imratio > 1.

    Will this break the loss calculations? I have a feeling it would invalidate the many 1-self.p calculations in the LibAUC implementation, but since I'm not familiar with the maths I can't say for sure.

    Also, is there a problem (mathematically speaking) with calculating imratio=#pos/#total_samples, to avoid the problem above? When #pos<<#neg, #neg approximates #total_samples.

    opened by ayhyap 1
  • AUCMLoss does not use margin argument

    I noticed that the margin argument of the AUCMLoss class is not used. Following the formulation in the paper, line 20 of the forward function should be changed from 2*self.alpha*(self.p*(1-self.p) + \ to 2*self.alpha*(self.p*(1-self.p)*self.margin + \

    opened by ayhyap 1
  • How to train multi-label classification tasks? (like chexpert)

    I have started using this library and I've read your paper Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification, and I'm still not sure how to train a multi-label classification (MLC) model.

    Specifically, how did you fine-tune for the Chexpert multi-label classification task? (i.e. classify 5 diseases, where each image may have presence of 0, 1 or more diseases)

    • The first step pre-training with Cross-entropy loss seems clear to me
    • You mention: "In the second step of AUC maximization, we replace the last classifier layer trained in the first step by random weights and use our DAM method to optimize the last classifier layer and all previous layers.". The new classifier layer is a single or multi-label classifier?
    • In the Appendix I, figure 7 shows only one score as output for Deep AUC maximization (i.e. only one disease)
    • In the code, both AUCMLoss() and APLoss_SH() apparently receive single-label outputs, not multi-label outputs

    How do you train for the 5 diseases? Sequentially (Cardiomegaly, then Edema, and so on), with 5 losses added up, or something else?
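
    For concreteness, a minimal sketch of the "5 losses added up" reading (hypothetical; not confirmed as the authors' CheXpert recipe, and per_class_imratios is an assumed list of per-disease positive fractions):

        from libauc.losses import AUCMLoss

        num_classes = 5
        losses = [AUCMLoss(imratio=per_class_imratios[c]) for c in range(num_classes)]

        preds = model(data)  # shape (batch, 5): one score per disease
        # reshape to (batch, 1) per class in case AUCMLoss expects 2-D inputs;
        # optimizer wiring for each loss's a, b, alpha variables is omitted
        loss = sum(losses[c](preds[:, c].view(-1, 1), targets[:, c].view(-1, 1))
                   for c in range(num_classes))
        loss.backward(retain_graph=True)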

    opened by pdpino 4
  • Example for tensorflow

    Thank you for the great library. Does it currently support TensorFlow? If so, could you provide an example of how it can be used with TensorFlow? Thank you very much.

    opened by Kokkini 1
Releases (1.1.4)
  • 1.1.4(Jul 26, 2021)

    What's New

    • Added a PyTorch dataloader for the CheXpert dataset. A tutorial for training on CheXpert is available here.
    • Added support for training the AUC loss on CPU machines. Note that you should remove the lines with .cuda() from the code.
    • Fixed some bugs and improved training stability
  • 1.1.3(Jun 16, 2021)

  • 1.1.2(Jun 14, 2021)

    What's New

    1. Added the SOAP optimizer, contributed by @qiqi-helloworld and @yzhuoning, for optimizing AUPRC. Please check the tutorial here.
    2. Updated ResNet18 and ResNet34 with models pretrained on ImageNet1K
    3. Added a new strategy for the AUCM loss: imratio is calculated over a mini-batch if no initial value is given
    4. Fixed some bugs and improved training stability
  • V1.1.0(May 10, 2021)

    What's New:

    • Fixed some bugs and improved training stability
    • Changed the default binary labels expected by the loss function to 0 and 1
    • Added PyTorch dataloaders for CIFAR10, CIFAR100, CAT_vs_Dog, STL10
    • Enabled training DAM with PyTorch learning rate schedulers, e.g., ReduceLROnPlateau, CosineAnnealingLR