Reading Group @mila-iqia on Computational Optimal Transport for Machine Learning Applications

Overview

Computational Optimal Transport for Machine Learning Reading Group

Over the last few years, optimal transport (OT) has quickly become a central topic in machine learning. OT is now routinely used in many areas of ML, ranging from the theoretical use of OT flows for controlling learning algorithms to the inference of high-dimensional cell trajectories in genomics. This reading group aims to keep participants up to date with the latest research in this area.

Logistics

For the Winter 2022 term, meetings will be held weekly on Mondays from 14:00 to 15:00 EST via Zoom (for now).

  • Zoom Link.

  • The password will be provided on Slack before every meeting.

  • Meetings will be recorded by default. Recordings are available to Mila members at this link. Presenters can email [email protected] to opt out of being recorded.

  • Reading Group participants are expected to read each paper beforehand.

Schedule

Date     | Topic                                                                    | Presenters             | Slides
01/17/22 | Introduction to Optimal Transport for Machine Learning                  | Alex Tong, Ali Harakeh | Part 1, Part 2
01/24/22 | Learning with minibatch Wasserstein: asymptotic and gradient properties | Kilian Fatras          | --
01/31/22 | --                                                                       | --                     | --
02/07/22 | --                                                                       | --                     | --
02/14/22 | --                                                                       | --                     | --
02/21/22 | --                                                                       | --                     | --
02/28/22 | --                                                                       | --                     | --

Paper Presentation Instructions

Volunteer to Present

  • All participants are encouraged to volunteer to present at the reading group.

  • Volunteers can choose a paper from this list of suggested papers, or any other paper that is related to optimal transport in machine learning.

  • To volunteer, please send the paper title, link, and your preferred presentation date to the Slack channel #volunteer-to-present or email [email protected].

Presentation Instructions

  • Presentations should be limited to 40 minutes. During the presentation, the organizers will act as moderators and relay questions as they come up in the Zoom chat. The aim is to finish within 35-40 minutes so that 15 minutes remain for general discussion.

  • Presentations should roughly adhere to the following outline:

    1. 5-10 minutes: Problem setup and positioning within the literature.
    2. 10-15 minutes: Contributions/Novel technical points.
    3. 10-15 minutes: Weak points, open questions, and future directions.

Useful References

This is a list of useful references, including code, textbooks, and presentations.

Code

  • POT: Python Optimal Transport: This open-source Python library provides several solvers for optimization problems related to Optimal Transport in signal processing, image processing, and machine learning. It has the most efficient exact OT solvers (see the first sketch below).
  • GeomLoss: The GeomLoss library provides efficient GPU implementations of kernel norms, Hausdorff divergences, and debiased Sinkhorn divergences. It has the most scalable dual OT solvers, embedded within the Sinkhorn divergence computation (see the second sketch below).
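
To make these two entry points concrete, here is a minimal sketch of how each library is typically called. It is illustrative only and not part of the reading-group materials; the point clouds, weights, and parameter values (e.g. blur=0.05) are assumptions chosen for the example.

# Exact OT with POT: solve the discrete Kantorovich problem between two point clouds.
import numpy as np
import ot  # pip install POT

rng = np.random.default_rng(0)
xs = rng.normal(size=(50, 2))          # source samples
xt = rng.normal(size=(60, 2)) + 1.0    # target samples
a = np.full(50, 1 / 50)                # uniform source weights
b = np.full(60, 1 / 60)                # uniform target weights
M = ot.dist(xs, xt)                    # pairwise squared-Euclidean cost matrix
P = ot.emd(a, b, M)                    # exact transport plan (network simplex)
cost = ot.emd2(a, b, M)                # exact transport cost

# Sinkhorn divergence with GeomLoss: a debiased, differentiable OT loss on (GPU-friendly) tensors.
import torch
from geomloss import SamplesLoss  # pip install geomloss

x = torch.randn(50, 2, requires_grad=True)
y = torch.randn(60, 2) + 1.0
loss_fn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)  # blur sets the entropic regularization scale
loss = loss_fn(x, y)                   # debiased Sinkhorn divergence between the two samples
loss.backward()                        # gradients flow back to the sample positions x

Either snippet can be dropped into a training loop; POT is the natural choice when the exact plan itself is needed, while GeomLoss is typically used as a scalable loss function inside gradient-based learning.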

Textbooks

@article{peyre2019computational,
  title={Computational optimal transport: With applications to data science},
  author={Peyr{\'e}, Gabriel and Cuturi, Marco and others},
  journal={Foundations and Trends{\textregistered} in Machine Learning},
  volume={11},
  number={5-6},
  pages={355--607},
  year={2019},
  publisher={Now Publishers, Inc.}}
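
For orientation, the central objects in this monograph are the discrete Kantorovich problem and its entropic regularization (solved by Sinkhorn iterations). A standard statement, in notation that may differ slightly from the book's, is

\min_{P \in U(a,b)} \langle P, C \rangle = \min_{P \in U(a,b)} \sum_{i,j} P_{ij} C_{ij},
\qquad U(a,b) = \{ P \in \mathbb{R}_{+}^{n \times m} : P \mathbf{1}_m = a,\ P^{\top} \mathbf{1}_n = b \},

and, with regularization strength \varepsilon > 0,

\min_{P \in U(a,b)} \langle P, C \rangle - \varepsilon H(P),
\qquad H(P) = -\sum_{i,j} P_{ij} (\log P_{ij} - 1).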

Workshops and Presentations

Organizers

Modeled after the Causal Representation Learning Reading Group.

Owner: Ali Harakeh, Postdoctoral Research Fellow @mila-iqia