Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations

Overview

This repo contains the official code for the NeurIPS 2021 paper Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations by Jiayao Zhang, Hua Wang, and Weijie J. Su.

Discussions are welcome; please submit them via Discussions. You can also read the reviews on OpenReview.

@misc{zhang2021imitating,
      title={Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations}, 
      author={Jiayao Zhang and Hua Wang and Weijie J. Su},
      year={2021},
      eprint={2110.05960},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Reproducing Experiments

Dependencies

We use Python 3.8 and PyTorch for training neural nets. Please run pip install -r requirements.txt (preferably in a virtual environment) to install the dependencies.

Datasets

We use a dataset of geometric shapes (GeoMNIST) that we constructed, as well as CIFAR-10. GeoMNIST is lightweight and is generated on the fly when the simulation runs; CIFAR-10 is downloaded via torchvision.
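
For reference, CIFAR-10 can be fetched through torchvision as sketched below. The root directory and transforms shown here are placeholders for illustration; the download and preprocessing actually used by our pipeline are defined in src/datasets.py.

```python
import torchvision
import torchvision.transforms as T

# Minimal sketch: download CIFAR-10 via torchvision.
# Root directory and transforms are placeholders; see src/datasets.py
# for the settings used in the experiments.
cifar10_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
cifar10_test = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=T.ToTensor()
)
print(len(cifar10_train), len(cifar10_test))  # 50000 10000
```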

Code Structure

After installing the dependencies, one may navigate through the two Jupyter notebooks to run the experiments and produce the plots and figures. Below we outline the code structure.

.
├── LICENSE                         # code license
├── README.md                       # this file
├── LE-SDE Data Analysis.ipynb      # reproducing plots and figures
├── LE-SDE Experiments.ipynb        # reproducing experiments
├── src                             # source code
│   ├── data_analyzer.py            # processing experiment data
│   ├── datasets.py                 # generating and loading datasets
│   ├── models.py                   # definitions of neural net models
│   ├── plotter.py                  # generating plots and figures
│   └── utils.py                    # utilities, including training pipelines
└── exp_data                        # experiment data
    ├── *.csv                       # dataframes from neural net training
    └── *.npy                       # numpy.ndarray storing LE-ODE simulations

More information about .npy files can be found in the NumPy documentation.
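
As a quick sanity check after downloading or generating experiment data (see the Experiment Data section below), the following sketch lists and loads whatever CSV dataframes and .npy arrays are present under ./exp_data; no particular file names are assumed.

```python
import glob

import numpy as np
import pandas as pd

# Inspect whatever experiment artifacts are present under ./exp_data.
for path in sorted(glob.glob("exp_data/*.csv")):
    df = pd.read_csv(path)                       # dataframe logged during training
    print(path, df.shape, list(df.columns)[:5])

for path in sorted(glob.glob("exp_data/*.npy")):
    arr = np.load(path)                          # array storing an LE-ODE simulation
    print(path, arr.shape, arr.dtype)
```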

Reproducing Figures

Experiment Data

Although all simulations can be run on your own machine, doing so is quite time-consuming. Data from our experiments can be downloaded from the following anonymous Dropbox links:

After downloading those tarballs, extract them into ./exp_data (or change the EXP_DIR variable in the notebooks accordingly).
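
Assuming the downloaded archives are gzipped tarballs (the archive name below is a placeholder), extraction can also be done with Python's standard tarfile module, for example:

```python
import pathlib
import tarfile

EXP_DIR = pathlib.Path("exp_data")   # match the EXP_DIR variable in the notebooks
EXP_DIR.mkdir(exist_ok=True)

# Placeholder archive name; substitute the tarball(s) you downloaded.
with tarfile.open("le_sde_exp_data.tar.gz", "r:gz") as tar:
    tar.extractall(path=EXP_DIR)
```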

Plotter

Once the experiment data are ready, simply follow LE-SDE Data Analysis.ipynb to reproduce all figures.
