Born-Infeld (BI) for AI: Energy-Conserving Descent (ECD) for Optimization

Overview

This repository contains the code for the BBI optimizer, introduced in the paper Born-Infeld (BI) for AI: Energy-Conserving Descent (ECD) for Optimization (arXiv:2201.11137). It is implemented in PyTorch.
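
The sketch below illustrates how the optimizer might be dropped into an ordinary PyTorch training loop, assuming it follows the standard torch.optim interface. The import path, the constructor arguments, and the use of a closure in step() (as for torch.optim.LBFGS, since energy-conserving updates generally need the loss value itself) are assumptions made for illustration; the actual interface is defined in inflation.py.

    import torch
    import torch.nn as nn

    # BBI is implemented in inflation.py; this assumes the repository root is importable.
    from inflation import BBI

    # Toy model and data, for illustration only.
    model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
    loss_fn = nn.MSELoss()
    x, y = torch.randn(256, 10), torch.randn(256, 1)

    # Placeholder hyperparameters: the real constructor arguments and their
    # defaults are defined in inflation.py.
    optimizer = BBI(model.parameters(), lr=1e-3)

    for iteration in range(1000):
        def closure():
            # Assumption: step() accepts a closure returning the loss,
            # as for torch.optim.LBFGS.
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            return loss

        optimizer.step(closure)

If the optimizer instead follows the plain step() convention, the closure can be replaced by an explicit zero_grad()/backward() pass before calling optimizer.step() with no arguments.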

The repository also includes the code needed to reproduce all the experiments presented in the paper. In particular:

  • The BBI optimizer is implemented in the file inflation.py.

  • The Jupyter notebooks with the synthetic experiments are in the folder synthetic. All the notebooks already include their output, and text files with the results are also included in the folder. In particular:

    • The notebook ackley.ipynb can be used to reproduce the results in Sec. 4.1.
    • The notebook zakharov.ipynb can be used to reproduce the results in Sec. 4.2.
    • The notebook multi_basin.ipynb can be used to reproduce the results in Sec. 4.3.
  • The ML benchmarks described in Sec. 4.5 can be found in the folders CIFAR and MNIST. The notebooks already include some results that can be inspected, but not all of the statistics that build up the results in Table 2. In particular:

    • CIFAR: The notebook CIFAR-notebook.ipynb uses hyperopt to estimate the best hyperparameters for each optimizer and then performs a long run with the best estimated hyperparameters (a minimal hyperopt sketch is given after this list). The results can be analyzed with the notebook analysis-cifar.ipynb, which can also be used to generate more runs with the best hyperparameters to gather more statistics. The subfolder results already includes some runs that can be inspected.

    • MNIST: The notebooks mnist_scan_BBI.ipynb and mnist_scan_SGD.ipynb perform a grid scan using BBI and SGD, respectively, and gather some statistics. All the results are contained within the notebooks themselves.

  • The PDE experiments can be reproduced by running the script script-PDE.sh as

    bash script-PDE.sh
    

    This will solve the PDE outlined in Sec. 4.4 and App. C multiple times with the same initialization. The hyperparameters are also kept fixed and can be obtained from the script itself. In particular:

    • feature 1 means that an L2 regularization term is added to the loss.
    • seed specifies the seed, which fixes the initialization of the network. The difference between runs is then due only to the random bounces, which are not affected by this choice of seed.

    The folder results already includes some runs. The runs performed in this way are not noisy, i.e. the set of points sampled from the domain is kept fixed. To randomly change the points every "epoch" (1000 iterations), edit the file experiments/PDE_PoissonD.py by changing line 134 to self.update_points = True.
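
For orientation, the snippet below sketches the kind of hyperopt search performed by the CIFAR notebook mentioned above. The search space, the dummy objective, and the number of evaluations are illustrative assumptions only; the actual ones are defined in CIFAR-notebook.ipynb.

    import numpy as np
    from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

    # Illustrative search space: a single learning-rate hyperparameter.
    space = {"lr": hp.loguniform("lr", np.log(1e-5), np.log(1e-1))}

    def objective(params):
        # In the notebook this would train the network for a few epochs with the
        # proposed hyperparameters and return the validation loss; here we use a
        # dummy objective minimized near lr = 1e-3.
        val_loss = (np.log10(params["lr"]) + 3.0) ** 2
        return {"loss": val_loss, "status": STATUS_OK}

    trials = Trials()
    best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=trials)
    print("best hyperparameters:", best)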

The code has been tested with Python 3.9, PyTorch 1.10, and hyperopt 0.2.5. We ran the synthetic experiments and MNIST on a six-core i7-9850H CPU with 16 GB of RAM, while we ran the CIFAR and PDE experiments on a pair of GPUs. We tested both on a pair of NVIDIA GeForce RTX 2080 Ti GPUs and on a pair of NVIDIA Tesla V100-SXM2-16GB GPUs, coupled with 32 GB of RAM and AMD EPYC 7502P CPUs.

The ResNet-18 code (in experiments/models) and the utils.py helper functions are adapted from https://github.com/kuangliu/pytorch-cifar (MIT License).

Owner: G. Bruno De Luca