Code for NeurIPS 2021 submission "A Surrogate Objective Framework for Prediction+Programming with Soft Constraints"

Overview

This repository is the code for NeurIPS 2021 submission "A Surrogate Objective Framework for Prediction+Programming with Soft Constraints".

Edit 2021/8/30: a KKT-based (decision-focused) baseline has been added to the first experiment.

Requirements

pytorch>=1.7.0

scipy

gurobipy (requires Gurobi >= 9.1 and a license; a free academic license is available at https://www.gurobi.com/downloads/end-user-license-agreement-academic/. Download and install Gurobi first.)

Quandl

h5py

bs4

tqdm

sklearn

pandas

lxml

qpth

cvxpy

cvxpylayers
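
For convenience, the Python dependencies can be installed with pip. Note that some PyPI package names differ from the import names listed above: bs4 is provided by beautifulsoup4, sklearn by scikit-learn, and pytorch by torch.

    pip install torch scipy gurobipy Quandl h5py beautifulsoup4 tqdm scikit-learn pandas lxml qpth cvxpy cvxpylayers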

Running Experiments

Once the requirements above are installed and this repository is cloned to your local machine, you should be able to run all experiments.

Synthetic Linear Programming

The dataset for this problem is generated at runtime. To run a single problem instance, type the following command:

python run_main_synth.py --method=2 --dim_context=40 --dim_hard=40 --dim_soft=20 --seed=2006 --dim_features=80 --loss=l1 --K=0.2

The four methods from the paper (L1, L2, SPO+, ours) and the added KKT-based baseline are selected as follows:

--method=0 --loss=l1 # L1
--method=0 --loss=l2 # L2
--method=1 --loss=l1 # SPO+
--method=2 --loss=l1 # ours
--method=3 --loss=l1 # decision-focused (KKT-based)

The other parameters are described in run_script.py and run_main_synth.py. To collect results for a single method over multiple runs, set the parameters listed above accordingly and run run_script.py (a minimal driver of this kind is sketched below). The output, containing prediction error and regret, is written to the result folder; files with the suffix "...test.txt" contain the evaluation results, and dataprocess.py serves as a reference for interpreting them. To change the batch size or training set size, alter the default parameters in run_main_synth.py.
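
If you prefer to script the repeated runs yourself, the following is a minimal sketch of such a driver. The seed values here are arbitrary, and the actual run_script.py may sweep different parameters:

    import subprocess

    # Hypothetical driver: repeat our method (method=2) on the synthetic LP
    # benchmark over several seeds; run_script.py plays a similar role.
    for seed in [2006, 2007, 2008, 2009, 2010]:
        subprocess.run([
            "python", "run_main_synth.py",
            "--method=2", "--loss=l1", "--K=0.2",
            "--dim_context=40", "--dim_hard=40", "--dim_soft=20",
            "--dim_features=80", f"--seed={seed}",
        ], check=True)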

Portfolio Optimization

The dataset for this problem is automatically downloaded the first time you run the code, as in Wang et al.'s code [1]. It consists of daily price data for the S&P 500 from 2004 to 2017, fetched via the Quandl API. To run a single problem instance, type the following command:

python main.py --method=3 --n=50 --seed=471298479

The four methods (L1, DF, L2, ours) are labeled as methods 0, 1, 2, and 3 respectively. To collect multiple runs for a single method, run run_script.py.

The results are written to the res/K100 folder.
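
For orientation, the decision stage here is a risk-adjusted portfolio selection. The following cvxpy sketch shows a problem of that general shape; the exact objective, constraints, and risk weight used in main.py may differ, and mu, Sigma, and risk_weight below are placeholders:

    import cvxpy as cp
    import numpy as np

    n = 50                         # number of assets, matching --n=50 above
    rng = np.random.default_rng(0)
    mu = rng.normal(0.0, 0.01, n)  # placeholder for predicted returns
    Sigma = 0.01 * np.eye(n)       # placeholder return covariance (PSD)
    risk_weight = 1.0              # assumed risk-aversion coefficient

    x = cp.Variable(n)
    objective = cp.Maximize(mu @ x - risk_weight * cp.quad_form(x, Sigma))
    constraints = [cp.sum(x) == 1, x >= 0]  # fully invested, long-only
    cp.Problem(objective, constraints).solve()
    print(x.value.round(4))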

Resource Provisioning

The dataset for this problem is included in this repository as eight CSV files, one per region. It is the ERCOT dataset taken from (...to be filled...), and it is processed by resource_provisioning/data_energy/data_loader.py at runtime. The first run generates several large .npy files as cached features, which speed up preprocessing on subsequent runs. This experiment requires a large amount of memory and is best run on a server. To run a single problem instance, type the following command:

python run_main_newnet.py --method=1 --seed=16900000 --loss=l1

The four methods (L1, L2, weighted L1, ours) are selected as follows:

--method=0 --loss=l1 # L1
--method=0 --loss=l2 # L2
--method=0 --loss=l3 # weighted L1
--method=1 --loss=l1 # ours

To run with a different alpha1/alpha2 ratio, modify lines 157-158 in synthesize.py

 alpha1 = torch.ones(dim_context, 1) * 50
 alpha2 = torch.ones(dim_context, 1) * 0.5

to the desired ratio. In addition, change line 174 in main_newnet.py

netname = "50to0.5"

to "5to0.5"/"1to1"/"0.5to5"/"0.5to50", and line 199 in main_newnet.py

self.alpha1, self.alpha2 = 0.5, 50

to (0.5, 5)/(1, 1)/(5, 0.5)/(50, 0.5) respectively.
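
For intuition, if alpha1 and alpha2 weight the two sides of the provisioning error (under- and over-provisioning), which is an assumption based on the parameter names rather than something stated here, the weighted L1 loss would look roughly like this:

    import torch
    import torch.nn.functional as F

    def weighted_l1(pred, target, alpha1=50.0, alpha2=0.5):
        """Assumed asymmetric L1: alpha1 penalizes under-provisioning
        (pred < target), alpha2 penalizes over-provisioning (pred > target)."""
        under = F.relu(target - pred)  # shortfall, clipped at zero
        over = F.relu(pred - target)   # excess, clipped at zero
        return (alpha1 * under + alpha2 * over).mean()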

Run run_script.py to collect multiple runs. The results are written to the result/2013to18_<netname>newnet folder, where <netname> is the string set above. The output data is interpreted the same way as in the synthetic linear programming experiment.

[1] Kai Wang, Bryan Wilder, Andrew Perrault, and Milind Tambe. Automatically Learning Compact Quality-aware Surrogates for Optimization Problems. NeurIPS 2020. (https://arxiv.org/abs/2006.10815)

Empirical Evaluation of Lambda_max in Theorem 6

Run test.py directly to get the results (note that a full run takes a long time, especially for the beta distribution option). The results for the uniform, Gaussian, and beta distributions are written to test1.txt, test2.txt, and test3.txt respectively.
