
Overview

0. Introduction

This repository contains the source code for our SIGCOMM'21 paper "Network Planning with Deep Reinforcement Learning".

Notes

The network topologies and the trained models used in the paper are not open-sourced. One can create synthetic topologies according to the problem formulation in the paper or modify the code for their own use case.
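Since the real topologies are not released, the sketch below shows one way to generate a small synthetic topology with networkx (installed in Step 4). The attribute names (capacity, cost_per_unit) and value ranges are illustrative assumptions only, not the repository's actual input format, so adapt them to the problem formulation in the paper.

# Illustrative sketch of a synthetic topology; attribute names and scales
# are assumptions, not the repository's actual input format.
import random
import networkx as nx

def make_synthetic_topology(num_nodes=10, prob=0.3, seed=0):
    """Random connected graph with per-link capacity/cost attributes."""
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(num_nodes, prob, seed=seed)
    # Keep the largest connected component so every node pair is reachable.
    largest = max(nx.connected_components(g), key=len)
    g = g.subgraph(largest).copy()
    for u, v in g.edges():
        g[u][v]["capacity"] = rng.choice([100, 200, 400])   # e.g., Gbps
        g[u][v]["cost_per_unit"] = rng.randint(1, 10)       # illustrative cost
    return g

if __name__ == "__main__":
    topo = make_synthetic_topology()
    print(topo.number_of_nodes(), "nodes,", topo.number_of_edges(), "links")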

1. Environment config

AWS instance configurations

  • AMI image: "Deep Learning AMI (Ubuntu 16.04) Version 43.0 - ami-0774e48892bd5f116"
  • for First-stage: g4dn.4xlarge; Threads 16 in gurobi.env
  • for others (ILP, ILP-heur, Second-stage): m5zn.12xlarge; Threads 8 in gurobi.env (a sample gurobi.env line is shown below)
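A gurobi.env file is a plain-text list of Gurobi parameters, one "Name value" pair per line. A minimal example matching the thread counts above (use 16 or 8 depending on the instance type) could be:

Threads 16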

Step 0: download the git repo

Step 1: install Linux dependencies

sudo apt-get update
sudo apt-get install build-essential libopenmpi-dev libboost-all-dev

Step 2: install Gurobi

cd <repo_path>/
./gurobi.sh
source ~/.bashrc

Step 3: set up and activate a conda environment with Python 3.7.7

If you use the AWS Deep Learning AMI, conda is preinstalled.

conda create --name <env_name> python=3.7.7
conda activate <env_name>

Step 4: install python dependencies in the conda env

cd <repo_path>/spinningup
pip install -e .
pip install networkx pulp pybind11 xlrd==1.2.0
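Optionally, a quick sanity check (run inside the conda env) that the Step 4 dependencies import correctly:

# Optional sanity check for the Python dependencies installed in Step 4.
import networkx, pulp, pybind11, xlrd
print("networkx", networkx.__version__)
print("xlrd", xlrd.__version__)  # expected: 1.2.0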

Step 5: compile C++ program with pybind11

cd <repo_path>/source/c_solver
./compile.sh
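If the build succeeds, a pybind11 extension (.so) should appear under source/c_solver. A small optional check, run from <repo_path>/source, could be the following; the c_solver directory name comes from the layout described in Section 2, and the extension's module name may differ, so check compile.sh:

# Optional check that compile.sh produced a pybind11 extension in c_solver/.
import glob
print("built extensions:", glob.glob("c_solver/*.so"))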

2. Content

  • source
    • c_solver: C++ implementation with Gurobi APIs for ILP solver and network plan evaluator
    • planning: ILP and ILP-heur implementation
    • results: store the provided trained models and solutions, and the training log
    • rl: the implementations of the actor-critic model, the RL environment, and the RL solver
    • simulate: python classes of flow, spof, and traffic matrix
    • topology: python classes of network topology (both optical layer and IP layer)
    • test.py: the main script used to reproduce results
  • spinningup: the RL training code installed in Step 4
  • gurobi.sh
    • used to install Gurobi solver

3. Reproduce results (for SIGCOMM'21 artifact evaluation)

Notes

  • Some data points are time-consuming to get (i.e., First-stage for A-0, A-0.25, A-0.5, A-0.75 in Figure 8 and B, C, D, E in Figure 9). We provide pretrained models in <repo_path>/source/results/trained/<topo_name>/, which are loaded by default.
  • We recommend distributing different data points and different experiments across multiple AWS instances so they run simultaneously (a small batch-run sketch is given after these notes).
  • The default epoch_num for Figures 10, 11, and 12 is set to 1024 to guarantee convergence. The training process can be terminated manually once convergence is observed.
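To queue several data points on one instance (or to prepare per-instance run lists before distributing them), a simple sketch using Python's subprocess module could look like this; the argument combinations are taken from the Figure 8 commands listed below.

# Illustrative batch-run sketch; in practice, spread these runs over
# separate AWS instances as recommended in the notes above.
import subprocess

runs = [
    ["python", "test.py", "single_dp_fig8", "ILP", "0.0"],
    ["python", "test.py", "single_dp_fig8", "ILP", "0.25"],
    ["python", "test.py", "single_dp_fig8", "ILP", "0.5"],
]
for cmd in runs:
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)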

How to reproduce

  • cd <repo_path>/source
  • Figure 7: python test.py fig_7 <epoch_num>; epoch_num can be set smaller than 10 (e.g., 2) to get results faster.
  • Figure 8: python test.py single_dp_fig8 <algo> <adjust_factor> produces one data point at a time (the default adjust_factor is 1).
    • For example, python test.py single_dp_fig8 ILP 0.0 runs the ILP algorithm for A-0.
    • Pretrained models will be loaded by default if provided in source/results/trained/. To train from scratch (NOT RECOMMENDED), run python test.py single_dp_fig8 NeuroPlan <adjust_factor> False.
  • Figure 9&13: python test.py single_dp_fig9 <topo_name> <algo> produces one data point at a time.
    • For example, python test.py single_dp_fig9 E NeuroPlan runs NeuroPlan (First-stage) for topology E with the pretrained model. To train from scratch (NOT RECOMMENDED), run python test.py single_dp_fig9 E NeuroPlan False.
    • python test.py second_stage <topo_name> <sol_path> <relax_factor> loads the first-stage solution from <sol_path> and runs the second stage with relax_factor=<relax_factor> on topology <topo_name>. For example, python test.py second_stage D "results/<log_dir>/opt_topo/***.txt" 1.5
    • We also provide our First-stage results in results/trained/<topo_name>/<topo_name>.txt, which can be used to run the second stage directly. For example, python test.py second_stage C "results/trained/C/C.txt" 1.5
  • Figure 10: python test.py fig_10 <adjust_factor> <num_gnn_layer>.
    • adjust_factor={0.0, 0.5, 1.0}, num_gnn_layer={0, 2, 4}
    • For example, python test.py fig_10 0.5 2 runs NeuroPlan with 2-layer GNNs for topology A-0.5
  • Figure 11: python test.py fig_11 <adjust_factor> <mlp_hidden_size>.
    • adjust_factor={0.0, 0.5, 1.0}, mlp_hidden_size={64, 256, 512}
    • For example, python test.py fig_11 0.0 512 runs NeuroPlan with hidden_size=512 for topology A-0
  • Figure 12: python test.py fig_12 <adjust_factor> <max_unit_per_step>.
    • adjust_factor={0.0, 0.5, 1.0}, max_unit_per_step={1, 4, 16}
    • For example, python test.py fig_12 1.0 4 runs NeuroPlan with max_unit_per_step=4 for topology A-1.

4. Contact

For any questions, please contact hzhu at jhu dot edu.
