Code for our SIGCOMM'21 paper "Network Planning with Deep Reinforcement Learning".

Overview

0. Introduction

This repository contains the source code for our SIGCOMM'21 paper "Network Planning with Deep Reinforcement Learning".

Notes

The network topologies and the trained models used in the paper are not open-sourced. One can create synthetic topologies according to the problem formulation in the paper or modify the code for their own use case.
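
For instance, a small synthetic topology can be sketched with networkx (installed in Step 4). All names and attributes below are illustrative assumptions, not the repo's actual schema; they need to be mapped onto the classes in source/topology:

# Minimal synthetic two-layer topology (attribute names are illustrative
# assumptions; adapt them to the classes in source/topology).
import networkx as nx

# IP layer: routers connected by candidate IP links.
ip_layer = nx.Graph()
ip_layer.add_edge("r1", "r2", capacity=100, cost=5)  # e.g., Gbps and per-unit cost
ip_layer.add_edge("r2", "r3", capacity=200, cost=3)

# Optical layer: fiber spans that the IP links are routed over.
optical_layer = nx.Graph()
optical_layer.add_edge("site1", "site2", distance_km=80)

print(list(ip_layer.edges(data=True)))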

1. Environment config

AWS instance configurations

  • AMI image: "Deep Learning AMI (Ubuntu 16.04) Version 43.0 - ami-0774e48892bd5f116"
  • For First-stage: g4dn.4xlarge; set Threads 16 in gurobi.env (see the example below)
  • For others (ILP, ILP-heur, Second-stage): m5zn.12xlarge; set Threads 8 in gurobi.env
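
Gurobi reads gurobi.env from the working directory, one parameter per line. A minimal sketch for the First-stage instances (swap in Threads 8 for the others):

# gurobi.env: lines starting with '#' are comments
Threads 16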

Step 0: download the git repo

Step 1: install Linux dependencies

sudo apt-get update
sudo apt-get install build-essential libopenmpi-dev libboost-all-dev

Step 2: install Gurobi

cd <repo_dir>/
./gurobi.sh
source ~/.bashrc
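
Note: gurobi.sh presumably appends the Gurobi environment variables (e.g., GUROBI_HOME and PATH entries) to ~/.bashrc, which is why the shell configuration is re-sourced afterwards. A valid Gurobi license is required to run the solver.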

   

Step 3: set up and activate a conda environment with Python 3.7.7

If you use the AWS Deep Learning AMI, conda is preinstalled.

conda create --name <env_name> python=3.7.7
conda activate <env_name>

Step 4: install python dependencies in the conda env

cd <repo_dir>/spinningup
pip install -e .
pip install networkx pulp pybind11 xlrd==1.2.0

Step 5: compile C++ program with pybind11

cd <repo_dir>/source/c_solver
./compile.sh
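
To sanity-check the build, the extension can be imported from Python. The module name below is an assumption; check compile.sh for the actual target name:

# Hypothetical smoke test; the real module name is defined in compile.sh.
import importlib
solver = importlib.import_module("c_solver")  # module name is an assumption
print("pybind11 extension loaded:", solver.__name__)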

   

2. Content

  • source
    • c_solver: C++ implementation with Gurobi APIs for ILP solver and network plan evaluator
    • planning: ILP and ILP-heur implementation
    • results: stores the provided trained models, solutions, and training logs
    • rl: implementations of the actor-critic model, the RL environment, and the RL solver
    • simulate: python classes of flow, spof, and traffic matrix
    • topology: python classes of network topology (both optical layer and IP layer)
    • test.py: the main script used to reproduce results
  • spinningup: RL training framework adapted from OpenAI Spinning Up (installed in Step 4)
  • gurobi.sh: used to install the Gurobi solver

3. Reproduce results (for SIGCOMM'21 artifact evaluation)

Notes

  • Some data points are time-consuming to obtain (i.e., First-stage for A-0, A-0.25, A-0.5, A-0.75 in Figure 8 and B, C, D, E in Figure 9). We provide pretrained models in source/results/trained/<topo_name>/, which are loaded by default.
  • We recommend distributing different data points and different experiments across multiple AWS instances to run simultaneously.
  • The default epoch_num for Figures 10, 11, and 12 is 1024 to guarantee convergence; training can be terminated manually once convergence is observed.

How to reproduce

  • cd <repo_dir>/source
  • Figure 7: python test.py fig_7 <epoch_num>; epoch_num can be set smaller than 10 (e.g., 2) to get results faster.
  • Figure 8: python test.py single_dp_fig8 <algo> <adjust_factor> produces one data point at a time (the default adjust_factor is 1).
    • For example, python test.py single_dp_fig8 ILP 0.0 runs the ILP algorithm for A-0.
    • Pretrained models are loaded by default if provided in source/results/trained/. To train from scratch, which is NOT RECOMMENDED, run python test.py single_dp_fig8 <algo> <adjust_factor> False.
  • Figures 9 & 13: python test.py single_dp_fig9 <topo_name> <algo> produces one data point at a time.
    • For example, python test.py single_dp_fig9 E NeuroPlan runs NeuroPlan (First-stage) for topology E with the pretrained model. To train from scratch, which is NOT RECOMMENDED, run python test.py single_dp_fig9 E NeuroPlan False.
    • python test.py second_stage <topo_name> <solution_file> <relax_factor> loads the first-stage solution from <solution_file> and runs the second stage with the given relax_factor on topology <topo_name>. For example, python test.py second_stage D "results/<log_dir>/opt_topo/***.txt" 1.5.
    • We also provide our First-stage results in results/trained/<topo_name>/<topo_name>.txt, which can be used to run the second stage directly. For example, python test.py second_stage C "results/trained/C/C.txt" 1.5.
  • Figure 10: python test.py fig_10 <adjust_factor> <num_gnn_layer>.
    • adjust_factor={0.0, 0.5, 1.0}, num_gnn_layer={0, 2, 4}
    • For example, python test.py fig_10 0.5 2 runs NeuroPlan with 2-layer GNNs for topology A-0.5.
  • Figure 11: python test.py fig_11 <adjust_factor> <mlp_hidden_size>.
    • adjust_factor={0.0, 0.5, 1.0}, mlp_hidden_size={64, 256, 512}
    • For example, python test.py fig_11 0.0 512 runs NeuroPlan with hidden_size=512 for topology A-0.
  • Figure 12: python test.py fig_12 <adjust_factor> <max_unit_per_step>.
    • adjust_factor={0.0, 0.5, 1.0}, max_unit_per_step={1, 4, 16}
    • For example, python test.py fig_12 1.0 4 runs NeuroPlan with max_unit_per_step=4 for topology A-1.

4. Contact

For any questions, please contact hzhu at jhu dot edu.

Owner

NetX Group, Computer Systems Research Group at PKU