AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning

Overview

AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. AutoPentest-DRL can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit. The framework is intended for educational purposes, so that users can study penetration testing attack mechanisms. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (CROND) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (JAIST) in Ishikawa, Japan.

An overview of AutoPentest-DRL is shown below. The framework receives user input regarding the logical target network, including vulnerability information; alternatively, the framework can use Nmap for network scanning to find actual vulnerabilities in a real target network with known topology. The MulVAL attack-graph generator is then used to determine potential attack trees, which are fed in a simplified form into the DQN Decision Engine. The attack path that is produced as output can be used to study the attack mechanisms on a large number of logical networks. Alternatively, the framework can use the attack path with penetration testing tools, such as Metasploit, making it possible for the user to study how the attack can be carried out on a real target network.

[Figure: Overview of AutoPentest-DRL]
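
To make the data flow concrete, below is a purely illustrative Python sketch of the stages described above; every function name and return value is a hypothetical placeholder and does not correspond to the framework's actual modules or API.

    # Illustrative only: hypothetical stage functions mirroring the description
    # above, with dummy data instead of real Nmap/MulVAL/DQN components.

    def gather_input(target, real_network=False):
        # Stage 1: user-supplied logical topology, or an Nmap scan of a real network.
        return {'hosts': [target], 'vulnerabilities': ['example-vuln']}

    def generate_attack_graph(network_facts):
        # Stage 2: MulVAL turns the network facts into potential attack trees.
        return [('attacker', 'webserver'), ('webserver', 'database')]

    def simplify(attack_graph):
        # Stage 3: the attack graph is simplified before being fed to the DQN Decision Engine.
        return attack_graph

    def dqn_decide(simplified_graph):
        # Stage 4: the DQN selects the most appropriate attack path.
        return list(simplified_graph)

    path = dqn_decide(simplify(generate_attack_graph(gather_input('10.0.0.0/24'))))
    print('Proposed attack path:', path)  # in real attack mode this path would drive Metasploit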

Next we provide brief information on how to set up and use AutoPentest-DRL. For details about its operation, please refer to the User Guide that we also make available.

Prerequisites

Several external tools are required in order to use AutoPentest-DRL; for the basic functionality (DQN training and attacks on logical networks), you'll need:

  • MulVAL: Attack-graph generator used by AutoPentest-DRL to produce possible attack paths for a given network. See the MulVAL page for installation instructions and dependencies. MulVAL should be installed in the directory repos/mulval in the AutoPentest-DRL folder. You also need to configure the /etc/profile file as discussed here. On some systems the tool epstopdf may also need to be installed, for instance by using the command below:
    sudo apt install texlive-font-utils
    

If you plan to use AutoPentest-DRL with real networks, you'll also need:

  • Nmap: Network scanner used by AutoPentest-DRL to determine vulnerabilities in a given real network. The command needed to install nmap on Ubuntu is given below:
    sudo apt install nmap
    
  • Metasploit: Penetration testing framework used by AutoPentest-DRL to actually conduct the attack proposed by the DQN engine on the real target network. To install Metasploit, you can use the installers made available on the Metasploit website. In addition, we use pymetasploit3 as an RPC API to communicate with Metasploit; this tool needs to be installed in the directory Penetration_tools/pymetasploit3 by following its author's instructions. A minimal illustration of such an RPC connection is shown after this list.
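
As a concrete illustration of the Metasploit integration, the following minimal Python sketch shows how pymetasploit3 can connect to Metasploit's RPC service. It assumes the RPC daemon has already been started (for example with msfrpcd -P yourpassword); the password, server, and port values are placeholders, not settings taken from AutoPentest-DRL itself.

    from pymetasploit3.msfrpc import MsfRpcClient

    # Placeholder connection settings; adjust them to match how msfrpcd was started.
    client = MsfRpcClient('yourpassword', server='127.0.0.1', port=55553, ssl=True)

    # Simple sanity check: count the exploit modules exposed by this Metasploit instance.
    print(len(client.modules.exploits), 'exploit modules available')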

Setup

AutoPentest-DRL has been developed mainly on the Ubuntu 18.04 LTS operating system; other OSes may work, but have not been tested. In order to set up AutoPentest-DRL, use the releases page to download the latest version, and extract the source code archive into a directory of your choice (for instance, your home directory) on the host on which you intend to use it.

AutoPentest-DRL is implemented in Python, and it requires several packages to run. The file requirements.txt included with the distribution can be used to install the necessary packages via the following command that should be run from the AutoPentest-DRL/ directory:

$ sudo -H pip install -r requirements.txt

Quick Start

AutoPentest-DRL includes a trained DQN model, so you can use it out-of-the-box on a sample logical network topology by running the following command in a terminal from the AutoPentest-DRL/ directory:

$ python3 ./AutoPentest-DRL.py logical_attack

In this logical attack mode, no actual attack is conducted; AutoPentest-DRL only determines the optimal attack path for the logical network topology described in the file MulVal_P/logical_attack_v1.P. By comparing the output path with the attack graph visualization generated by MulVAL in the file mulval_results/AttackGraph.pdf, you can study the attack steps in detail.

For more information about the operation modes of AutoPentest-DRL, including the real attack mode and the training mode, see our User Guide.

References

For a research background regarding AutoPentest-DRL, please refer to the following references:

  • Z. Hu, R. Beuran, Y. Tan, "Automated Penetration Testing Using Deep Reinforcement Learning", IEEE European Symposium on Security and Privacy Workshops (EuroS&PW 2020), Workshop on Cyber Range Applications and Technologies (CACOE'20), Genova, Italy, September 7, 2020, pp. 2-10.
  • Z. Hu, "Automated Penetration Testing Using Deep Reinforcement Learning", Master's thesis, March 2021. https://hdl.handle.net/10119/17095

For a list of contributors to this project, see the file CONTRIBUTORS included in the distribution.

Comments
  • mulval topology template

    Hello, I just want to ask: if I change the configuration of the topology generator, do I also have to change the content of the topo_gen_template.P file, or is it a generic template? Thanks.

    opened by shoaib5261 7
  • Evaluating the model

    Thank you for your support, but I have one more question. In the paper you wrote that this model has an accuracy of 0.86. I don't quite understand the evaluation method, the data used for evaluation, and whether that data is in this repo or not.

    Also, can you explain why the model has to be trained multiple times and why the reward increases gradually? I think the simplified matrix holds all the possible paths, so the model just needs to loop through all paths and print out the desired one. Sorry for my weak understanding.

    Looking forward to your reply. Thank you!

    opened by QuynhNguyen269 5
  • FileNotFound error

    Hi, I'm trying to run the code but it gives me multiple FileNotFound errors. Please help. Thank you!

    The output is:

    ################################################################################
    AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning
    ################################################################################
    AutoPentest-DRL: Operation mode: Attack on logical network
    AutoPentest-DRL: Target topology: MulVAL_P/logical_topology_1.P

    AutoPentest-DRL: Compute attack path for logical network...
    Generate attack graph using MulVAL...
    sh: 1: ../repos/mulval/utils/graph_gen.sh: not found
    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/./confirm_path.py", line 9, in <module>
        MAP = generateMapClass.sendMap
      File "./learn/generateMap.py", line 108, in sendMap
        self.x = self.createMatrix()
      File "./learn/generateMap.py", line 20, in createMatrix
        self.csvfile = open('../mulval_result/VERTICES.CSV', 'r')
    FileNotFoundError: [Errno 2] No such file or directory: '../mulval_result/VERTICES.CSV'
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 235, in make
        return registry.make(id, **kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 129, in make
        env = spec.make(**kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 89, in make
        cls = load(self.entry_point)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 27, in load
        mod = importlib.import_module(mod_name)
      File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "", line 1030, in _gcd_import
      File "", line 1007, in _find_and_load
      File "", line 986, in _find_and_load_unlocked
      File "", line 680, in _load_unlocked
      File "", line 790, in exec_module
      File "", line 228, in _call_with_frames_removed
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 12, in <module>
        class dqnEnvironment(gym.Env):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 14, in dqnEnvironment
        MAP = np.loadtxt('../processdata/newmap.txt')
      File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 961, in loadtxt
        fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 195, in open
        return ds.open(path, mode, encoding=encoding, newline=newline)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 535, in open
        raise IOError("%s not found." % path)
    OSError: ../processdata/newmap.txt not found.

    opened by QuynhNguyen269 3
  • AssertionError: The environment must specify an observation space

    Hi everyone, please help. Thank you!

    The output is:

    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.7/dist-packages/gym/envs/registration.py", line 685, in make
        env = PassiveEnvChecker(env)
      File "/usr/local/lib/python3.7/dist-packages/gym/wrappers/env_checker.py", line 26, in __init__
        ), "The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/"
    AssertionError: The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/

    opened by VisaCai 2
  • about article

    In the article "Automated Penetration Testing Using Deep Reinforcement Learning", we found an Accuracy metric and I am confused: is the accuracy measured between the best DQN penetration path and the true path, or something else?

    opened by lixiaohaao 1
  • target drone

    Sorry to bother you frequently. Regarding the construction of a multi-level network, like the network in your experiment, can you elaborate on how to build it?

    Looking forward to your reply. LIxiao

    opened by lixiaohaao 1
Releases (1.0)
  • 1.0 (Jun 1, 2021)

    First release of AutoPentest-DRL, an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. The framework can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit.

    Source code (tar.gz)
    Source code (zip)
Owner
Cyber Range Organization and Design Chair
Cyber Range Organization and Design (CROND) NEC-endowed chair at JAIST conducts R&D on cybersecurity education and training