tlspuffin

Logo with Penguin

TLS Protocol Under FuzzINg
A symbolic-model-guided fuzzer for TLS

Disclaimer: The term "symbolic-model-guided" should not be confused with symbolic execution or concolic fuzzing.

Description

Fuzzing implementations of cryptographic protocols is challenging. In contrast to traditional fuzzing of file formats, cryptographic protocols require a specific flow of cryptographic and mutually dependent messages to reach deep protocol states. The specification of the TLS protocol describes sound flows of messages and cryptographic operations.

Although the specification has been formally verified multiple times with significant results, a gap has emerged from the fact that implementations of the same protocol have not undergone the same logical analysis. Because the development of cryptographic protocols is error-prone, multiple security vulnerabilities have already been discovered in implementations of TLS which are not present in its specification.

Inspired by symbolic protocol verification, we present a reference implementation of a fuzzer named tlspuffin, which employs concrete semantics to execute TLS 1.2 and 1.3 symbolic traces. In fact, attacks which mix TLS versions are in scope of this implementation. This method allows us to utilize a genetic fuzzing algorithm to fuzz protocol flows, which proceeds in the following three stages:

  • By mutating traces, we can deviate from the specification to test logical flaws.
  • The selection of interesting protocol flows advances the fuzzing procedure.
  • A security violation oracle supervises executions and checks them for the absence of vulnerabilities.

The novel approach allows rediscovering known vulnerabilities which are out of scope for classical bit-level fuzzers. This proves that it is capable of reaching critical protocol states. Despite the promising methodology, no new vulnerabilities were found by tlspuffin. This can be explained by the fact that the implementation effort for TLS protocol primitives and extensions is high and not all features of the specification have been implemented yet. Nonetheless, the approach is promising in terms of quickly reaching high edge coverage, the expressiveness of executable protocol traces, and a stable and extensible implementation.

Features

  • Uses the LibAFL fuzzing framework
  • Fuzzer which is inspired by the Dolev-Yao symbolic model used in protocol verification
  • Domain specific mutators for Protocol Fuzzing!
  • Supported Libraries Under Test: OpenSSL 1.0.1f, 1.0.2u, 1.1.1k and LibreSSL 3.3.3
  • Reproducible for each LUT. We use Git submodules to link to forks that live in the tlspuffin organisation
  • 70% Test Coverage
  • Written in Rust!

Building

Now, to build the project:

git clone https://github.com/tlspuffin/tlspuffin
cd tlspuffin
git submodule update --init --recursive
cargo build

Running

Fuzz using three clients:

cargo run --bin tlspuffin -- --cores 0-3

Note: After switching the Library Under Test or its version (for example from OpenSSL 1.0.1 to 1.1.1), do a clean rebuild (cargo clean).
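A clean rebuild then looks like this (a minimal sketch; depending on the build configuration, additional feature flags may be required):

cargo clean
cargo build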

Testing

cargo test

Command-line Interface

The command-line syntax of tlspuffin is:

      tlspuffin [⟨options⟩] [⟨sub-command⟩]

Global Options

Before we explain each sub-command, we first go over the global options; a combined example invocation follows the list.

  • -c, --cores ⟨spec⟩

    This option specifies the cores to which the fuzzer assigns its worker processes. It can be specified either as a comma-separated list like "0,1,2,7" or as a range like "0-7". By default, the fuzzer runs only on core 0.

  • -i, --max-iters ⟨i⟩

    This option bounds the number of iterations the fuzzer performs. If omitted, the fuzzer runs indefinitely.

  • -p, --port ⟨n⟩

    The initial communication between the fuzzer broker and its workers happens over TCP/IP. Therefore, the broker requires a port allocation. The default port is 1337.

  • -s, --seed ⟨n⟩

    Defines an initial seed for the PRNG used for mutations. Note that this does not make fuzzing deterministic, because of the randomness introduced by multiprocessing.
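As mentioned above, the global options can be combined in a single invocation; the concrete values below are only illustrative:

cargo run --bin tlspuffin -- --cores 0-7 --max-iters 100000 --port 1338 --seed 42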

Sub-commands

Now we will go over the sub-commands execute, plot, experiment, and seed; example invocations follow the list.

  • execute ⟨input⟩

    This sub-command executes a single trace persisted in a file. The path to the file is provided by the ⟨input⟩ argument.

  • plot ⟨input⟩ ⟨format⟩ ⟨output_prefix⟩

    This sub-command plots the trace stored at ⟨input⟩ in the format specified by ⟨format⟩. The created graphics are stored at a path derived from ⟨output_prefix⟩. The option --multiple can be provided to create a separate file for each step in the trace. If the option --tree is given, then only a single graphic which contains all steps is produced.

  • experiment

    This sub-command initiates an experiment. Experiments are stored in a directory named experiments/ in the current working directory; each experiment gets its own sub-directory. The title and description of the experiment can be specified with --title ⟨t⟩ and --description ⟨d⟩ respectively. Both strings are persisted in the metadata of the experiment, together with the current commit hash of tlspuffin, the version, and the current date and time.

  • seed

    This sub-command serializes the default seed corpus into a directory named corpus/ in the current working directory. The default corpus is defined in the source code of tlspuffin using the trace DSL.
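The example invocations referenced above are sketched below; the trace file, output prefix, title, and description are placeholders:

cargo run --bin tlspuffin -- execute test.trace
cargo run --bin tlspuffin -- plot test.trace svg ./plots/test --multiple
cargo run --bin tlspuffin -- experiment --title "t1" --description "my experiment"
cargo run --bin tlspuffin -- seed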

Rust Setup

Install rustup.

The toolchain will be automatically downloaded when building this project. See ./rust-toolchain.toml for more details about the toolchain.

Make sure that you have the clang compiler installed. Optionally, also install llvm to have additional tools like sancov available. Also make sure that the usual build tools like make and gcc are installed; they may be needed to build OpenSSL.
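On Debian- or Ubuntu-based systems, for example, the prerequisites can be installed roughly as follows (a sketch; package names differ between distributions):

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
sudo apt install clang llvm make gcc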

Advanced Features

Running with ASAN

ASAN_OPTIONS=abort_on_error=1 \
    cargo run --bin tlspuffin --features asan -- --cores 0-3

It is important to enable abort_on_error; otherwise the fuzzer workers fail to restart on crashes.

Generate Corpus Seeds

cargo run --bin tlspuffin -- seed

Plot Symbolic Traces

To plot SVGs do the following:

cargo run --bin tlspuffin -- plot corpus/seed_client_attacker12.trace svg ./plots/seed_client_attacker12

Note: This requires that the dot binary is on your PATH.
Note: The utility tools/plot-corpus.sh plots a whole directory of traces.

Execute a Symbolic Trace (with ASAN)

To analyze crashes, you can also execute a trace which crashes the testing harness:

cargo run --bin tlspuffin -- execute test.trace

To do the same with ASAN enabled:

ASAN_OPTIONS=detect_leaks=0 \
      cargo run --bin tlspuffin --features asan -- execute test.trace

Crash Deduplication

The script tools/analyze-crashes.sh creates a log file for each crash and parses the ASAN output to group similar crashes together:

tools/analyze-crashes.sh

Benchmarking

benchmark.rs contains a benchmark which compares executing the dynamic functions to calling them directly. You can run it using:

cargo bench
xdg-open target/criterion/report/index.html

Documentation

This generates the documentation for this crate and opens it in the browser. It also includes the documentation of every dependency like LibAFL or rustls.

cargo doc --open

You can also view the up-to-date documentation here.


Comments
  • Support for WolfSSL through a new rust-wolfssl crate

    I tried another approach as I struggled with #136 to debug some SEGFAULT.

    Here the approach is to clone rust-openssl, minimize the exposed interface while exposing what we need for wolfssl_binding.rs, then plug in wolfssl-sys instead of openssl-sys.

    opened by LCBH 3
  • Rediscover wolfSSL vulnerabilities

    • CVE-2020-12457 in < 4.5.0 (https://nvd.nist.gov/vuln/detail/CVE-2020-12457): DoS against the server, needs change_cipher_spec (CCS) message mutations
    • CVE-2020-24613 in < 4.5.0 (https://nvd.nist.gov/vuln/detail/CVE-2020-24613): TLS 1.3 server authentication bypass
    • CVE-2021-3336 in < 4.7.0 (https://nvd.nist.gov/vuln/detail/CVE-2021-3336): TLS 1.3 server authentication bypass
    • CVE-2022-25638 in < 5.2.0 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25638): TLS 1.3 server authentication bypass
    • CVE-2022-25640 in < 5.2.0 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25640): attack on client authentication, needs client authentication through certificates
    opened by maxammann 1
Releases

  • evaluation