Ankou: Guiding Grey-box Fuzzing towards Combinatorial Difference

Ankou

Ankou is a source-based grey-box fuzzer. It uses a richer fitness function than plain branch coverage: beyond individual branches, it considers the combination of branches exercised during program execution. The details of the technique can be found in our paper "Ankou: Guiding Grey-box Fuzzing towards Combinatorial Difference", published at ICSE 2020.

Dependencies

Go

Ankou is written solely in Go and thus requires a Go installation. Be sure to set the GOPATH environment variable, for example to the ~/go directory.
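
For example, assuming the default ~/go location (adjust the paths to your setup):

# Add the following to your shell profile, e.g. ~/.bashrc
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin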

AFL

Ankou relies on AFL instrumentation: fuzzed targets need to be compiled with afl-gcc or afl-clang. To install AFL:

wget http://lcamtuf.coredump.cx/afl/releases/afl-latest.tgz
tar xf afl-latest.tgz
cd afl-2.52b
make
# The last command is optional, but if you don't install the AFL compilers,
# you'll need to provide their absolute path in the configure step below.
sudo make install
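
You can quickly check that the AFL compilers are reachable from your PATH; afl-gcc prints its banner before delegating to gcc:

afl-gcc --version   # prints the afl-cc banner, then the gcc version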

GDB

For triaging, gdb is required, and ASLR needs to be deactivated:

echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

Note that when using docker containers, this needs to be run on the host.
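
To verify, read the setting back; 0 means ASLR is disabled:

cat /proc/sys/kernel/randomize_va_space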

Installation

Once Go and AFL are installed, you can get Ankou by:

go get github.com/SoftSec-KAIST/Ankou   # Clone Ankou and its dependencies
go build github.com/SoftSec-KAIST/Ankou # Compile Ankou

Note: if you are getting Ankou from another location, this needs to be done manually:
mkdir -p $GOPATH/src/github.com/SoftSec-KAIST
cd $GOPATH/src/github.com/SoftSec-KAIST
git clone REPO  # By default REPO is https://github.com/SoftSec-KAIST/Ankou
cd Ankou
go get .    # Get dependencies
go build .  # Compile

Usage

Now we are ready to fuzz. We first need to compile the target with afl-gcc or afl-clang. Let's take the classic starting example for fuzzing, binutils:

wget https://mirror.ibcp.fr/pub/gnu/binutils/binutils-2.33.1.tar.xz
tar xf binutils-2.33.1.tar.xz
cd binutils-2.33.1
CC=afl-gcc CXX=afl-g++ ./configure --prefix=`pwd`/install
make -j
make install

Now we are ready to run Ankou:

cd install/bin
mkdir seeds; cp elfedit seeds/ # Put anything in the seeds folder.
go run github.com/SoftSec-KAIST/Ankou -app ./readelf -args "-a @@" -i seeds -o out
# Or use the binary we compiled above:
/path/to/Ankou -app ./readelf -args "-a @@" -i seeds -o out
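
Before a long campaign, you can optionally check that the target is indeed instrumented: binaries compiled with afl-gcc embed the __AFL_SHM_ID marker, and AFL's afl-showmap runs the target once and reports the tuples it captured:

strings ./readelf | grep -q __AFL_SHM_ID && echo instrumented
afl-showmap -o /tmp/readelf.map -- ./readelf -a seeds/elfedit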

Evaluation Reproduction

Once Ankou is installed, the evaluation can be reproduced as follows:

  1. Compile the 24 packages mentioned in the paper, at the same version or commit, using afl-gcc. The source of every package, at the version used in the Ankou evaluation, can be found at https://github.com/SoftSec-KAIST/Ankou-Benchmark. Additionally, this repository includes the seeds used to initialize the evaluation fuzzing campaigns.
  2. Run the produced subjects with the commands found in benchmark/configuration.json. benchmark/rq1_rq3.json contains only the 24 subjects used for Research Questions 1 and 3 of the paper.
  3. Analyze the Ankou output directory for results. Crashes are listed in $OUTPUT_DIR/crashes-* and found seeds in $OUTPUT_DIR/seeds-*. Statistics of the fuzzing campaign can be found in the CSV files of the $OUTPUT_DIR/status* directory. The edge_n value of receiver.csv represents the branch coverage, and the execN column of seed_manager.csv represents the total number of test cases executed so far; divide it by the time column to obtain the throughput.

Since the benchmark contains too many programs to walk through here, this example uses a single package: cflow.

  1. Compilation:
git clone https://github.com/SoftSec-KAIST/Ankou-Benchmark
cd Ankou-Benchmark
tar xf seeds.tar.xz
cd sources
tar xf cflow-1.6.tar.xz
cd cflow-1.6
CC=afl-gcc CXX=afl-g++ ./configure --prefix=`pwd`/build
make -j
make install
cd ../../..
  2. Preparation of the fuzzing campaign:
mkdir fuzzrun
cp Ankou-Benchmark/sources/cflow-1.6/build/bin/cflow fuzzrun
cp -r Ankou-Benchmark/seeds/cflow fuzzrun/seeds
  3. Run the campaign. The command below starts a 24-hour fuzzing campaign; the '-dur' option can be adjusted, or Ankou can be interrupted earlier. With this version of cflow, initialized with these seeds, a crash should be found in less than an hour:
cd fuzzrun
go run github.com/SoftSec-KAIST/Ankou -app cflow -args "-o /dev/null @@" \
    -i seeds -threads 1 -o cflow_out -dur 24h
  4. Results analysis:
cd cflow_out/status_*
# Print the final branch coverage:
python -c "print(open('receiver.csv').readlines()[-1].split(',')[0])"
# Print the overall throughput:
python -c "last = open('seed_manager.csv').readlines()[-1].split(','); print(float(last[5])/int(last[6]))"
# Print effectiveness of the dynamic PCA (see RQ2):
python -c "last = open('receiver.csv').readlines()[-1].split(','); print('{}%'.format(100-100*float(last[2])/float(last[1])))"

Safe Stack Hash Triaging

Once the environment is set up, the triaging scripts work in two steps:

  1. Run the binary on the crashing input to produce a core file. Using ulimit -c unlimited ensures that the core is dumped (see the example after this list).
  2. Use the scripts in the triage folder of this repository:
cd $GOPATH/src/github.com/SoftSec-KAIST/Ankou/triage
gdb -x triage.py -x triage.gdb -batch -c /path/to/core /path/to/binary
cat hash.txt # The stack hashes are found in this text file.
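
For example, for a cflow crash found above (the input path is illustrative; where the core file appears depends on your system's core_pattern setting):

ulimit -c unlimited                            # allow core dumps in this shell
./cflow -o /dev/null /path/to/crashing_input   # the crash produces a core file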