UnTracer-AFL

An AFL implementation with UnTracer (our coverage-guided tracer)

Overview

This repository contains an implementation of our prototype coverage-guided tracing framework, UnTracer, in the popular coverage-guided fuzzer AFL. Coverage-guided tracing employs two versions of the target binary: (1) a forkserver-only oracle binary, modified with basic-block-level software interrupts on unseen basic blocks, used to quickly identify coverage-increasing testcases; and (2) a fully-instrumented tracer binary, used to trace the coverage of all coverage-increasing testcases.

In UnTracer, both the oracle and tracer binaries use the AFL-inspired forkserver execution model. For oracle instrumentation we require that all target binaries be compiled with untracer-cc -- our "forkserver-only" modification of AFL's assembly-time instrumentation (afl-gcc/afl-as). For tracer binary instrumentation we utilize Dyninst, with much of our code based on AFL-Dyninst. We plan to incorporate a purely binary-only ("black-box") instrumentation approach in the near future. Our current implementation of UnTracer supports basic block coverage.
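
At a high level, the fuzzing loop alternates between the two binaries. The following is a minimal sketch only, not the actual untracer-afl implementation; the names target.oracle and target.tracer, and the use of the exit status to detect an oracle interrupt, are illustrative assumptions:

# Simplified sketch of coverage-guided tracing (illustration only).
# target.oracle / target.tracer are hypothetical names for the two binaries described above.
for testcase in queue/*; do
    ./target.oracle < "$testcase"        # interrupts remain only on unseen basic blocks
    if [ $? -ne 0 ]; then                # abnormal exit: testcase is likely coverage-increasing
        ./target.tracer < "$testcase"    # pay the full tracing cost only for this testcase
        # ...record its coverage and remove interrupts on the newly seen blocks in the oracle...
    fi
done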

Presented in our paper Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing
(2019 IEEE Symposium on Security and Privacy).
Citing this repository:

@inproceedings{nagy:fullspeedfuzzing,
  title = {Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing},
  author = {Stefan Nagy and Matthew Hicks},
  booktitle = {{IEEE} Symposium on Security and Privacy (Oakland)},
  year = {2019},
}

Developers: Stefan Nagy ([email protected]) and Matthew Hicks ([email protected])
License: MIT License
Disclaimer: This software is strictly a research prototype.

INSTALLATION

1. Download and build Dyninst (we used v9.3.2)

sudo apt-get install cmake m4 zlib1g-dev libboost-all-dev libiberty-dev
wget https://github.com/dyninst/dyninst/archive/v9.3.2.tar.gz
tar -xf v9.3.2.tar.gz dyninst-9.3.2/
mkdir dynBuildDir
cd dynBuildDir
cmake ../dyninst-9.3.2/ -DCMAKE_INSTALL_PREFIX=`pwd`
make
make install
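
If the build succeeded, the Dyninst runtime library referenced in the next step should now be present under the install prefix (a quick optional check):

ls lib/libdyninstAPI_RT.so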

2. Download UnTracer-AFL (this repo)

git clone https://github.com/FoRTE-Research/UnTracer-AFL

3. Configure environment variables

export DYNINST_INSTALL=/path/to/dynBuildDir
export UNTRACER_AFL_PATH=/path/to/UnTracer-AFL

export DYNINSTAPI_RT_LIB=$DYNINST_INSTALL/lib/libdyninstAPI_RT.so
export LD_LIBRARY_PATH=$DYNINST_INSTALL/lib:$UNTRACER_AFL_PATH
export PATH=$PATH:$UNTRACER_AFL_PATH
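
These exports apply only to the current shell session. If you want them to persist, one option is to append them to your shell profile (adjust the paths for your system):

cat >> ~/.bashrc << 'EOF'
export DYNINST_INSTALL=/path/to/dynBuildDir
export UNTRACER_AFL_PATH=/path/to/UnTracer-AFL
export DYNINSTAPI_RT_LIB=$DYNINST_INSTALL/lib/libdyninstAPI_RT.so
export LD_LIBRARY_PATH=$DYNINST_INSTALL/lib:$UNTRACER_AFL_PATH
export PATH=$PATH:$UNTRACER_AFL_PATH
EOF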

4. Build UnTracer-AFL

Update DYN_ROOT in UnTracer-AFL/Makefile to your Dyninst install directory. Then, run the following commands:

make clean && make all
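
A successful build should leave the untracer-afl fuzzer and the untracer-* compiler wrappers used below in the repository root; as a quick optional check:

ls $UNTRACER_AFL_PATH/untracer-afl $UNTRACER_AFL_PATH/untracer-clang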

USAGE

First, compile all target binaries using "forkserver-only" instrumentation. As with AFL, you will need to manually set the C compiler (untracer-clang or untracer-gcc) and/or the C++ compiler (untracer-clang++ or untracer-g++). Note that only non-position-independent target binaries are supported, so compile all target binaries with the -no-pie compiler flag (unnecessary for Clang).

NOTE: We provide a set of fuzzing-ready benchmarks here: https://github.com/FoRTE-Research/FoRTE-FuzzBench.

For example:

$ CC=/path/to/afl/untracer-clang CXX=/path/to/afl/untracer-clang++ ./configure --disable-shared
$ make clean all
Instrumenting in forkserver-only mode...

Then, run untracer-afl as follows:

untracer-afl -i [/path/to/seed/dir] -o [/path/to/out/dir] [optional_args] -- [/path/to/target] [target_args]
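
For instance, assuming a forkserver-only-instrumented readelf (e.g. built from the FoRTE-FuzzBench benchmarks) and that the target reads its input file via AFL's @@ placeholder (as in stock AFL), an invocation might look like:

untracer-afl -i seeds/ -o out/ -- ./readelf -a @@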

Status Screen

  • calib execs and trim execs - Number of testcase calibration and trimming executions, respectively. Tracing is done for both.
  • block coverage - Percentage of total blocks found (left) and the number of total blocks (right).
  • traced / queued - Ratio of traced versus queued testcases. This ratio should (ideally) be 1:1 but will increase as trace timeouts occur.
  • trace tmouts (discarded) - Number of testcases which timed out during tracing. Like AFL, we do not queue these.
  • no new bits (discarded) - Number of testcases which were marked coverage-increasing by the oracle but did not actually increase coverage. This should (ideally) be 0.
