Hydra: An Extensible Fuzzing Framework for Finding Semantic Bugs in File Systems

Paper

Overview

Hydra is a state-of-the-art fuzzing framework for file systems. It provides the building blocks for file system fuzzing, including multi-dimensional input mutators, feedback engines, a libOS-based executor, and a bug reproducer with a test-case minimizer. Developers only need to focus on writing (or porting) a checker that defines the core logic for finding the types of bugs they are interested in. Along with the framework, this repository includes our in-house crash consistency checker (SymC3), which revealed 11 new crash consistency bugs in ext4, Btrfs, F2FS, and two verified file systems, FSCQ and Yxv6.

Contents

  • General code base

    • src/combined: Hydra input mutator
    • src/lkl/tools/lkl/{FS}-combined-consistency: Hydra LibOS-based Executor (will be downloaded and compiled during setup)
  • Checkers

    • src/emulator: Hydra's in-house crash consistency checker, SymC3

Setup

1. All setup should be done under src

$ cd src

2. Install dependencies

$ ./dep.sh

3. Compile for each file system

$ make build-btrfs-imgwrp
  • The same can be done for the other file systems (a convenience loop is sketched after this list):
$ make build-ext4-imgwrp
$ make build-f2fs-imgwrp
$ make build-xfs-imgwrp
  • (Skip this if you want to test the latest kernel.) To reproduce the bugs presented in the SOSP'19 paper, back-port LKL to kernel 4.16 as follows:
$ cd lkl                  # pwd: proj_root/src/lkl (assuming you start in src)
$ make mrproper
$ git pull
$ git checkout v4.16-backport
$ ./compile -t btrfs
$ cd ..                   # pwd: proj_root/src
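As referenced above, the four per-file-system builds can also be driven by a single loop. This is just a convenience sketch; it invokes exactly the make targets listed above:
$ for fs in btrfs ext4 f2fs xfs; do make build-$fs-imgwrp; done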

4. Set up environments

$ sudo ./prepare_fuzzing.sh
$ ./prepare_env.sh

5. Run fuzzing (single or multiple instances)

  • Single instance
$ ./run.py -t [fstype] -c [cpu_id] -l [tmpfs_id] -g [fuzz_group]

-t: file system type; choose from btrfs, f2fs, ext4, xfs
-c: CPU id to which this fuzzer instance is pinned
-l: tmpfs id under which logs are stored (choose one from /tmp/mosbench/tmpfs-separate/)
-g: group id for parallel fuzzing (default: 0)

e.g., ./run.py -t btrfs -c 4 -l 10 -g 1
runs the btrfs fuzzer and pins the instance to core #4;
logs accumulate under /tmp/mosbench/tmpfs-separate/10/log/.
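Once an instance is running, the accumulated bug logs can be inspected directly; the path follows from the -l value (10 in the example above):
$ ls /tmp/mosbench/tmpfs-separate/10/log/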
  • You can also run multiple fuzzers in parallel by doing:
[Terminal 1] ./run.py -t btrfs -c 1 -l 10 -g 1
[Terminal 2] ./run.py -t btrfs -c 2 -l 10 -g 1
[Terminal 3] ./run.py -t btrfs -c 3 -l 10 -g 1
[Terminal 4] ./run.py -t btrfs -c 4 -l 10 -g 1
# all btrfs bug logs will be under /tmp/mosbench/tmpfs-separate/10/log/

[Terminal 5] ./run.py -t f2fs -c 5 -l 11 -g 2
[Terminal 6] ./run.py -t f2fs -c 6 -l 11 -g 2
[Terminal 7] ./run.py -t f2fs -c 7 -l 11 -g 2
[Terminal 8] ./run.py -t f2fs -c 8 -l 11 -g 2
# all f2fs bug logs will be under /tmp/mosbench/tmpfs-separate/11/log/
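The per-terminal commands above can also be launched from a single shell by backgrounding each instance; a minimal sketch, assuming backgrounded instances behave the same as ones started in separate terminals:
$ for c in 1 2 3 4; do ./run.py -t btrfs -c $c -l 10 -g 1 & done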

6. Important note

It is highly encouraged that you use separate input, output, and log directories for each file system, unless you are running multiple fuzzer instances against the same file system in parallel (as in the examples above, where all btrfs instances share tmpfs id 10). Reusing directories left over from earlier tests of a different file system will not work properly.

7. Experiments

Please refer to EXPERIMENTS.md for detailed experiment information.

Contacts

gts3.org ([email protected])
https://gts3.org