HyDiff: Hybrid Differential Software Analysis

This repository provides the tool and the evaluation subjects for the paper "HyDiff: Hybrid Differential Software Analysis", accepted at the technical track of ICSE 2020. A pre-print of the paper is available here.

Authors: Yannic Noller, Corina S. Pasareanu, Marcel Böhme, Youcheng Sun, Hoang Lam Nguyen, and Lars Grunske.

The repository includes the tool, the evaluation subjects, and the scripts to reproduce our evaluation.

A pre-built version of HyDiff is also available as a Docker image:

docker pull yannicnoller/hydiff
docker run -it --rm yannicnoller/hydiff
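
If you want to keep results produced inside the container, you can start it without --rm and copy them out afterwards. This is only a sketch: the path inside the container is a placeholder and depends on where the repository lives in the image.

# start a named container so its filesystem survives after exit
docker run -it --name hydiff yannicnoller/hydiff
# later, from the host: copy the experiment results out of the container
# (replace /path/to/hydiff with the repository location inside the image)
docker cp hydiff:/path/to/hydiff/experiments ./hydiff-results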

Tool

HyDiff's technical framework is built on top of Badger, DifFuzz, and Symbolic PathFinder (SPF). We provide a complete snapshot of all tools and our extensions.

Requirements

  • Git, Ant, Build-Essentials, Gradle
  • Java JDK = 1.8
  • Python 3, NumPy package
  • recommended: Ubuntu 18.04.1 LTS

Folder Structure

The folder tool contains two subfolders, fuzzing and symbolicexecution, representing the two components of HyDiff.

fuzzing

  • afl-differential: The fuzzing component is built on top of DifFuzz and KelinciWCA (the fuzzing part of Badger). Both use AFL as the underlying fuzzing engine. In order to make it easy for the users, we provide our complete modified AFL variant in this folder. Our modifications are based on afl-2.52b.

  • kelinci-differential: Kelinci leverages a server-client architecture to make AFL applicable to Java applications; please refer to the Kelinci poster paper for more details. We modified it to make it usable for general differential analysis. It includes an interface program that connects the Kelinci server to the AFL fuzzer, and the instrumentor project, which is used to instrument the Java bytecode. The instrumentation handles the coverage reporting and the collection of our differential metrics. The Kelinci server handles requests from AFL to execute a mutated input on the application (a workflow sketch follows below).
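
To illustrate this server-client setup, a Kelinci-style fuzzing session is typically wired up roughly as follows. This sketch is based on the upstream Kelinci workflow; the concrete jar names, class names, driver, and options in kelinci-differential may differ, so treat them as assumptions.

# 1. Instrument the target bytecode (adds coverage reporting and, in our fork, the differential metrics)
java -cp instrumentor.jar edu.cmu.sv.kelinci.instrumentor.Instrumentor -i bin/ -o bin-instr/

# 2. Start the Kelinci server on the instrumented classes; @@ marks where the input file is passed
java -cp bin-instr/:instrumentor.jar edu.cmu.sv.kelinci.Kelinci MyDriver @@

# 3. Let AFL fuzz through the interface program, which forwards each mutated input to the server
afl-fuzz -i in_dir -o out_dir -- ./interface @@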

symbolicexecution

  • jpf-core: Our symbolic execution is built on top of Symbolic PathFinder (SPF), an extension of Java PathFinder (JPF); it is therefore necessary to include the core implementation of JPF.

  • jpf-symbc-differential: In order to make SPF applicable to differential analysis, we modified it in several locations and added the ability to perform a form of shadow symbolic execution (cf. Complete Shadow Symbolic Execution with Java PathFinder). This folder includes the modified SPF project.

  • badger-differential: HyDiff performs a hybrid analysis by running fuzzing and symbolic execution in parallel. This concept is based on Badger, which provides the technical basis for our implementation. This folder includes the modified Badger project, which enables the differential hybrid analysis, including the differential dynamic symbolic execution.

How to install the tool and run our evaluation

Be aware that the instructions have been tested on Unix systems only.

  1. First you need to build the tool and the subjects. We provide the script setup.sh to build everything (see the command sketch after this list). Note: the script may overwrite an existing site.properties file, which is required for JPF/SPF.

  2. Test the installation: the best way to test the installation is to execute the evaluation of our example program (cf. Listing 1 in our paper) by running the script run_example.sh. As provided, it runs each analysis (differential fuzzing only, differential symbolic execution only, and the hybrid analysis) once. The values presented in Section 2.2 of our paper are averaged over 30 runs; to perform 30 runs each you can easily adapt the script, but for a first test you can leave it as it is. The script should produce three folders:

    • experiments/subjects/example/fuzzer-out-1: results for differential fuzzing
    • experiments/subjects/example/symexe-out-1: results for differential symbolic execution
    • experiments/subjects/example/hydiff-out-1: results for HyDiff (hybrid combination)
    It will also produce three csv files with the summarized statistics for each experiment:
    • experiments/subjects/example/fuzzer-out-results-n=1-t=600-s=30.csv
    • experiments/subjects/example/symexe-out-results-n=1-t=600-s=30.csv
    • experiments/subjects/example/hydiff-out-results-n=1-t=600-s=30-d=0.csv
  3. After finishing the build process and testing the installation, you can use the provided run scripts (experiments/scripts) to replay HyDiff's evaluation or to perform your own differential analysis. HyDiff's evaluation covers three types of differential analysis (regression, side-channel, and DNN); for each of them there is a separate run script.
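
For steps 1 and 2, a minimal command sequence could look as follows (a sketch assuming both scripts sit in the repository root; adjust the paths to your checkout):

# build HyDiff and all evaluation subjects (may overwrite an existing site.properties)
./setup.sh

# run the example evaluation once per analysis:
# differential fuzzing, differential symbolic execution, and the hybrid analysis
./run_example.sh

# inspect one of the summarized statistics files
cat experiments/subjects/example/hydiff-out-results-n=1-t=600-s=30-d=0.csv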

At the beginning of each run script you can define the experiment parameters (a sketch of such a configuration follows this list):

  • number_of_runs: N, the number of evaluation runs for each subject (30 for all experiments)
  • time_bound: T, the time bound for the analysis (regression: 600sec, side-channel: 1800sec, and dnn: 3600sec)
  • step_size_eval: S, the step size for the evaluation (30sec for all experiments)
  • time_symexe_first: D, the delay after which fuzzing is started following symbolic execution (DNN subjects only)
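
A hedged sketch of what such a parameter block might look like at the top of a run script for the regression subjects; the variable names match the list above, but the exact syntax in the provided scripts may differ:

# experiment parameters (values used in the paper for the regression subjects)
number_of_runs=30    # N: evaluation runs per subject
time_bound=600       # T: time bound in seconds (1800 for side-channel, 3600 for DNN)
step_size_eval=30    # S: evaluation step size in seconds
# time_symexe_first=<D>   # D: delay before fuzzing starts (DNN subjects only)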

Each run script first executes differential fuzzing, then differential symbolic execution, and finally the hybrid analysis. Please adapt our scripts to perform your own analysis.

For each subject, analysis type, and experiment repetition i, the scripts produce folders like experiments/subjects/<subject>/<analysis>-out-<i> and summarize the experiments in csv files like experiments/subjects/<subject>/<analysis>-out-results-n=<N>-t=<T>-s=<S>-d=<D>.csv.

Complete Evaluation Reproduction

In order to reproduce our evaluation completely, you need to run the three run scripts mentioned above; they include the generation of all statistics. Be aware that the total runtime of all analyses exceeds 53 days because of the long time bounds and the number of repetitions. It might therefore be worthwhile to run only specific subjects, to distribute the analyses across several machines, to shorten the time bound, or to reduce the number of repetitions. Feel free to adjust the scripts or reuse them for your own purposes.

Statistics

As mentioned earlier, the statistics are generated automatically by our run scripts, which execute the Python scripts from the scripts folder to aggregate the individual experiment runs. They generate csv files with the averaged result values.

Separate aggregation scripts (located in the scripts folder) are used for the regression and DNN analyses and for the side-channel analysis.

All csv files for our experiments are included in experiments/results.

Feel free to adapt these evaluation scripts for your own purpose.

Maintainers

  • Yannic Noller (yannic.noller at acm.org)

License

This project is licensed under the MIT License - see the LICENSE file for details

Releases

  • v1.0.0 (Jan 26, 2020)

    First official release for HyDiff. We added all parts of our tool and all evaluation subjects to support the reproduction of our results. This release is submitted to the ICSE 2020 Artifact Evaluation.
