dyld_shared_cache processing / Single-Image loading for BinaryNinja

Overview

Dyld Shared Cache Parser

Author: cynder (kat)

Dyld Shared Cache Support for BinaryNinja

BinaryNinja Screenshot

Load a single image from the cache without any of the fuss of manually loading several unrelated images, without the awful off-image addresses, and with better output than IDA, Hopper, or any other disassembler on the market.

Installation + Usage

  1. Open the plugin manager
  2. Search for "Dyld" and install this plugin

Usage:

  1. Open a Dyld Shared Cache file with BN
  2. Select the image you would like to disassemble
  3. Congrats, you are now reverse engineering the Mach-O (a headless sketch of the same flow is shown below)
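
For reference, the same flow can also be scripted. The snippet below is only a minimal sketch under several assumptions: the plugin is installed (so its 'DyldSharedCache' view type is registered), your license permits headless API use, and the cache path is a placeholder; the image-selection prompt may still require the UI depending on your setup.

    # Minimal headless sketch; the cache path below is a placeholder.
    import binaryninja

    cache_path = "/path/to/dyld_shared_cache_arm64e"  # placeholder

    # binaryninja.load() opens the file and runs the registered BinaryView types,
    # which is where this plugin kicks in for DSC files.
    bv = binaryninja.load(cache_path)  # older releases: binaryninja.open_view(cache_path)
    print(f"View type: {bv.view_type}")
    print(f"Recovered {len(list(bv.functions))} functions")
    bv.file.close()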

Description:

This project acts as an interface for two separate projects: DyldExtractor and ktool. Mainly DyldExtractor.

DyldExtractor is a project written primarily by 'arandomdev', designed for standalone CLI dyld_shared_cache extraction. It is the best tool for the job, and it reverses the majority of the "optimizations" that make DSC reverse engineering ugly and painful. Using this plugin, Binja's processing should outperform IDA's, and it won't require IDA's routine of repeatedly right-clicking and manually loading tons of modules.

This version of DyldExtractor has a lot of modifications (read: a lot of commented-out lines) compared to the original, made so it functions better in the Binja environment.

ktool is a multifaceted project I wrote primarily for Mach-O + ObjC parsing.

Here it is mainly used for very basic parsing of the extracted output, as we need to properly write the segments into the VM (and scrap all the DSC data that was originally in the file) so the Mach-O view knows how to parse it.
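
As a rough illustration only (not the plugin's actual code), mapping extracted segments into the view with Binary Ninja's API could look like the sketch below; extracted_segments and its fields are hypothetical stand-ins for whatever the Mach-O parse reports.

    from binaryninja import SegmentFlag

    def map_segments(bv, extracted_segments):
        """Map hypothetical (vm_address, file_offset, size) records into the view."""
        # Real code would derive flags per segment; rwx is used here only for the sketch.
        flags = (
            SegmentFlag.SegmentReadable
            | SegmentFlag.SegmentWritable
            | SegmentFlag.SegmentExecutable
        )
        for seg in extracted_segments:
            # add_auto_segment(start, length, data_offset, data_length, flags)
            bv.add_auto_segment(seg.vm_address, seg.size, seg.file_offset, seg.size, flags)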

License

This plugin, along with ktool and DyldExtractor, is released under the MIT license. Both of those projects are vendored within this one to make installation slightly simpler.

Comments
  • TypeError: cannot unpack non-iterable NoneType object

    Tried this just now, and got this, trying to extract the macOS 13.1 x86_64h cache:

    Successfully installed: Dyld Shared Cache Processor
    Loaded python3 plugin 'cxnder_bndyldsharedcache'
    Traceback (most recent call last):
      File "/Applications/Binary Ninja.app/Contents/MacOS/plugins/../../Resources/python/binaryninja/binaryview.py", line 2818, in _init
        return self.init()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/dsc.py", line 101, in init
        stub_fixer.fixStubs(extraction_ctx)
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 1681, in fixStubs
        _StubFixer(extractionCtx).run()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 1011, in run
        self._symbolizer = _Symbolizer(self._extractionCtx)
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 59, in __init__
        self._enumerateExports()
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 101, in _enumerateExports
        if depInfo := self._getDepInfo(dylib, self._machoCtx):
      File "/Users/torarne/Library/Application Support/Binary Ninja/repositories/community/plugins/cxnder_bndyldsharedcache/DyldExtractor/converter/stub_fixer.py", line 179, in _getDepInfo
        imageOff, dyldCtx = self._dyldCtx.convertAddr(imageAddr)
    TypeError: cannot unpack non-iterable NoneType object
    BinaryView of type 'DyldSharedCache' failed to initialize!
    No available/valid debug info parsers for `Raw` view
    Found more than 'analysis.limits.stringSearch' (0x100000) strings aborting search for range: 0 - 0x33be0000
    Analysis update took 12.239 seconds
    
    
    opened by torarnv 1
  • prep for plugin manager

    Looks like only two changes are required to get this added to the BN plugin manager. The first is to add a requirements.txt -- while ktool and DyldExtractor are versioned, capstone is still a requirement of DyldExtractor so it would be nice to expose that.

    Or, better yet, replace the disassembler with BN's own disassembly to remove the dependency entirely. That also means there's no need to hack around the lack of PAC instructions as BN can disassemble those just fine.

    The other step is to make a release, then we can add the plugin directly to the plugin manager which would be really handy!

    opened by psifertex 1
  • fix relative imports for built-in BN Py 3.8.9 on MacOS

    I'm not sure whether it's the exact python version or the fact that I'm using the BN shipped Python versus homebrew / ports but I'm unable to use the plugin as-is on MacOS without this change. I don't know how much this versioned DyldExtractor has differed, happy to test/submit upstream in the parent repo if you prefer.

    opened by psifertex 0
Releases (1.0.0)
Owner
cynder
macOS/iOS development @ reverse engineering chick. // maintainer of the iPhone Dev Wiki (https://iphonedev.wiki)