A tool that compiles a CSV of stats for all HackerOne (h1) programs

Overview

h1stats - h1 Program Stats Scraper

This Python 3 script calls out to HackerOne's GraphQL API and scrapes information and stats for every currently active h1 program. All programs and their stats are tabulated into a generated CSV file, from which you can compare and contrast program stats to pick high-fidelity targets. Furthermore, you can supply your h1 session cookie to the script to also compile all of your private programs into the CSV.
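Under the hood, the collection step boils down to posting paginated GraphQL queries to https://hackerone.com/graphql and walking the program directory. The sketch below only illustrates that idea; the query name and field selections (DirectoryPrograms, teams, offers_bounties, and so on) are assumptions for illustration, not the exact query this script sends.

import requests

GRAPHQL_URL = "https://hackerone.com/graphql"

# Illustrative query only; the real query pulls every stat listed under
# "Data Collected" below.
QUERY = """
query DirectoryPrograms($cursor: String) {
  teams(first: 50, after: $cursor) {
    pageInfo { hasNextPage endCursor }
    edges { node { name handle submission_state offers_bounties } }
  }
}
"""

def fetch_page(session, cursor=None):
    # Fetch one page of the program directory; loop on pageInfo.endCursor
    # until hasNextPage is false to collect every program.
    resp = session.post(GRAPHQL_URL,
                        json={"query": QUERY, "variables": {"cursor": cursor}},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()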

Data Collected (see the CSV sketch after this list):

  • Program Name
  • Program URL
  • Program Type (Public or Private)
  • Clear Program (Yes/No)
  • Offers Bounties (Yes/No)
  • Max Critical (USD)
  • Max High (USD)
  • Max Medium (USD)
  • Max Low (USD)
  • Average Bounty Max (USD)
  • Average Bounty Min (USD)
  • Top Bounty Max (USD)
  • Top Bounty Min (USD)
  • Resolved Reports
  • Reports Received in 90 Days
  • Total Bounties Paid (USD)
  • Total Bounties Paid in 90 Days (USD)
  • Avg Time to First Response (Hours)
  • Avg Time to Triage (Hours)
  • Avg Time to Bounty (Hours)
  • Avg Time to Resolution (Hours)
  • Program Age (Months)
  • Days Since Last Report
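
Every program becomes one row in the CSV, with one column per item above. A minimal sketch of the tabulation step, assuming the standard csv module; the exact header strings and filename logic in the real script may differ.

import csv
from datetime import date

COLUMNS = [
    "Program Name", "Program URL", "Program Type (Public or Private)",
    "Clear Program (Yes/No)", "Offers Bounties (Yes/No)",
    "Max Critical (USD)", "Max High (USD)", "Max Medium (USD)", "Max Low (USD)",
    "Average Bounty Max (USD)", "Average Bounty Min (USD)",
    "Top Bounty Max (USD)", "Top Bounty Min (USD)",
    "Resolved Reports", "Reports Received in 90 Days",
    "Total Bounties Paid (USD)", "Total Bounties Paid in 90 Days (USD)",
    "Avg Time to First Response (Hours)", "Avg Time to Triage (Hours)",
    "Avg Time to Bounty (Hours)", "Avg Time to Resolution (Hours)",
    "Program Age (Months)", "Days Since Last Report",
]

def write_csv(rows, private=False):
    # Filename follows the pattern shown in the examples below,
    # e.g. h1stats-2021-4-24.csv or h1stats-PRIVATE-2021-4-24.csv.
    today = date.today()
    tag = "-PRIVATE" if private else ""
    filename = "h1stats{}-{}-{}-{}.csv".format(tag, today.year, today.month, today.day)
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        writer.writerows(rows)  # each row is a dict keyed by COLUMNS
    return filename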

Usage

Normal usage (public programs): python3 h1stats

Authenticated usage (public and private programs): python3 h1stats [<Your HackerOne __Host-session Token>]

WARNING (Authenticated Usage)

THIS SCRIPT HANDLES YOUR H1 SESSION TOKEN, WHICH GRANTS ACCESS TO YOUR HACKERONE PRIVATE DATA AND THE PRIVATE DATA OF YOUR HACKERONE PROGRAMS. BE CAREFUL WHEN HANDLING THIS TOKEN. THE AUTHORS ARE NOT LIABLE FOR ANY MISUSE OF THIS SCRIPT OR YOUR HACKERONE SESSION TOKEN. PLEASE USE AT YOUR OWN RISK. DO NOT PUBLISH ANY CSVs WITH HACKERONE PRIVATE PROGRAM DATA.

For authenticated usage, it is suggested that you assign your token to an environment variable once using export and pass that variable in the script's argument list (as shown in the examples).
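
For illustration, a minimal sketch of how the optional token argument could be picked up and attached as the __Host-session cookie; this is assumed wiring, not the script's exact code.

import sys
import requests

def build_session():
    # Sketch: attach the __Host-session cookie if a token was passed on argv.
    s = requests.Session()
    if len(sys.argv) > 1:
        s.cookies.set("__Host-session", sys.argv[1])
        print("[+] Using specified session cookie")
    else:
        print("[+] No session cookie specified")
    return s

With the export flow shown in the examples, $H1CRED reaches the script as sys.argv[1].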

Examples

Normal Flow (Public Only):

bash> python3 h1stats
  _     _ ____  _        _
 | |__ / / ___|| |_ __ _| |_ ___
 | '_ \| \___ \| __/ _` | __/ __|
 | | | | |___) | || (_| | |_\__ \
 |_| |_|_|____/ \__\__,_|\__|___/

                      defparam

[+] No session cookie specified
[+] Collecting public data...
[+] Please wait... (this may take several minutes)
[+] Collecting... (350 programs)
[+] Wrote all data to: h1stats-2021-4-24.csv
[+] Done!

Authenticated Flow (Public and Private):

bash> export H1CRED="JGH92kd9...b5e" # HackerOne session cookie
bash> python3 h1stats $H1CRED
  _     _ ____  _        _
 | |__ / / ___|| |_ __ _| |_ ___
 | '_ \| \___ \| __/ _` | __/ __|
 | | | | |___) | || (_| | |_\__ \
 |_| |_|_|____/ \__\__,_|\__|___/

                      defparam

[+] Using specified session cookie
[+] Collecting public and private data...
[+] Please wait... (this may take several minutes)
[+] Collecting... (400 programs)
[+] Wrote all data to: h1stats-PRIVATE-2021-4-24.csv
[+] Warning: this data contains private information under NDA, do not publish!
[+] Done!

Owner

Evan, Architect, Hacker, FPGA Whisperer, Fuzzerer