
YAWPS

Yet Another Workflow Parser for SecurityHub

"Screaming pepper" by Rum Bucolic Ape is licensed with CC BY-ND 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/2.0/

Purpose

Currently SecurityHub has an AWS Chatbot integration that's a bit lacking: everything in SecurityHub goes to Chatbot, which means a single channel flooded with every alert.

With Cloud Custodian's recent support for SecurityHub and AWS Organizations, we have a good way to send all alerts for an entire org to Slack. But that means every account goes to a single channel.

This repo is part of a multi-part talk/demo on how to intelligently route account messages to different Slack channels.

In the scenario where a team owns an account, it would be nice to let Cloud Custodian generate meaningful SecurityHub notifications that go to that team's specific channels.

For this talk we will simply tag AWS accounts with two tags: account_name (a human-readable name) and slack_channel (the Slack channel to direct those SecurityHub notifications to).
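
As a minimal sketch, these tags could be applied with boto3 through the Organizations TagResource API; the account ID, name, and channel below are hypothetical placeholders:

import boto3

# Tag a member account so its findings can be routed per team.
# "123456789012", "payments-prod", and "#payments-alerts" are placeholders.
org = boto3.client("organizations")
org.tag_resource(
    ResourceId="123456789012",
    Tags=[
        {"Key": "account_name", "Value": "payments-prod"},
        {"Key": "slack_channel", "Value": "#payments-alerts"},
    ],
)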

A blog post and KubeCon talk with more information are coming soon.

Prerequisites

The only real prerequisite is a working multi-account SecurityHub deployment.

Configuration

The following environment variables are supported:

SLACK_FALLBACK_CHANNEL: Channel to fall back to if the slack_channel tag is not set on the account. Required.
SLACK_TOKEN: The Slack API token.
SLACK_TOKEN_SSM_PATH: If SLACK_TOKEN is not set, the SSM Parameter Store path to read the token from.
LOGGING_LEVEL: The logging level to use. Default is INFO.
ENABLE_FORK_COPY_SEVERITY: Enable forking a copy of some messages to another channel by severity. Value can be True or False. Default is False.
FORK_COPY_SEVERITY_VALUE: If ENABLE_FORK_COPY_SEVERITY is True, the severity level to fork at. Should be an integer between 0 and 100. Default is 90.
ENABLE_FORK_ONLY_SEVERITY: Enable forking some messages to only another channel by severity. Value can be True or False. Default is False.
FORK_ONLY_SEVERITY_VALUE: If ENABLE_FORK_ONLY_SEVERITY is True, the severity level to fork at. Should be an integer between 0 and 100. Default is 100.
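
To illustrate how these settings interact, here is a hedged sketch (an assumption, not the project's actual code) of reading the configuration, with the SSM fallback for the token; boto3's get_parameter call is real, the surrounding structure is assumed:

import os
import boto3

# Sketch of the configuration precedence described above (assumed).
fallback_channel = os.environ["SLACK_FALLBACK_CHANNEL"]  # required
log_level = os.environ.get("LOGGING_LEVEL", "INFO")

token = os.environ.get("SLACK_TOKEN")
if not token:
    # Fall back to the SSM Parameter Store path when no token is set.
    ssm = boto3.client("ssm")
    resp = ssm.get_parameter(
        Name=os.environ["SLACK_TOKEN_SSM_PATH"], WithDecryption=True
    )
    token = resp["Parameter"]["Value"]

fork_copy = os.environ.get("ENABLE_FORK_COPY_SEVERITY", "False") == "True"
fork_copy_at = int(os.environ.get("FORK_COPY_SEVERITY_VALUE", "90"))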

Forking

There are a few use cases for forking.

With all defaults, YAWPS will only send to the channel found in the slack_channel tag, or to SLACK_FALLBACK_CHANNEL (which is required) when the tag is absent.

This is great until you have rules that you want a second team (let's say security) to also see and follow up on.

Using ENABLE_FORK_COPY_SEVERITY and FORK_COPY_SEVERITY_VALUE lets you also send to that second Slack channel. Let's say you set FORK_COPY_SEVERITY_VALUE to 90: anything rated 90 will then be sent to both channels.

Another use case exists: not sending team-specific alerts at all. Let's say an alert is not actionable by the configured team but is purely for security visibility (like failed IAM logins). You can set ENABLE_FORK_ONLY_SEVERITY and leave FORK_ONLY_SEVERITY_VALUE at, say, 100, so that custom rules can set a finding's severity to 100 and send it only to security, bypassing the primary team's channel. This is good for noise filtration and helps keep each channel actionable by a single owner.
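
Putting the two fork modes together, the routing decision might look like the following sketch; the route_finding helper and the at-or-above severity comparisons are assumptions for illustration, not YAWPS's actual code:

# Hypothetical routing logic illustrating the two fork modes described above.
def route_finding(severity, team_channel, security_channel,
                  fork_copy=True, fork_copy_at=90,
                  fork_only=True, fork_only_at=100):
    """Return the list of Slack channels a finding should go to."""
    if fork_only and severity >= fork_only_at:
        # "Fork only": bypass the team channel entirely.
        return [security_channel]
    channels = [team_channel]
    if fork_copy and severity >= fork_copy_at:
        # "Fork copy": the security team also gets a copy.
        channels.append(security_channel)
    return channels

# Example: a severity-90 finding goes to both channels.
print(route_finding(90, "#payments-alerts", "#security"))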

Deploy

Serverless

TODO

Terraform

  1. Download this repository (or a released artifact)
  2. Run make zip to produce a fully deployable S3 artifact
  3. Deploy something similar to this Terraform (see the sketch below)
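
A minimal Terraform sketch of step 3, assuming the zip from make zip has been uploaded to an S3 bucket; the bucket name, key, handler path, and IAM role are all placeholders, not the project's published module:

# Hedged sketch only: names, bucket, and handler below are placeholders.
resource "aws_lambda_function" "yawps" {
  function_name = "yawps"
  s3_bucket     = "my-artifact-bucket"   # placeholder bucket
  s3_key        = "yawps.zip"            # artifact from `make zip`
  handler       = "yawps.handler"        # placeholder module path
  runtime       = "python3.9"
  role          = aws_iam_role.yawps.arn # IAM role defined elsewhere

  environment {
    variables = {
      SLACK_FALLBACK_CHANNEL = "#security-hub-alerts" # placeholder
      SLACK_TOKEN_SSM_PATH   = "/yawps/slack_token"   # placeholder
    }
  }
}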

Testing

$ poetry install
$ poetry run tox