AB-test-analyzer - Python class to perform AB test analysis

Overview

This repo contains a Python class to perform an A/B/C… test analysis with proportion-based metrics (including a post hoc test). In practice, the class can be used along with any appropriate RDBMS retrieval tool (e.g. the google.cloud.bigquery module for BigQuery) to form an end-to-end analysis process: from querying the experiment data originally stored in SQL to arriving at the complete analysis results.

The ABTest Class

The class is named ABTest. It is written on top of several well-known libraries (numpy, pandas, scipy, and statsmodels). Its main functionality is to consume an experiment results dataframe (experiment_df), metric information (nominator_metric, denominator_metric), and meta-information about the platform under experiment (platform), and then perform two layers of statistical tests.

First, it performs a chi-square test on the aggregate data. If this test is significant, the class continues with a post hoc test that compares each pair of experiment groups and reports their adjusted p-values, as well as confidence intervals for their absolute lifts (differences). The class also has a method to calculate the statistical power of the experiment.
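To make the flow concrete, here is a minimal, hedged sketch of this two-layer approach using scipy and statsmodels. It is not the ABTest implementation itself; the counts are the illustrative numbers from the sample data shown later.

```python
from itertools import combinations

from scipy.stats import chi2_contingency
from statsmodels.stats.multitest import multipletests
from statsmodels.stats.proportion import proportions_ztest

# Aggregate counts per experiment group: (successes, trials),
# i.e. (nominator_metric, denominator_metric)
groups = {
    "control": (1062, 8333),
    "variant1": (825, 8002),
    "variant2": (1289, 8251),
}

# Layer 1: omnibus chi-square test on the success/failure contingency table
table = [[s, n - s] for s, n in groups.values()]
_, p_omnibus, _, _ = chi2_contingency(table)

if p_omnibus < 0.05:
    # Layer 2: pairwise z-proportion tests, then Benjamini-Hochberg adjustment
    pairs = list(combinations(groups, 2))
    raw_p = []
    for a, b in pairs:
        (sa, na), (sb, nb) = groups[a], groups[b]
        _, p = proportions_ztest(count=[sa, sb], nobs=[na, nb])
        raw_p.append(p)
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
    for (a, b), p, sig in zip(pairs, adj_p, reject):
        print(f"{a} vs {b}: adjusted p = {p:.3g}{'*' if sig else ''}")
```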

Class Init

To create an instance of the ABTest class, we need to pass the following parameters, which also become the class instance attributes (an instantiation sketch follows this list):

  1. experiment_df: pandas dataframe that contains the experiment data to be analyzed. The data contained must form a proportion-based metric (nominator_metric/denominator_metric <= 1). More on this parameter can be found in a later section.
  2. nominator_metric: string representing the name of the nominator metric, one constituent of the proportion-based metric in experiment_df, e.g. "transaction"
  3. denominator_metric: string representing the name of the denominator metric, another constituent of the proportion-based metric in experiment_df, e.g. "visit"
  4. platform: string representing the platform represented by the experiment data, e.g. "android", "ios"
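A minimal instantiation sketch. The column names targeted and redeemed follow the sample data shown later; adapt them to your own metrics.

```python
import pandas as pd

# Illustrative experiment data; in practice this usually comes from a SQL query
experiment_df = pd.DataFrame({
    "experiment_group": ["control", "variant1", "variant2", "variant3"],
    "metric_level": ["user"] * 4,
    "targeted": [8333, 8002, 8251, 8275],  # denominator metric
    "redeemed": [1062, 825, 1289, 1228],   # nominator metric
})

ab_test = ABTest(
    experiment_df=experiment_df,
    nominator_metric="redeemed",
    denominator_metric="targeted",
    platform="android",
)
```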

Methods

get_reporting_df

This function has one parameter called metric_level (string, default value is None) that specifies the metric level of the experiment data whose reporting dataframe is to be derived. Two common values for this parameter are "user" and "event".

Below is an example of the output from calling self.get_reporting_df(metric_level='user'):

|    | experiment_group   | metric_level   |   targeted |   redeemed |   conversion |
|---:|:-------------------|:---------------|-----------:|-----------:|-------------:|
|  0 | control            | user           |       8333 |       1062 |     0.127445 |
|  1 | variant1           | user           |       8002 |        825 |     0.103099 |
|  2 | variant2           | user           |       8251 |       1289 |     0.156223 |
|  3 | variant3           | user           |       8275 |       1228 |     0.148399 |

posthoc_test

This function is the engine under the hood of the analyze method. It has three parameters (a call sketch follows the list):

  1. reporting_df: pandas dataframe, output of get_reporting_df method
  2. metric_level: string, the metric level of the experiment data whose reporting dataframe is to be derived
  3. alpha: float, the significance level used in the analysis
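In normal use, analyze calls this method for you. A hedged sketch of invoking it directly on an existing reporting dataframe could look like this (assuming it returns the post hoc results as a dataframe):

```python
# Derive the reporting dataframe first, then run the post hoc test on it
reporting_df = ab_test.get_reporting_df(metric_level="user")
posthoc_df = ab_test.posthoc_test(
    reporting_df=reporting_df,
    metric_level="user",
    alpha=0.05,
)
```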

analyze

The main function to analyze the AB test. It has two parameters (an example call follows the list):

  1. metric_level: string, the metric level of the experiment data whose reporting dataframe is to be derived (default value is None). Two common values for this parameter are "user" and "event"
  2. alpha: float, the significance level used in the analysis (default value is 0.05)
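A minimal call, reusing the ab_test instance created earlier:

```python
# Full analysis: chi-square test first, then post hoc pairwise tests if significant
results_df = ab_test.analyze(metric_level="user", alpha=0.05)
print(results_df)
```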

The output of this method is a pandas dataframe with the following columns:

  1. metric_level: optional, only if metric_level parameter is not None
  2. pair: the segment pair being individually tested using z-proportion test
  3. raw_p_value: the raw p-value from the individual z-proportion test
  4. adj_p_value: the adjusted p-value (using the Benjamini-Hochberg method) from the z-proportion tests. Note that significant results are marked with an asterisk (*)
  5. mean_ci: the mean (center value) of the metric-delta confidence interval at the 1-alpha level
  6. lower_ci: the lower bound of the metric-delta confidence interval at the 1-alpha level
  7. upper_ci: the upper bound of the metric-delta confidence interval at the 1-alpha level

Sample output:

|    | metric_level   | pair                 |   raw_p_value | adj_p_value             |     mean_ci |    lower_ci |    upper_ci |
|---:|:---------------|:---------------------|--------------:|:------------------------|------------:|------------:|------------:|
|  0 | user           | control vs variant1  |   1.13731e-06 | 1.592240591875927e-06*  |  -0.0243459 |  -0.0341516 |  -0.0145402 |
|  1 | user           | control vs variant2  |   1.08192e-07 | 1.8933619380632198e-07* |   0.0287784 |   0.0181608 |   0.0393959 |
|  2 | user           | control vs variant3  |   9.00223e-05 | 0.00010502606726165857* |   0.0209537 |   0.0104664 |   0.031441  |
|  3 | user           | variant1 vs variant2 |   7.82096e-24 | 2.737334684573585e-23*  |   0.0531243 |   0.0427802 |   0.0634683 |
|  4 | user           | variant1 vs variant3 |   3.23786e-18 | 7.554997289146693e-18*  |   0.0452996 |   0.0350976 |   0.0555015 |
|  5 | user           | variant2 vs variant1 |   7.82096e-24 | 2.737334684573585e-23*  |  -0.0531243 |  -0.0634683 |  -0.0427802 |
|  6 | user           | variant2 vs variant3 |   0.161595    | 0.16159493454321772     | nan         | nan         | nan         |

calculate_power

This function calculates the experiment's statistical power for the supplied experiment_df. It has three parameters (an example call follows the list):

  1. practical_lift: float, the metric lift that is considered practically meaningful
  2. alpha: float, the significance level used in the analysis (default value is 0.05)
  3. metric_level: string, the metric level of the experiment data whose reporting dataframe is to be derived (default value is None). Two common values for this parameter are "user" and "event"
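An example call, where the 2% practical lift is purely illustrative:

```python
# Statistical power of the experiment for detecting a 2% absolute lift
ab_test.calculate_power(
    practical_lift=0.02,
    alpha=0.05,
    metric_level="user",
)
```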

Sample output:

The experiment's statistical power is 0.2680540196528648

Data Format

This section explains the format of experiment_df, i.e. the main data supply for the ABTest class.
experiment_df must have at least the following three columns (plus one optional column):

  1. experiment_group: the experiment group label (e.g. "control", "variant1")
  2. a denominator-metric column, whose name matches the denominator_metric argument, holding one constituent of the proportion-based metric, e.g. "visit"
  3. a nominator-metric column, whose name matches the nominator_metric argument, holding the other constituent of the proportion-based metric, e.g. "transaction"
  4. (optional) metric_level: the metric level of the data (usually either "user" or "event")

In practice, this dataframe is derived by querying SQL tables using an appropriate retrieval tool.
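For example, a hedged sketch using the google.cloud.bigquery client; the table and column names are purely illustrative and depend on your own warehouse schema:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative query producing one row per experiment group and metric level
query = """
    SELECT
        experiment_group,
        metric_level,
        COUNT(DISTINCT user_id) AS targeted,
        COUNT(DISTINCT IF(redeemed, user_id, NULL)) AS redeemed
    FROM `project.dataset.experiment_events`
    GROUP BY experiment_group, metric_level
"""
experiment_df = client.query(query).to_dataframe()
```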

Sample experiment_df

|    | experiment_group   | metric_level   |   targeted |   redeemed |
|---:|:-------------------|:---------------|-----------:|-----------:|
|  0 | control            | user           |       8333 |       1062 |
|  1 | variant1           | user           |       8002 |        825 |
|  2 | variant2           | user           |       8251 |       1289 |
|  3 | variant3           | user           |       8275 |       1228 |

Usage Guideline

The general steps:

  1. Prepare experiment_df (via anything you’d prefer)
  2. Create an ABTest class instance
  3. To get reporting dataframe, call get_reporting_df method
  4. To analyze end-to-end, call analyze method
  5. To calculate experiment’s statistical power, call calculate_power method

See the sample usage notebook for more details.
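Putting the steps together, a minimal end-to-end sketch (reusing the illustrative dataframe and metric names from earlier sections):

```python
# 1. Prepare experiment_df (here, the illustrative dataframe built earlier)

# 2. Create an ABTest instance
ab_test = ABTest(
    experiment_df=experiment_df,
    nominator_metric="redeemed",
    denominator_metric="targeted",
    platform="android",
)

# 3. Reporting dataframe
reporting_df = ab_test.get_reporting_df(metric_level="user")

# 4. End-to-end analysis (chi-square test + post hoc pairwise tests)
results_df = ab_test.analyze(metric_level="user", alpha=0.05)

# 5. Statistical power for an illustrative 2% practical lift
ab_test.calculate_power(practical_lift=0.02, alpha=0.05, metric_level="user")
```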
