Bayesian A/B testing

bayesian_testing is a small package for quick evaluation of A/B (or A/B/C/...) tests using a Bayesian approach.

The package currently supports these data inputs:

  • binary data ([0, 1, 0, ...]) - convenient for conversion-like A/B testing
  • normal data with unknown variance - convenient for A/B testing of normally distributed metrics
  • delta-lognormal data (lognormal data with zeros) - convenient for revenue-like A/B testing

The core evaluation metric of the approach is the Probability of Being Best (i.e. the probability that a variant generates the largest value, from the data's point of view), which is estimated using simulations from the posterior distributions given the observed data.
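
As an illustration, the metric can be approximated with a few lines of numpy; this is a minimal sketch of the idea for binary data (using the counts from the example below), not the package's internal code:

import numpy as np

rng = np.random.default_rng(0)
sims = 20000

# Beta posteriors for two binary variants under a Beta(1/2, 1/2) prior
# (80 conversions out of 1500, and 80 out of 1200)
samples_a = rng.beta(0.5 + 80, 0.5 + 1500 - 80, size=sims)
samples_b = rng.beta(0.5 + 80, 0.5 + 1200 - 80, size=sims)

# probability of being best = share of simulations in which the variant is largest
prob_b_best = np.mean(samples_b > samples_a)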

Installation

bayesian_testing can be installed using pip:

pip install bayesian_testing

Alternatively, you can clone the repository and use poetry manually:

cd bayesian_testing
pip install poetry
poetry install
poetry shell

Basic Usage

The primary features are BinaryDataTest, NormalDataTest and DeltaLognormalDataTest classes.

In all cases, there are two methods to insert data:

  • add_variant_data - adds raw data for a variant as a list of numbers (or a numpy 1-D array)
  • add_variant_data_agg - adds aggregated data for a variant (practical for large data sets, as the aggregation can be done at the database level)
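
For binary data, for instance, the two methods carry the same information (a minimal illustration):

from bayesian_testing.experiments import BinaryDataTest

test = BinaryDataTest()
# raw data: a list (or numpy 1-D array) of zeros and ones ...
test.add_variant_data("A", [0, 1, 0, 0, 1])
# ... or the same information as pre-computed aggregates:
test.add_variant_data_agg("B", totals=5, positives=2)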

Both methods allow specifying the prior distribution through optional parameters (see the respective docstrings for details). The default priors should be sufficient for most cases (e.g. when priors are unknown or the amount of data is large).

To get the results of the test, simply call the evaluate method, or probabs_of_being_best to return just the probabilities.

Probabilities of being best are approximated using simulations, hence evaluate can return slightly different values across runs. To stabilize the results, you can set the sim_count parameter of evaluate to a higher value (the default is 20,000), or set the seed parameter to make them fully reproducible.
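
For example (assuming probabs_of_being_best accepts the same sim_count and seed parameters as evaluate):

from bayesian_testing.experiments import BinaryDataTest

test = BinaryDataTest()
test.add_variant_data_agg("A", totals=1500, positives=80)
test.add_variant_data_agg("B", totals=1200, positives=80)

# more simulations for a more stable estimate, a fixed seed for reproducibility:
results = test.evaluate(sim_count=200000, seed=52)
# just the probabilities of being best:
probs = test.probabs_of_being_best(sim_count=200000, seed=52)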

BinaryDataTest

Class for a Bayesian A/B test for binary-like data (e.g. conversions or successes).

import numpy as np
from bayesian_testing.experiments import BinaryDataTest

# generating some random data
rng = np.random.default_rng(52)
# random array of 1500 zeros/ones with a 5.2% probability of 1:
data_a = rng.binomial(n=1, p=0.052, size=1500)
# random array of 1200 zeros/ones with a 6.7% probability of 1:
data_b = rng.binomial(n=1, p=0.067, size=1200)

# initialize a test
test = BinaryDataTest()

# add variant using raw data (arrays of zeros and ones):
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# priors can be specified like this (default for this test is a=b=1/2):
# test.add_variant_data("B", data_b, a_prior=1, b_prior=20)

# add variant using aggregated data (same as raw data with 950 zeros and 50 ones):
test.add_variant_data_agg("C", totals=1000, positives=50)

# evaluate test
test.evaluate()
[{'variant': 'A',
  'totals': 1500,
  'positives': 80,
  'conv_rate': 0.05333,
  'prob_being_best': 0.06625},
 {'variant': 'B',
  'totals': 1200,
  'positives': 80,
  'conv_rate': 0.06667,
  'prob_being_best': 0.89005},
 {'variant': 'C',
  'totals': 1000,
  'positives': 50,
  'conv_rate': 0.05,
  'prob_being_best': 0.0437}]
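
In this example, variant B has both the highest observed conversion rate and by far the highest probability of being best, even though variant A has more observations.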

NormalDataTest

Class for Bayesian A/B test for normal data.

import numpy as np
from bayesian_testing.experiments import NormalDataTest

# generating some random data
rng = np.random.default_rng(21)
data_a = rng.normal(7.2, 2, 1000)
data_b = rng.normal(7.1, 2, 800)
data_c = rng.normal(7.0, 4, 500)

# initialize a test
test = NormalDataTest()

# add variant using raw data:
test.add_variant_data("A", data_a)
test.add_variant_data("B", data_b)
# test.add_variant_data("C", data_c)

# add variant using aggregated data (total count, sum of values, sum of squared values):
test.add_variant_data_agg("C", len(data_c), sum(data_c), sum(np.square(data_c)))

# evaluate test
test.evaluate(sim_count=20000, seed=52)
[{'variant': 'A',
  'totals': 1000,
  'sum_values': 7294.67901,
  'avg_values': 7.29468,
  'prob_being_best': 0.1707},
 {'variant': 'B',
  'totals': 800,
  'sum_values': 5685.86168,
  'avg_values': 7.10733,
  'prob_being_best': 0.00125},
 {'variant': 'C',
  'totals': 500,
  'sum_values': 3736.91581,
  'avg_values': 7.47383,
  'prob_being_best': 0.82805}]

DeltaLognormalDataTest

Class for a Bayesian A/B test for delta-lognormal data (log-normal data with zeros). Delta-lognormal data is typical of revenue-per-session data, where many sessions have zero revenue and the non-zero values are positive numbers that often follow a log-normal distribution. To handle such data, the calculation combines a binary Bayesian model for zero vs. non-zero "conversions" with a log-normal model for the non-zero values.

import numpy as np
from bayesian_testing.experiments import DeltaLognormalDataTest

test = DeltaLognormalDataTest()

data_a = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 0, 0, 0, 0, 1.5, 2.2, 0, 4.9, 0, 0, 0, 0, 0]
data_b = [4.0, 0, 3.3, 19.3, 18.5, 0, 0, 0, 12.9, 0, 0, 0, 0, 0, 0, 0, 0, 3.7, 0, 0]

# adding variant using raw data
test.add_variant_data("A", data_a)

# alternatively, a variant can also be added using aggregated data:
test.add_variant_data_agg(
    name="B",
    totals=len(data_b),
    positives=sum(x > 0 for x in data_b),
    sum_values=sum(data_b),
    sum_logs=sum([np.log(x) for x in data_b if x > 0]),
    sum_logs_2=sum([np.square(np.log(x)) for x in data_b if x > 0])
)

test.evaluate(seed=21)
[{'variant': 'A',
  'totals': 20,
  'positives': 8,
  'sum_values': 23.5,
  'avg_values': 1.175,
  'avg_positive_values': 2.9375,
  'prob_being_best': 0.18915},
 {'variant': 'B',
  'totals': 20,
  'positives': 6,
  'sum_values': 61.7,
  'avg_values': 3.085,
  'avg_positive_values': 10.28333,
  'prob_being_best': 0.81085}]
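
To make the combined model more concrete, here is a minimal sketch of the idea, with simplified priors and the variance of the logs treated as known; it illustrates the shape of the model rather than the package's exact implementation:

import numpy as np

rng = np.random.default_rng(21)

def delta_lognormal_mean_samples(totals, positives, sum_logs, sum_logs_2, sims=20000):
    # conversion part: Beta posterior for the probability of a non-zero value
    p = rng.beta(0.5 + positives, 0.5 + totals - positives, size=sims)
    # value part: normal posterior for the mean of the logs, with the
    # log-variance treated as known (a simplification; requires positives > 0)
    mean_log = sum_logs / positives
    var_log = sum_logs_2 / positives - mean_log ** 2
    mu = rng.normal(mean_log, np.sqrt(var_log / positives), size=sims)
    # mean of delta-lognormal data: p * exp(mu + sigma^2 / 2)
    return p * np.exp(mu + var_log / 2)

data_a = [7.1, 0.3, 5.9, 0, 1.3, 0.3, 0, 0, 0, 0, 0, 1.5, 2.2, 0, 4.9, 0, 0, 0, 0, 0]
logs_a = [np.log(x) for x in data_a if x > 0]
samples_a = delta_lognormal_mean_samples(
    len(data_a), len(logs_a), float(sum(logs_a)), float(sum(np.square(logs_a)))
)
# comparing such samples across variants yields the probability of being best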

Development

To set up the development environment, use Poetry and pre-commit:

pip install poetry
poetry install
poetry run pre-commit install

Roadmap

Test classes to be added:

  • PoissonDataTest
  • ExponentialDataTest

Metrics to be added:

  • Expected Loss
  • Potential Value Remaining
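
For reference, Expected Loss can be estimated from the same kind of posterior simulations; here is a minimal sketch for two binary variants (an illustration of the metric, not the package's planned implementation):

import numpy as np

rng = np.random.default_rng(52)
sims = 20000

# posterior conversion-rate samples for two variants (Beta(1/2, 1/2) prior)
samples_a = rng.beta(0.5 + 80, 0.5 + 1500 - 80, size=sims)
samples_b = rng.beta(0.5 + 80, 0.5 + 1200 - 80, size=sims)

# expected loss of picking a variant: its average shortfall against the best one
exp_loss_a = np.maximum(samples_b - samples_a, 0).mean()
exp_loss_b = np.maximum(samples_a - samples_b, 0).mean()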
