vartests is a Python library that performs statistical tests to evaluate Value at Risk (VaR) models

Overview


vartests is a Python library that performs statistical tests to evaluate Value at Risk (VaR) models, such as:

  • T-test: verifies whether the mean of the distribution is zero;
  • Kupiec Test (1995): verifies whether the number of violations is consistent with the number predicted by the model (see the sketch after this list);
  • Berkowitz Test (2001): verifies whether the conditional distribution of returns (e.g., GARCH(1,1)) used in the VaR model fits the data. In this specific test, we do not observe the whole distribution, only the tail;
  • Christoffersen and Pelletier Test (2004): also known as the Duration Test, where duration is the time between VaR violations. It checks whether the VaR model responds quickly to market movements, so that violations do not form volatility clusters. In other words, violations should have no memory, i.e., be independent.
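
For intuition, the Kupiec test compares the observed violation rate with the rate implied by the VaR confidence level, via a likelihood ratio that is asymptotically chi-squared with one degree of freedom. A minimal sketch of that statistic (not the library's implementation, and assuming at least one violation and at least one non-violation):

import numpy as np
from scipy.stats import chi2

def kupiec_lr(violations, var_conf_level=0.99):
    # violations: array of 0/1 flags, one per observation
    n = len(violations)
    x = int(np.sum(violations))          # observed number of violations
    p = 1 - var_conf_level               # expected violation probability
    phat = x / n                         # observed violation rate
    # Likelihood ratio of H0 (true rate is p) against the observed rate
    lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
               - (n - x) * np.log(1 - phat) - x * np.log(phat))
    p_value = 1 - chi2.cdf(lr, df=1)     # asymptotically chi2(1)
    return lr, p_value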

Installation

Using pip

You can install using the pip package manager by running:

pip install vartests

Alternatively, you can install the latest version directly from GitHub:

pip install https://github.com/rafa-rod/vartests/archive/refs/heads/main.zip

Why is vartests important?

After calculating VaR, it is necessary to run statistical tests to evaluate the VaR models. To select the best model, validate them with backtests.

Example

First of all, let's read a file with the PnL (distribution of profit and loss) of a portfolio, which also contains the VaR and its violations.

import pandas as pd

data = pd.read_excel("Example.xlsx", index_col=0)
violations = data["Violations"]
pnl = data["PnL"] 
data.sample(5)

The dataframe looks like:

| PnL          | VaR          | Violations |
|--------------|--------------|------------|
| -889.003707  | -2554.503872 | 0          |
| -2554.503872 | -2202.221691 | 1          |
| -887.527423  | -2193.692570 | 0          |
| -274.344126  | -2160.290746 | 0          |
| 1376.018638  | -5719.833100 | 0          |

Not all tests apply to every VaR model. Some apply only when the VaR model assumes a zero mean or a specific distribution. So you should first test the data:

import vartests

vartests.zero_mean_test(pnl.values, conf_level=0.95)
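
For reference, a zero-mean check is essentially a one-sample t-test; a minimal equivalent using scipy (a sketch, not the library's own implementation) would be:

from scipy.stats import ttest_1samp

# One-sample t-test of H0: mean(pnl) == 0; reject at the 5% level
# if the p-value is below 0.05.
stat, p_value = ttest_1samp(pnl.values, popmean=0)
print(f"t = {stat:.3f}, p-value = {p_value:.3f}")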

The zero-mean assumption is commonly used in parametric VaR models such as EWMA and GARCH. Besides that, it is necessary to check the distributional assumption, which you can test with Berkowitz (2001):

import vartests

vartests.berkowtiz_tail_test(pnl, volatility_window=252, var_conf_level=0.99, conf_level=0.95)
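
The idea behind the Berkowitz test is the probability integral transform (PIT): if the model's conditional distribution is correct, the transformed returns are i.i.d. standard normal, and the tail version tests only the observations beyond the VaR quantile. A rough sketch of the transform, assuming a zero-mean normal model and a hypothetical rolling-volatility forecast:

import numpy as np
from scipy.stats import norm

# Hypothetical volatility forecast: rolling std over the previous 252 days
sigma = pnl.rolling(252).std().shift(1).dropna()
returns = pnl.loc[sigma.index]

# PIT: under a correct model, u is i.i.d. Uniform(0, 1) ...
u = norm.cdf(returns / sigma)
# ... and z is i.i.d. N(0, 1). For a normal model z is just returns/sigma;
# for other models, replace norm.cdf with the model's conditional CDF.
# The tail test censors z above the VaR quantile and applies a
# likelihood-ratio test to the remaining tail observations.
z = norm.ppf(u)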

The following tests can be applied to any kind of VaR model.

import vartests

vartests.kupiec_test(violations, var_conf_level=0.99, conf_level=0.95)

vartests.duration_test(violations, conf_level=0.95)
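
For intuition, the durations the test works with are simply the gaps between violation dates; under a well-calibrated model they should be memoryless, while clustering shows up as runs of short gaps:

import numpy as np

# Positions of the days on which VaR was violated
viol_idx = np.flatnonzero(violations.values)
# Gaps (in observations) between consecutive violations
durations = np.diff(viol_idx)
print(durations)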

If you want to see the failure rate of the VaR model, just type:

import vartests

vartests.failure_rate(violations)
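
The failure rate is simply the share of observations with a violation, which you can sanity-check by hand against the rate implied by the VaR confidence level:

# Observed violation rate vs. the 1% expected under a 99% VaR
observed = violations.sum() / len(violations)
expected = 1 - 0.99
print(f"observed: {observed:.4f}, expected: {expected:.4f}")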