Minimal implementation and experiments of "No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging".

Overview

No-Transaction Band Network:
A Neural Network Architecture for Efficient Deep Hedging

Open In Colab


Hedging and pricing financial derivatives while taking transaction costs into account is a challenging task. Because the hedging optimization is computationally expensive, or even intractable, the risk premiums of derivatives are often overpriced. This problem prevents financial derivatives from being offered liquidly.

Our proposal, the "No-Transaction Band Network", enables precise hedging with far fewer simulations. This improvement leads to cheaper risk premiums and thus makes the derivative market more liquid. We believe that our proposal brings the data-driven derivative business enabled by "Deep Hedging" much closer to practical applications.

Summary

  • Deep Hedging is a deep learning-based framework to hedge financial derivatives.
  • However, a hedging strategy is hard to train because of action dependence, i.e., the appropriate hedging action at the next step depends on the current action.
  • We propose a "No-Transaction Band Network" to overcome this issue.
  • This network circumvents the action dependence and facilitates quick and precise hedging.

Motivation and Result

Hedging financial derivatives (exotic options in particular) in the presence of transaction cost is a hard task.

In the absence of transaction cost, a perfect hedge is achievable under the Black-Scholes model. The real market, in contrast, always involves transaction cost, which makes the hedging optimization much more challenging. Since analytic formulas (such as the Black-Scholes formula for a European option) are no longer available in such a market, human traders may hedge and then price derivatives based on their experience.
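For reference, this zero-cost benchmark is explicit. Below is a minimal sketch of the Black-Scholes delta (the perfect-hedge ratio) of a European call, assuming a zero interest rate; the function name is ours:

```python
from math import log, sqrt
from statistics import NormalDist

def black_scholes_delta(spot: float, strike: float, maturity: float, volatility: float) -> float:
    """Perfect-hedge ratio (delta) of a European call under the
    Black-Scholes model, assuming a zero interest rate."""
    d1 = (log(spot / strike) + 0.5 * volatility ** 2 * maturity) / (volatility * sqrt(maturity))
    return NormalDist().cdf(d1)

# At the money, one year to maturity, 20% volatility: delta is about 0.54.
print(black_scholes_delta(spot=1.0, strike=1.0, maturity=1.0, volatility=0.2))
```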

Deep Hedging is a ground-breaking framework to automate and optimize such operations. In this framework, a neural network is trained to hedge derivatives so that it minimizes a proper risk measure. However, training in deep hedging suffers from the difficulty of action dependence: an appropriate action at the next step depends on the current action.
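Concretely, the hedging network is evaluated over simulated price paths and trained to minimize a convex risk measure of the terminal profit and loss. Here is a minimal sketch using the entropic risk measure, one standard choice of objective; the specific measure used in the experiments may differ (see the paper):

```python
import torch

def entropic_loss(pnl: torch.Tensor, risk_aversion: float = 1.0) -> torch.Tensor:
    """Entropic risk measure of a batch of terminal profit-and-loss values.
    Minimizing it balances the expected PnL against its variability."""
    return torch.log(torch.mean(torch.exp(-risk_aversion * pnl))) / risk_aversion
```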

We therefore propose the "No-Transaction Band Network" for efficient deep hedging. This architecture circumvents the complication, facilitating quick training and better hedging.

[Figure: loss_lookback (learning histories of the two networks)]

The learning histories above demonstrate that the no-transaction band network can be trained much more quickly than the ordinary feed-forward network (see our paper for details).

[Figure: price_lookback (derivative price spread vs. transaction cost)]

The figure above plots the derivative price (technically, the derivative price spread, i.e., the price minus the price without transaction cost) as a function of the transaction cost. The no-transaction band network attains cheaper prices than both the ordinary network and an approximate analytic formula.

Proposed Architecture: No-Transaction Band Network

The following figures show schematic diagrams of the neural network originally proposed in Deep Hedging (left) and the no-transaction band network (right).

[Figure: nn (schematic diagrams of the two network architectures)]

  • The original network:
    • The neural network takes the current hedge ratio (δ_ti) as an input, together with other information (I_ti).
    • Since the input includes the current action δ_ti, this network suffers from the complication of action dependence.
  • The no-transaction band network:
    • This architecture computes a "no-transaction band" [b_l, b_u] with a neural network and then obtains the next hedge ratio by clamping the current hedge ratio into this band (see the sketch below).
    • Since the input of the neural network does not include the current action, this architecture circumvents the action dependence and facilitates training.
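The clamping layer is straightforward to write down. The following is a minimal PyTorch sketch; the module name, network size, and the band parameterization (a lower edge plus a softplus width) are our own illustrative choices, not necessarily those of the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoTransactionBand(nn.Module):
    """Sketch: a network maps market information I_t to a band [b_l, b_u];
    the next hedge ratio is the previous hedge ratio clamped into that band."""

    def __init__(self, in_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # two outputs parameterize the band
        )

    def forward(self, information: torch.Tensor, prev_hedge: torch.Tensor) -> torch.Tensor:
        out = self.net(information)
        b_l = out[..., 0]
        b_u = b_l + F.softplus(out[..., 1])  # softplus keeps the band width non-negative
        # Trade only when the current position lies outside the band;
        # inside the band the hedge is left untouched (no transaction).
        return torch.minimum(torch.maximum(prev_hedge, b_l), b_u)
```

Note that prev_hedge enters only through the differentiable clamp and never as an input to the network, which is precisely what removes the action dependence from training.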

Give it a Try!

Open In Colab

You can try out the efficacy of the No-Transaction Band Network in a Jupyter notebook: main.ipynb.

As you can see there, the no-transaction band can be implemented by simply adding one special layer on top of an arbitrary neural network.

A comprehensive library for Deep Hedging, pfhedge, is available on PyPI.
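With pfhedge, a deep-hedging experiment might look like the sketch below. It follows pfhedge's documented Hedger / MultiLayerPerceptron interface, but the argument names and values here are illustrative; consult the library's documentation for the current API:

```python
from pfhedge.instruments import BrownianStock, EuropeanOption
from pfhedge.nn import Hedger, MultiLayerPerceptron

# A European option written on a stock with proportional transaction cost.
derivative = EuropeanOption(BrownianStock(cost=1e-4))

# A feed-forward hedging model trained to minimize a risk measure.
model = MultiLayerPerceptron()
hedger = Hedger(model, inputs=["log_moneyness", "time_to_maturity", "volatility", "prev_hedge"])

hedger.fit(derivative, n_paths=10000, n_epochs=200)
price = hedger.price(derivative, n_paths=10000)
```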

References

  • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami and Kei Nakagawa, "No-Transaction Band Network: A Neural Network Architecture for Efficient Deep Hedging". arXiv:2103.01775 [q-fin.CP].
  • Shota Imaki, Kentaro Imajo, Katsuya Ito, Kentaro Minami and Kei Nakagawa, "A Neural Network Architecture for Efficient Deep Hedging" (in Japanese). The 26th meeting of the JSAI Special Interest Group on Financial Informatics (SIG-FIN).
  • Hans Bühler, Lukas Gonon, Josef Teichmann and Ben Wood, "Deep hedging". Quantitative Finance, 2019, 19, 1271–1291. arXiv:1609.05213 [q-fin.CP].