PGPortfolio: Policy Gradient Portfolio, the source code of "A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem" (https://arxiv.org/pdf/1706.10059.pdf).

Overview

This is the original implementation of our paper, A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem (arXiv:1706.10059), together with a toolkit for portfolio management research.

  • The deep reinforcement learning framework is the core part of the library. The method is basically policy gradient on the immediate reward (a minimal sketch of this objective follows this list). You can configure the network topology, the training method and the input data in a separate JSON file. The training process is recorded, and users can visualize it with TensorBoard. Result summaries and parallel training are supported for better hyper-parameter optimization.
  • Financial-model-based portfolio management algorithms are also embedded in this library for comparison purposes; their implementation is based on Li and Hoi's toolkit OLPS.
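
A minimal sketch of the immediate-reward objective, where the per-period reward is the log growth rate \log(\mu_t y_t \cdot w_{t-1}) (Eq. (10) of the paper). Names, shapes and the flat-commission approximation of \mu_t are assumptions here; this is not the library's actual TensorFlow code:

    import numpy as np

    def batch_log_reward(y, w_prev, w, commission=0.0025):
        """Per-period log reward log(mu_t * (y_t . w_{t-1})), batched over T periods.

        y          : (T, m) relative price vectors
        w_prev     : (T, m) portfolio weights chosen in the previous period
        w          : (T, m) portfolio weights chosen for the current period
        commission : assumed flat fee rate; the paper derives mu_t more precisely
        """
        growth = np.sum(y * w_prev, axis=1)                          # y_t . w_{t-1}
        mu = 1.0 - commission * np.sum(np.abs(w - w_prev), axis=1)   # crude transaction factor
        return np.log(mu * growth)

    # training maximizes the mean immediate reward over sampled mini-batches,
    # i.e. it minimizes the negative mean:
    T, m = 4, 3
    y = 1.0 + 0.01 * np.random.randn(T, m)
    w_prev = np.random.dirichlet(np.ones(m), size=T)
    w = np.random.dirichlet(np.ones(m), size=T)
    loss = -np.mean(batch_log_reward(y, w_prev, w))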

Differences from the article version

Note that this library is part of our main project and is several versions ahead of the article.

  • In this version, some technical bugs are fixed and improvements are made in hyper-parameter tuning and engineering.
    • The most important bug in the arXiv v2 article is that the test time-span mentioned is about 30% shorter than the actual experiment, so the volume-observation interval (used for asset selection) overlapped with the backtest data in the paper.
  • With the new hyper-parameters, users can train the models in much shorter time (less than 30 minutes).
  • All updates will be incorporated into future versions of the paper.
  • The original versioning history and internal discussions, including some in-code comments, are removed in this open-sourced edition. These contain our unimplemented ideas, some of which will very likely become the foundations of our future publications.

Platform Support

Python 3.5+ on Windows and Python 2.7+/3.5+ on Linux are supported.

Dependencies

Install dependencies via `pip install -r requirements.txt`:

  • tensorflow (>= 1.0.0)
  • tflearn
  • pandas
  • ...

User Guide

Please check out the User Guide.

Acknowledgement

This project would not have been finished without the code from the following open-source projects:

Community Contribution

We welcome contributions from the community, including but not limited to:

  • Bug fixing
  • Interfacing with other markets such as stocks, futures and options
  • Adding broker API (under marketdata)
  • More backtest strategies (under tdagent)

Risk Disclaimer (for Live-trading)

There is always risk of loss in trading. All trading strategies are used at your own risk.

The volumes of many cryptocurrency markets are still low. Market impact and slippage may badly affect the results during live trading.

Donation

If you have made some profit because of this project, or you just love reading our code, please consider making a small donation to our ongoing projects via the following BTC or ETH address. All donations will be used as student stipends.

Comments
  • Question about reward function and `__pack_samples`

    I'm having trouble reconciling what I read in the paper and what I read in the code.

    The reward function in a single period in the paper (Eq. (10)) is \log(\mu_t y_t \cdot w_{t-1}). But in the code, it seems that the reward is instead \log(\mu_t y_{t+1} \cdot w_t). Am I correct?

    __pack_samples (in datamatrices.py) builds the price tensor X from M[..., :-1] and the relative price vector y from M[..., -1] / M[..., -2], so y is one period ahead of X.
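
    For concreteness, a minimal numpy illustration of that slicing (shapes and names are illustrative only, not the repository's actual __pack_samples code):

        import numpy as np

        # illustrative shape (features, coins, window + 1); the extra final
        # period exists only to form the relative price vector y
        M = np.random.rand(3, 11, 32)

        X = M[..., :-1]                # price tensor: periods 0 .. t
        y = M[..., -1] / M[..., -2]    # price change from period t to t+1

        # the last column of X is period t, while y compares t+1 with t,
        # so y is indeed one period ahead of the last column of X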

    opened by ziofil 14
  • ValueError during training

    Running on Python 3.4.3, after I call python3 main.py --mode=train --processes=1, I get the following error:

    ValueError: the length of selected coins 0 is not equal to expected 11

    Perhaps this is an issue with my version of Python?

    opened by jpa99 10
  • online training

    Hello

    Thanks for the wonderful work. I read your paper and studied most of the code. However, I don't get the concept of append_experience and agent training in the rolling_train method. I have a few questions, if I may:

    1. What is the format of the saved experience, and how does it affect the model?
    2. How is that different from training the model directly using self._agent.train()?
    3. Is the experience mentioned here the same as the mini-batches mentioned in the paper's online learning section (5.3)?

    Thanks in advance, Sarah Ahmed

    opened by zingomaster 8
  • working config

    I'm trying to reproduce the result plotted in the User Guide (10^2), but with the default config I get much worse results. Which config was used in the example? Thanks!

    opened by laci84 8
  • ConvLayer Filters

    Figure 2 in the paper (attached image): Shouldn't the convolutional filters be 3-dimensional? I mean, in the original convolution, how do we go from 3 feature maps to 2 feature maps? I believe this would make sense if the filter was of dimension 2x1x3 (same as described but with additional depth of 2). And then the second convolution would be 2x48 to get the 20 11x1 feature maps.

    net_config.json: In ConvLayer, I don't understand how {"filter_shape":[1,2],"filter_number":3} corresponds to the filters outlined in the paper, as described in my question above. (Excuse my ignorance of tflearn, but the params to conv2d() are not well explained in the documentation.)
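
    On the dimensionality point: a 2-D convolution kernel always spans every input channel, so a "filter_shape" of [1, 2] applied to the 3 input feature maps (close, high, low) is internally a 1x2x3 kernel, and "filter_number" sets how many such kernels, i.e. output feature maps, there are. A hedged numpy sketch of the shapes, assuming an input window of 50 periods (illustrative only, not the repo's tflearn code):

        import numpy as np

        x = np.random.rand(3, 11, 50)               # (C_in features, coins, window)

        # "filter_shape": [1, 2], "filter_number": 3 -> 3 kernels, each of which
        # implicitly covers all C_in = 3 input channels, i.e. has shape (3, 1, 2)
        filter_number = 3
        w = np.random.rand(filter_number, 3, 1, 2)  # (C_out, C_in, 1, 2)

        # "valid" convolution along the time axis only
        out = np.zeros((filter_number, 11, 50 - 2 + 1))
        for n in range(filter_number):
            for t in range(out.shape[-1]):
                # the 1x2 patch of every channel, weighted by kernel n, summed over
                # channels and kernel width -> one value per coin
                out[n, :, t] = np.sum(x[:, :, t:t + 2] * w[n], axis=(0, 2))

        print(out.shape)   # (3, 11, 49): filter_number feature maps of 11 x 49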

    opened by LinuxIsCool 7
  • reversed_USDT vs BTC

    Hello,

    In the code, I don't understand the difference between reversed_USDT and the cash asset (BTC).

    I supposed that USDT_BTC, which is actually BTC/USD, is a mapping to just holding some weight in BTC.

    Am I wrong?

    opened by AhmMontasser 7
  • Backtest trade by strategy, check fees vs coin value update

    In BackTest, "omega" seems to be the vector wT storing the recommended new portfolio distribution at each step, "_last_omega" the latest/previous portfolio screenhost wT-1. So the system assumes to be able to sell at each step all the current coins of the portfolio and buy all "omega" reco, or at least the delta between omega & last_omega. This strong hypothesis (slippage/liquidity) is in your paper but shouldn't it check at least whether any coin qty adjustment would not cost more transaction fees than the expected value adjustment ?
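
    One way to express that check, as a hedged sketch with assumed names and a flat commission rate (not the library's actual BackTest logic): skip the rebalance whenever the estimated benefit does not cover the turnover fees.

        import numpy as np

        def should_rebalance(omega, last_omega, expected_log_return_gain,
                             commission=0.0025):
            """Trade only if the anticipated benefit beats the fee cost.

            omega, last_omega        : target and current portfolio weight vectors
            expected_log_return_gain : caller-supplied estimate of the extra log
                                       return from switching (an assumption; the
                                       agent does not output this directly)
            commission               : assumed flat fee per unit of turnover
            """
            turnover = np.sum(np.abs(omega - last_omega))
            fee_cost = commission * turnover
            return expected_log_return_gain > fee_cost

    In a backtest loop, one would keep _last_omega unchanged whenever should_rebalance(...) returns False.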

    opened by doxav 7
  • Poloniex API no longer accessible programmatically

    Looks like the Poloniex API is no longer accessible programmatically. I'll look into alternative APIs and will try to follow up with a pull request for this.

    opened by ielashi 7
  • updated Readme and User Guide

    Hey @ZhengyaoJiang I've updated the readme and user guide to reflect the current version of the library. Please have a look and let me know if I missed anything or if there are other things that need improvement.

    I mostly simplified the explanation and made it clearer where I thought there were ambiguities.

    opened by ghego 6
  • ForwardTest class

    Hi, thank you for your excellent work, this is very interesting stuff.

    I am eager to test this on the live market, but I am having trouble moving from backtesting to forward testing. Any chance that an update with a ForwardTest class is on the way, or that you could advise on how to implement it? I understand it roughly, i.e. the generate_history_matrix() function needs to update the data matrix with the newest market data (with "online" = True in the config file) and return that, and trade_by_strategy() clearly needs slight rewriting compared to BackTest, as we don't know the future price. Any help on how to correctly return the newest market data would be appreciated.
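
    Not an official ForwardTest implementation, but a hedged sketch of the loop described above. generate_history_matrix() and rolling_train() are mentioned in these comments; every other name and signature here is an assumption:

        import time

        def forward_test_step(agent, rolling_trainer, data_source, last_omega,
                              period_seconds=1800):
            """One illustrative forward-test step: refresh data, act, wait, retrain."""
            # with "online": true, this is assumed to append the newest market
            # data and return the latest normalized price window
            x = data_source.generate_history_matrix()
            # hypothetical decision call returning the new target weights
            omega = agent.decide_by_history(x, last_omega)
            # ... place real orders for (omega - last_omega) via a broker API ...
            time.sleep(period_seconds)       # wait for the trading period to end
            rolling_trainer.rolling_train()  # online update on the newest data
            return omega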

    opened by einarbmag 6
  • Learning procedure

    Hello again!

    May I ask here for more details about the learning procedure? I'm not really able to follow all of the code; maybe with your guidance here I'll go through it again with more success.

    1. During the training phase, how many times does the CNN learn from the same batch? Do you train over multiple epochs, or does the CNN pass through the data only once?
    2. During the CV and test phases, rolling learning is used. On what data do the CNN weights get updated? After all orders have been completed in the current period, we add the price history to the local DB. Do we select N periods before the current period as the learning batch, or do we update the weights using only the last price window?

    Sorry if these are newbie questions; I just want to understand how this magic works.

    opened by lytkarinskiy 6
  • KeyError: 'BTS_BTC'

    Hi,

    I've tried several configurations of my anaconda environment. At first, I managed to make the python main.py --mode=download_data part work, but then I ran into the pandas update issues mentioned in other issues. While trying to fix that, I could not get back to my initial progress, even though I made a new environment and forked the repo once again.

    The error I get is:

    Traceback (most recent call last):
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\main.py", line 132, in <module>
        main()
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\main.py", line 71, in main
        DataMatrices(start=start,
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\datamatrices.py", line 44, in __init__
        self.__history_manager = gdm.HistoryManager(coin_number=coin_filter, end=self.__end,
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\globaldatamatrix.py", line 24, in __init__
        self._coin_list = CoinList(end, volume_average_days, volume_forward)
      File "C:\Users\Alexander.S.Dahlberg\source\repos\PGPortfolio\pgportfolio\marketdata\coinlist.py", line 35, in __init__
        prices.append(1.0 / float(ticker[k]['last']))
    KeyError: 'BTS_BTC'

    I've no clue how to solve this. Have any others experienced the issue?

    Thanks

    opened by dalle244 4
  • How to run this agent

    Hi! I am trying to run your code in Visual Studio 2017, and I have downloaded and installed all the necessary libraries and dependencies. I open the main.py file and run it, and a console window opens, which I attach below. I am not native to Python, so a step-by-step procedure would be extremely helpful.

    opened by UmairKhalidKhan 0
  • Problem of dtype arguments

    Hello,

    When I run this code:

    python main.py --mode=train --processes=1

    I get this error: TypeError: __init__() got multiple values for argument 'dtype'

    I changed only the start and end times in the configuration file. Are there any recommendations?

    Here's the logfile:

    INFO:root:select coin online from 2021-10-12 00:00 to 2021-11-11 00:00
    DEBUG:root:Selected coins are: ['reversed_USDT', 'reversed_USDC', 'ETH', 'LTC', 'XRP', 'SRM', 'DOGE', 'XMR', 'BCH', 'DOT', 'EOS']
    INFO:root:fill SRM data from 2021-03-26 00:00 to 2021-06-24 11:59
    INFO:root:fill SRM data from 2021-06-24 12:00 to 2021-09-22 23:59
    INFO:root:fill SRM data from 2021-09-23 00:00 to 2021-12-01 00:00
    INFO:root:fill DOT data from 2021-03-26 00:00 to 2021-06-24 11:59
    INFO:root:fill DOT data from 2021-06-24 12:00 to 2021-09-22 23:59
    INFO:root:fill DOT data from 2021-09-23 00:00 to 2021-12-01 00:00
    INFO:root:fill EOS data from 2021-11-30 23:00 to 2021-12-01 00:00
    INFO:root:feature type list is ['close', 'high', 'low']
    DEBUG:root:buffer_bias is 0.000050
    INFO:root:the number of training examples is 11008, of test examples is 929
    DEBUG:root:the training set is from 0 to 11007
    DEBUG:root:the test set is from 11040 to 12000

    Thanks.

    opened by duodenum96 0
  • Normalization on open price

    I think normalization on the open price is incorrect for this task. In real life, you cannot buy at the open price when you already know the high and low. From my point of view, for realistic testing you should normalize by the close price (the open of the next candle); if you do this, the results become significantly worse. Have I made a mistake in my reasoning?

    opened by i7p9h9 1
  • Fix: train_summary.csv not generated

    Hello,

    I was not getting any error in the train phase, but train_summary.csv was not generated either. Then, when I ran a backtest, I got the error "train_summary.csv not found".

    I found the solution. The problem is related to indentation in tradertrainer.py (the __log_result_csv method). At the end of the method, replace the lines with the indented ones as attached.

    That will produce the necessary csv file.
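
    As a generic, hypothetical illustration of this kind of indentation bug (made-up fields and path, not the actual tradertrainer.py code): the to_csv call has to sit inside the method body in order to execute.

        import pandas as pd

        class TraderTrainer:
            def __log_result_csv(self, index, time_elapsed):
                rows = {"net_value": [1.23], "time": [time_elapsed]}
                dataframe = pd.DataFrame(rows, index=[index])
                # correctly indented: part of the method body, so it runs on every
                # call and produces the summary csv
                dataframe.to_csv("./train_package/train_summary.csv")
            # if the to_csv line were dedented out of the method, or left inside a
            # conditional branch that never triggers, the file would never be written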

    opened by Busy2045 0
Releases: v1.0

Owner: Zhengyao Jiang (PhD student at UCL, interested in Deep Learning, Neuro-Symbolic Methods and Reinforcement Learning)