Optimize Trading Strategies Using Freqtrade

Overview

A short demo of building, testing, and optimizing a trading strategy using Freqtrade.

The DevBootstrap YouTube screencast supporting this repo is here. Enjoy! :)

Alias Docker-Compose Command

First, I recommend aliasing docker-compose to dc and docker-compose run --rm "$@" to dcr to save on typing.

Put this in your ~/.bash_profile file so that it's always available:

alias dc=docker-compose
dcr() { docker-compose run --rm "$@"; }

Now run source ~/.bash_profile.

Installing Freqtrade

Install and run via Docker.

Now download the docker-compose file, pull the Freqtrade image, and create the user directory and configuration:

mkdir ft_userdata
cd ft_userdata/
# Download the docker-compose file from the repository
curl https://raw.githubusercontent.com/freqtrade/freqtrade/stable/docker-compose.yml -o docker-compose.yml

# Pull the freqtrade image
dc pull

# Create user directory structure
dcr freqtrade create-userdir --userdir user_data

# Create configuration - Requires answering interactive questions
dcr freqtrade new-config --config user_data/config.json

NOTE: Any freqtrade command is available by running dcr freqtrade <command> <optional arguments>. In other words, the only difference when running a command via docker-compose is prefixing it with our new alias dcr (which runs docker-compose run --rm "$@" ... see above for details).

Config Bot

If you used the new-config sub-command (see above) when installing the bot, the installation script should have already created the default configuration file (config.json) for you.

The params to note (set in config.json) are shown below. This configuration allows the entire available balance to be distributed across all possible trades. In dry run mode we start with a default paper money balance of 1000 (which can be changed using the dry_run_wallet param), so with a maximum of 10 open trades Freqtrade distributes the funds across all 10 trades approximately equally (1000 / 10 = 100 per trade).

"stake_amount" : "unlimited",
"tradable_balance_ratio": 0.99,

The above settings are used for Dry Runs and enable the 'Dynamic Stake Amount'. For live trading you might want to change this, for example to only allow the bot to trade 20% of your exchange account funds and to cancel open orders on exit (in case the market goes crazy!):

"tradable_balance_ratio": 0.2,
"cancel_open_orders_on_exit": true

For details of all available parameters, please refer to the configuration parameters docs.

Create a Strategy

I've created a 'BBRSINaiveStrategy' based on RSI and Bollinger Bands. Take a look at the bbrsi_naive_strategy.py file for details.
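For orientation, here is a minimal sketch of what such a strategy might look like. The indicator settings and thresholds are illustrative assumptions, not the exact values used in bbrsi_naive_strategy.py, and it uses the legacy buy/sell interface that matches the Freqtrade version providing new-hyperopt:

import talib.abstract as ta
import freqtrade.vendor.qtpylib.indicators as qtpylib
from freqtrade.strategy.interface import IStrategy
from pandas import DataFrame


class BBRSINaiveStrategy(IStrategy):
    # Illustrative defaults only - see bbrsi_naive_strategy.py for the real values
    timeframe = '15m'
    minimal_roi = {"0": 0.10}
    stoploss = -0.10

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # RSI plus Bollinger Bands on the typical price
        dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)
        bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=20, stds=2)
        dataframe['bb_lowerband'] = bollinger['lower']
        dataframe['bb_middleband'] = bollinger['mid']
        dataframe['bb_upperband'] = bollinger['upper']
        return dataframe

    def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Buy when RSI is oversold and the close dips below the lower band
        dataframe.loc[
            (dataframe['rsi'] < 30) &
            (dataframe['close'] < dataframe['bb_lowerband']),
            'buy'] = 1
        return dataframe

    def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        # Sell once the close recovers above the middle band
        dataframe.loc[
            (dataframe['close'] > dataframe['bb_middleband']),
            'sell'] = 1
        return dataframe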

To tell your instance of Freqtrade about this strategy, open your docker-compose.yml file and update the strategy flag (the last flag of the command) to --strategy BBRSINaiveStrategy.

For more details on Strategy Customization, please refer to the Freqtrade Docs

Remove past trade data

If you have run the bot already, you will need to clear out any existing dry run trades from the database. The easiest way to do this is to delete the sqlite database by running the command rm user_data/tradesv3.sqlite.

Sandbox / Dry Run

As a quick sanity check, you can now start the bot in sandbox mode and it will begin trading immediately (with paper money - not real money!).

To start trading in sandbox mode, start the service as a daemon using Docker Compose, then check its status and follow the logs:

dc up -d
dc ps
dc logs -f

Setup a pairs file

We will use Binance, so create a data directory for binance and copy our pairs.json file into that directory:

mkdir -p user_data/data/binance
cp pairs.json user_data/data/binance/.

Now put whatever pairs you want to download into the pairs.json file. Take a look at the pairs.json file included in this repo.
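For example, a minimal pairs.json is simply a JSON array of pair names (the pairs below are only an illustration; ALGO/USDT is the one used in the plotting example later):

[
  "ALGO/USDT",
  "BTC/USDT",
  "ETH/USDT"
]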

Download Data

Now that we have our pairs file in place, let's download the OHLCV data for backtesting our strategy.

dcr freqtrade download-data --exchange binance -t 15m

List the available data using the list-data sub-command:

dcr freqtrade list-data --exchange binance

Manually inspect the JSON files to verify the data is as expected (i.e. that they contain the OHLCV data requested).
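If you prefer, you can also take a quick look with pandas. This is a hypothetical snippet: it assumes the default json data format and the ALGO/USDT 15m file name used elsewhere in this demo.

import pandas as pd

# Freqtrade stores OHLCV data as a JSON array of [timestamp, open, high, low, close, volume] rows
df = pd.read_json('user_data/data/binance/ALGO_USDT-15m.json')
df.columns = ['date', 'open', 'high', 'low', 'close', 'volume']
df['date'] = pd.to_datetime(df['date'], unit='ms')
print(df.head())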

List the available data for backtesting

Note that to list the available data you need to pass the --data-format-ohlcv jsongz flag, as below:

dcr freqtrade list-data --exchange binance --data-format-ohlcv jsongz

Backtest

Now that we have the 15m OHLCV data for our pairs, let's backtest the strategy:

dcr freqtrade backtesting --datadir user_data/data/binance --export trades  --stake-amount 100 -s BBRSINaiveStrategy -i 15m

For details on interpreting the result, refer to 'Understanding the backtesting result' in the Freqtrade docs.

Plotting

Plot the results to see how the bot entered and exited trades over time. Remember to change the Docker image being referenced in the docker-compose file to freqtradeorg/freqtrade:develop_plot before running the below command.
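In the docker-compose.yml file that means the image line should look like this:

image: freqtradeorg/freqtrade:develop_plot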

Note that the plot_config that is contained in the strategy will be applied to the chart.

dcr freqtrade plot-dataframe --strategy BBRSINaiveStrategy -p ALGO/USDT -i 15m

Once the plot is ready you will see the message Stored plot as /freqtrade/user_data/plot/freqtrade-plot-ALGO_USDT-15m.html which you can open in a browser window.

Optimize

To optimize the strategy we will use the Hyperopt module of freqtrade. First up we need to create a new hyperopt file from a template:

dcr freqtrade new-hyperopt --hyperopt BBRSIHyperopt

Now add the desired definitions for buy/sell guards and triggers to the Hyperopt file. Then run the optimization like so (NOTE: set the time interval and the number of epochs to test using the -i and -e flags):

dcr freqtrade hyperopt --hyperopt BBRSIHyperopt --hyperopt-loss SharpeHyperOptLoss --strategy BBRSINaiveStrategy -i 15m
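As a starting point, the buy-side part of BBRSIHyperopt.py might look something like the sketch below. This is a hypothetical excerpt using the legacy IHyperOpt interface that new-hyperopt generates; the parameter ranges are illustrative, and the rsi / bb_lowerband / bb_middleband columns are expected to be created by the strategy's populate_indicators.

from functools import reduce
from typing import Any, Callable, Dict, List

from pandas import DataFrame
from skopt.space import Categorical, Dimension, Integer

from freqtrade.optimize.hyperopt_interface import IHyperOpt


class BBRSIHyperopt(IHyperOpt):

    @staticmethod
    def indicator_space() -> List[Dimension]:
        # Buy-side search space: an RSI guard plus a Bollinger Band trigger
        return [
            Integer(10, 40, name='rsi-value'),
            Categorical([True, False], name='rsi-enabled'),
            Categorical(['bb_lower', 'bb_mid'], name='trigger'),
        ]

    @staticmethod
    def buy_strategy_generator(params: Dict[str, Any]) -> Callable:
        def populate_buy_trend(dataframe: DataFrame, metadata: dict) -> DataFrame:
            conditions = []
            if params.get('rsi-enabled'):
                conditions.append(dataframe['rsi'] < params['rsi-value'])
            if params['trigger'] == 'bb_lower':
                conditions.append(dataframe['close'] < dataframe['bb_lowerband'])
            if params['trigger'] == 'bb_mid':
                conditions.append(dataframe['close'] < dataframe['bb_middleband'])
            if conditions:
                dataframe.loc[reduce(lambda x, y: x & y, conditions), 'buy'] = 1
            return dataframe
        return populate_buy_trend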

Update Strategy

Apply the optimized results suggested by Hyperopt to the strategy. Either replace the current strategy or create a new 'optimized' strategy.
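For example, if the hyperopt run suggests a new ROI table and stoploss, the optimized strategy might simply carry those values over. This is a hypothetical excerpt; the numbers below are placeholders, not real results.

from freqtrade.strategy.interface import IStrategy


class BBRSIOptimizedStrategy(IStrategy):
    timeframe = '15m'

    # ROI table and stoploss as reported by the hyperopt run (placeholder values)
    minimal_roi = {
        "0": 0.164,
        "30": 0.053,
        "90": 0.028,
        "180": 0
    }
    stoploss = -0.19

    # populate_indicators / populate_buy_trend / populate_sell_trend stay as in
    # BBRSINaiveStrategy, but with the buy/sell thresholds suggested by hyperopt.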

Backtest

Now that we have updated our strategy based on the hyperopt results, let's run a backtest again:

dcr freqtrade backtesting --datadir user_data/data/binance --export trades --stake-amount 100 -s BBRSIOptimizedStrategy -i 15m

Sandbox / Dry Run

Before you run the Dry Run, don't forget to check that your local config.json file is configured correctly. In particular, dry_run should be true, dry_run_wallet should be set to something reasonable (like 1000 USDT), and the timeframe should be the same as the one you used when building and optimizing your strategy!

"max_open_trades": 10,
"stake_currency": "USDT",
"stake_amount" : "unlimited",
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "USD",
"timeframe": "15min",
"dry_run": true,
"dry_run_wallet": 1000,

View Dry Run via Freq UI

For use with Docker you will need to enable the API server in the Freqtrade config, set listen_ip_address to "0.0.0.0", and set a username & password so that you can log in, like so:

...

"api_server": {
  "enabled": true,
  "listen_ip_address": "0.0.0.0",
  "username": "Freqtrader",
  "password": "secretpass!",
P
...

In the docker-compose.yml file also map the ports like so:

ports:
  - "127.0.0.1:8080:8080"

Then you can access the Freq UI in a browser at http://127.0.0.1:8080/. You can also access and control the bot via its REST API.
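For example, with the config above you can do a quick check of the API from the command line (these endpoints are part of the Freqtrade REST API; adjust the username and password to match your config):

# No authentication required - a simple liveness check
curl http://127.0.0.1:8080/api/v1/ping

# Obtain an access token using the configured username and password
curl -X POST -u 'Freqtrader:secretpass!' http://127.0.0.1:8080/api/v1/token/login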
