FinRL-Meta: A Universe for Data-Driven Financial Reinforcement Learning 🔥

Overview

FinRL-Meta: A Universe of Market Environments.

FinRL-Meta is a universe of market environments for data-driven financial reinforcement learning. Users can use FinRL-Meta as the metaverse of their financial environments.

  1. FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy and provides open-source data engineering tools for financial big data.
  2. FinRL-Meta provides hundreds of market environments for various trading tasks.
  3. FinRL-Meta enables multiprocessing simulation and training by exploiting thousands of GPU cores.

Also called Neo_FinRL: Near real-market Environments for data-driven Financial Reinforcement Learning.

Our Goals

  • To reduce the simulation-reality gap: existing works use backtesting on historical data, while the real performance may be quite different when applying the algorithms to paper/live trading.
  • To reduce the data pre-processing burden, so that quants can focus on developing and optimizing strategies.
  • To provide benchmark performance and facilitate fair comparisons: a standardized environment allows researchers to evaluate different strategies in the same way, and it also helps them better understand the “black-box” nature of (deep neural network-based) DRL algorithms.

Design Principles

  • Plug-and-Play (PnP): modularity; handle different markets (say, T0 vs. T+1).
  • Completeness and universality: multiple markets; various data sources (APIs, Excel, etc.); user-friendly variables.
  • Avoid hard-coded parameters.
  • Closing the sim-real gap with the “training-testing-trading” pipeline: simulation for training and connecting real-time APIs for testing/trading.
  • Efficient data sampling: accelerating the data sampling process is key to DRL training. From the ElegantRL project, we know that multiprocessing is powerful for reducing training time (scheduling between CPU and GPU); see the sketch after this list.
  • Transparency: a virtual environment that is invisible to the upper layer.
  • Flexibility and extensibility: inheritance might be helpful here.
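
To illustrate the multiprocessing idea, here is a minimal sketch of downloading several symbols in parallel with the Python standard library; fetch_ohlcv is a hypothetical per-symbol download helper, not a FinRL-Meta API.

    from concurrent.futures import ProcessPoolExecutor

    import pandas as pd


    def fetch_ohlcv(symbol: str) -> pd.DataFrame:
        """Placeholder: download OHLCV bars for one symbol into a long-format DataFrame."""
        raise NotImplementedError


    def fetch_all(symbols, workers=4) -> pd.DataFrame:
        # Each symbol is fetched in its own process; the per-symbol frames are then
        # concatenated into one long-format DataFrame (one row per symbol per bar).
        with ProcessPoolExecutor(max_workers=workers) as pool:
            frames = list(pool.map(fetch_ohlcv, symbols))
        return pd.concat(frames, ignore_index=True)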

Overview

(Figure: overview of the FinRL-Meta layered architecture.)

We utilize a layered structure in FinRL-Meta, as shown in the figure above. FinRL-Meta consists of three layers: the data layer, the environment layer, and the agent layer. Each layer executes its functions and is independent of the others. Meanwhile, layers interact through end-to-end interfaces to implement the complete workflow of algorithmic trading.

DataOps

DataOps is a series of principles and practices to improve the quality and reduce the cycle time of data science. It inherits the ideas of Agile development, DevOps, and lean manufacturing and applies them to the data science and machine learning field. FinRL-Meta follows the DataOps paradigm.

Supported Data Sources:

| Data Source | Type | Range and Frequency | Request Limits | Raw Data | Preprocessed Data |
| --- | --- | --- | --- | --- | --- |
| Yahoo! Finance | US Securities | Frequency-specific, 1min | 2,000/hour | OHLCV | Prices & Indicators |
| CCXT | Cryptocurrency | API-specific, 1min | API-specific | OHLCV | Prices & Indicators |
| WRDS.TAQ | US Securities | 2003-now, 1ms | 5 requests each time | Intraday Trades | Prices & Indicators |
| Alpaca | US Stocks, ETFs | 2015-now, 1min | Account-specific | OHLCV | Prices & Indicators |
| RiceQuant | CN Securities | 2005-now, 1ms | Account-specific | OHLCV | Prices & Indicators |
| JoinQuant | CN Securities | 2005-now, 1min | 3 requests each time | OHLCV | Prices & Indicators |
| QuantConnect | US Securities | 1998-now, 1s | NA | OHLCV | Prices & Indicators |
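
For illustration, here is a minimal download-and-preprocess sketch following the meta.data_processor.DataProcessor usage that appears in the issue reports further down this page; the exact argument names and the "yahoofinance" source string are assumptions that may differ between FinRL-Meta versions.

    from meta.data_processor import DataProcessor

    # Download daily bars, clean them, and attach a few technical indicators.
    dp = DataProcessor(
        data_source="yahoofinance",   # assumed source string; see the table above
        start_date="2020-01-01",
        end_date="2021-01-01",
        time_interval="1D",
    )
    dp.download_data(ticker_list=["AAPL", "MSFT"])
    dp.clean_data()
    dp.add_technical_indicator(["macd", "rsi", "cci", "dx"])
    print(dp.dataframe.head())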

Plug-and-Play

In the development pipeline, we separate market environments from the data layer and the agent layer. Any DRL agent can be directly plugged into our environments, then trained and tested. Different agents/algorithms can be compared by running on the same benchmark environment for fair evaluations.

A demonstration notebook for plug-and-play with ElegantRL, Stable Baselines3, and RLlib: Plug-and-Play with DRL Agents
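
For instance, any Gym-style environment produced by the environment layer can be handed to an off-the-shelf DRL library. Below is a minimal sketch with Stable Baselines3; make_finrl_env is a hypothetical placeholder for whichever FinRL-Meta environment constructor you use.

    from stable_baselines3 import PPO


    def make_finrl_env():
        """Placeholder: build and return a FinRL-Meta trading environment (classic Gym API)."""
        raise NotImplementedError


    env = make_finrl_env()
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=50_000)

    # Roll the trained policy through one episode (4-tuple Gym step API).
    obs = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)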

"Training-Testing-Trading" Pipeline

A DRL agent first learns by interacting with the training environment and is then validated in the validation environment for parameter tuning. Next, the agent is tested on historical datasets (backtesting). Finally, the agent is deployed to paper trading or live trading markets.

This pipeline solves the information leakage problem because the trading data are never leaked when training/tuning the agents.

Such a unified pipeline allows fair comparisons among different algorithms and strategies.
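
As a concrete illustration, here is a minimal date-based split of a long-format price DataFrame; this is a sketch only, not FinRL-Meta's own helper.

    import pandas as pd


    def split_by_date(df: pd.DataFrame, train_end: str, test_end: str):
        """df: long-format DataFrame with a 'date' column (one row per ticker per bar)."""
        train = df[df["date"] < train_end]                               # training environment
        test = df[(df["date"] >= train_end) & (df["date"] < test_end)]   # validation/backtesting
        trade = df[df["date"] >= test_end]                               # paper/live trading period
        return train, test, trade

    # Example: train, test, trade = split_by_date(df, "2020-01-01", "2021-01-01")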

Our Vision

For future work, we plan to build a multi-agent-based market simulator that consists of over ten thousand agents, namely, a FinRL-Metaverse. First, FinRL-Metaverse aims to build a universe of market environments, like the XLand environment (source) and the planet-scale climate forecast (source) by DeepMind. To improve performance for large-scale markets, we will employ GPU-based massively parallel simulation, as in Isaac Gym (source). Moreover, it will be interesting to explore the deep evolutionary RL framework (source) to simulate the markets. Our final goal is to provide insights into complex market phenomena and offer guidance for financial regulation through FinRL-Metaverse.

Citing FinRL-Meta

@article{finrl_meta_2021,
    author  = {Liu, Xiao-Yang and Rui, Jingyang and Gao, Jiechao and Yang, Liuqing and Yang, Hongyang and Wang, Zhaoran and Wang, Christina Dan and Guo, Jian},
    title   = {{FinRL-Meta}: Data-Driven Deep Reinforcement Learning in Quantitative Finance},
    journal = {Data-Centric AI Workshop, NeurIPS},
    year    = {2021}
}

Comments
  • ValueError: If using all scalar values, you must pass an index using binance processor

    I'm using the meta DataProcessor.

    import meta
    from meta.data_processor import DataProcessor
    

    When I run the following :

    
    #Set constants
    #
    LIST_OF_SYMBOLS = ['ADAUSDT' ,'ATOMUSDT' ,'BNBUSDT', 'BTCUSDT' ,'DOTUSDT' ,'ETCUSDT', 'ETHUSDT','LINKUSDT', 'LTCUSDT' ,'SOLUSDT' ,'XMRUSDT' ,'XRPUSDT']
    #Set time interval
    TIME_INTERVAL = '1D'
    
    #Training start
    START_TRAIN =  '2018-01-1'
    
    
    #Training end
    END_TRAIN = '2020-12-1'
    
    #Trading start
    START_TRADE = '2020-12-1'
    
    #Trading end
    END_TRADE = '2022-06-01'
    
    #List of technical indicators
    TECHNICAL_INDICATORS = ['rsi',
                            'cci',
                            'macd',
                            'macd_signal',
                            'macd_hist',
                            'dx'
                                 ]  
    
    if_vix = False
    
    processorObj = DataProcessor(data_source = 'binance', start_date= START_TRAIN, end_date =END_TRAIN, time_interval=TIME_INTERVAL) 
    processorObj.download_data(LIST_OF_SYMBOLS)
    processorObj.clean_data()
    processorObj.add_technical_indicator(TECHNICAL_INDICATORS)
    frame = processorObj.dataframe
    
    frame.head() 
    

    I get :

    
    ValueError                                Traceback (most recent call last)
    [<ipython-input-26-256be0b7d088>](https://localhost:8080/#) in <module>
         30 
         31 processorObj = DataProcessor(data_source = 'binance', start_date= START_TRAIN, end_date =END_TRAIN, time_interval=TIME_INTERVAL)
    ---> 32 processorObj.download_data(LIST_OF_SYMBOLS)
         33 processorObj.clean_data()
         34 processorObj.add_technical_indicator(TECHNICAL_INDICATORS)
    
    7 frames
    [/FinRL-Meta/meta/data_processor.py](https://localhost:8080/#) in download_data(self, ticker_list)
         84 
         85     def download_data(self, ticker_list):
    ---> 86         self.processor.download_data(ticker_list=ticker_list)
         87         self.dataframe = self.processor.dataframe
         88 
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in download_data(self, ticker_list)
         53             final_df = pd.DataFrame()
         54             for i in ticker_list:
    ---> 55                 hist_data = self.dataframe_with_limit(symbol=i)
         56                 df = hist_data.iloc[:-1].dropna()
         57                 df["tic"] = i
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in dataframe_with_limit(self, symbol)
        183         while True:
        184 
    --> 185             new_df = self.get_binance_bars(last_datetime, symbol)
        186             if new_df is None:
        187                 break
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in get_binance_bars(self, last_datetime, symbol)
        121           #r = requests.get(self.url, params=req_params)
        122           #print(r.text)
    --> 123 
        124 
        125         df = pd.DataFrame(requests.get(self.url, params=req_params).json())        if df.empty:
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/frame.py](https://localhost:8080/#) in __init__(self, data, index, columns, dtype, copy)
        612         elif isinstance(data, dict):
        613             # GH#38939 de facto copy defaults to False only in non-dict cases
    --> 614             mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
        615         elif isinstance(data, ma.MaskedArray):
        616             import numpy.ma.mrecords as mrecords
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in dict_to_mgr(data, index, columns, dtype, typ, copy)
        463 
        464     return arrays_to_mgr(
    --> 465         arrays, data_names, index, columns, dtype=dtype, typ=typ, consolidate=copy
        466     )
        467 
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity, typ, consolidate)
        117         # figure out the index, if necessary
        118         if index is None:
    --> 119             index = _extract_index(arrays)
        120         else:
        121             index = ensure_index(index)
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in _extract_index(data)
        623 
        624         if not indexes and not raw_lengths:
    --> 625             raise ValueError("If using all scalar values, you must pass an index")
        626 
        627         if have_series:
    
    ValueError: If using all scalar values, you must pass an index
    

    Even when I try defining the list as ['ADAUSDT'], I still get the error.

    @zhumingpassional @BruceYanghy @XiaoYangLiu-FinRL

    Please assist
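
    For reference, this ValueError is what pandas raises when it is handed a dict of scalars, which is the shape of a Binance error payload (e.g. {"code": -1121, "msg": "Invalid symbol."}) rather than a list of klines. Below is a hedged debugging sketch that queries the public kline endpoint directly; note that Binance interval strings are lowercase ("1d"), so "1D" may itself be rejected depending on how the processor maps intervals.

    import requests

    # Inspect the raw JSON before it is turned into a DataFrame: a successful call
    # returns a list of lists, while an error returns a dict of scalars, which is
    # exactly what triggers "If using all scalar values, you must pass an index".
    resp = requests.get(
        "https://api.binance.com/api/v3/klines",
        params={"symbol": "ADAUSDT", "interval": "1d", "limit": 5},
    )
    payload = resp.json()
    print(type(payload))  # list -> OK, dict -> error payload from the exchange
    print(payload if isinstance(payload, dict) else payload[0])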

    opened by Daiiszuki 14
  • actions on plug and play DRL notebook

    Hello, I'm a beginner in RL. I wonder whether we can see the actions with ElegantRL's DRL_prediction function (returning actions in [-1, 0, 1], just like sb3's DRL_prediction function)? It would be helpful if someone shared their ideas, thanks!

    Besides, I wonder whether it is suitable to trade an index (like DJI) as a single stock with the env in the plug_and_play_DRL notebook?
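
    For reference, here is a hedged sketch of inspecting per-step actions from a trained actor network, following the torch.as_tensor(state) -> actor -> numpy pattern used in the paper-trading report further down this page; act (the actor network) and env are assumptions about your setup.

    import torch

    state = env.reset()
    done = False
    while not done:
        s_tensor = torch.as_tensor((state,), dtype=torch.float32)
        a_tensor = act(s_tensor)                     # actor network forward pass
        action = a_tensor.detach().cpu().numpy()[0]  # continuous action, typically bounded in [-1, 1]
        print(action)                                # inspect or log the per-step action here
        state, reward, done, _ = env.step(action)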

    discussion 
    opened by wezzyzzs 8
  • Add tests

    I have added several tests:

    • Added a test case for Binance
    • Added a test case for clean_data()
    • Added a test case for add_technical_indicator() (both stockstats and TA-Lib)
    opened by eyast 7
  • AttributeError: 'NoneType' object has no attribute 'config'

    Hi, training with RLlib comes up with the following error:

    ~/anaconda3/lib/python3.7/site-packages/ray/worker.py in get(object_refs, timeout) 1625 raise value.as_instanceof_cause() 1626 else: -> 1627 raise value 1628 1629 if is_individual_id:

    RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81) AttributeError: 'NoneType' object has no attribute 'config'

    During handling of the above exception, another exception occurred:

    ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81) File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 565, in init devices = get_tf_gpu_devices() File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/utils/tf_ops.py", line 54, in get_gpu_devices devices = tf.config.experimental.list_physical_devices() AttributeError: 'NoneType' object has no attribute 'config'

    It seems that it is due to TensorFlow/TensorBoard... Any idea how we can sort it out? Regards
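
    For reference, a hedged workaround sketch: the traceback suggests TensorFlow resolved to None inside RLlib, so one option (besides reinstalling a compatible TensorFlow/TensorBoard) is to run the workers on the PyTorch framework instead. Older RLlib versions expose PPOTrainer under ray.rllib.agents; the env id below is a placeholder for your registered environment.

    from ray.rllib.agents.ppo import PPOTrainer

    config = {
        "env": "your_registered_env",  # placeholder: your registered FinRL-Meta env
        "framework": "torch",          # skip RLlib's TensorFlow code path entirely
        "num_workers": 1,
    }
    trainer = PPOTrainer(config=config)
    result = trainer.train()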

    bug 
    opened by uh-reza 7
  • ModuleNotFoundError not found in FinRL_PortfolioAllocation_NeurIPS_2020.ipynb

    While running 2.3. Import Packages, I got this error:

    ModuleNotFoundError                       Traceback (most recent call last)
    [<ipython-input-2-65f21698d972>](https://localhost:8080/#) in <module>()
          9 from finrl import config
         10 from finrl import config_tickers
    ---> 11 from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
         12 from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
         13 from finrl.finrl_meta.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
    
    ModuleNotFoundError: No module named 'finrl.finrl_meta'
    

    Actually, it appears that all of these statements are failing.

    from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
    from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
    from finrl.finrl_meta.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
    from finrl.finrl_meta.data_processor import DataProcessor
    from finrl.finrl_meta.data_processors.processor_yahoofinance import YahooFinanceProcessor
    

    I've tried running this image

    and changing the from statement to

    from finrl_meta.preprocessor.yahoodownloader import YahooDownloader
    

    and

    from finrl-meta.preprocessor.yahoodownloader import YahooDownloader
    

    still no luck. 😞

    Finally tried

    from finrl.meta.preprocessor.yahoodownloader import YahooDownloader
    

    That seemed to have done the trick. 😄

    bug 
    opened by kabua 6
  • Create create_env.py

    For simpler debugging and better readability and understandability of the code, I highly suggest that environment creation get its own function, separated from the train and test functions. One of the prominent advantages of FinRL-Meta is its modularity, with different independent layers, and the proposed change helps go further in this direction.

    opened by mhdmyz 5
  • Vix

    • Removed the clean data code from the basic_processor because it's a duplicate. The cleaning steps are individual to each processor and should be / are done there.
    • Made it possible to use download_data multiple times (for VIX): it appends if self.dataframe is not empty.
    • Changed some pandas operations to inplace.
    • Reimplemented / prepared the add_vix function.

    Tested it with the Yahoo Finance processor. There is a problem though: the clean_data function removes the vix column again. I wanted to fix that, but the code is pretty complicated and it's probably better that its original author fixes it. Also, clean_data from Yahoo Finance uses backward fill? This introduces lookahead and should be addressed too (see the sketch after this report).

    # if close on start date is NaN, fill data with first valid close
                # and set volume to 0.
    

    So this PR isn't ready and needs more work. The other processors need to be tested. It provides a solid foundation for making the VIX work again, though. Hope it helps.
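
    On the lookahead point, here is a minimal pandas sketch of lookahead-safe gap filling; the tic/time/close column names are assumptions about the long-format layout used here.

    import pandas as pd


    def ffill_prices(df: pd.DataFrame) -> pd.DataFrame:
        # Forward-fill each ticker separately so a missing bar only ever reuses
        # information already observed, instead of back-filling from future bars.
        df = df.sort_values(["tic", "time"])
        df["close"] = df.groupby("tic")["close"].ffill()
        return df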

    suggestion 
    opened by cryptocoinserver 5
  • Can not find the Demo_China_A_share_market.ipynb

    Thank you for your good work.

    For some reason, I could not find the link for "Demo_China_A_share_market.ipynb".

    The full link was: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/Demo_China_A_share_market.ipynb

    Can you please provide some hints?

    Thank you.

    bug 
    opened by tairen99 4
  • Error in Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb

    Hello,

    First issue

    If I run the Jupyter notebook Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb on my local computer, I receive warnings in two cells. In the cell

    !gdown --id "1sp11dtAJGGqC-3UdSn774ZD1zWCsqbn4"
    !gdown --id "1m63ncE-BYlS77u5ejYTte9Nmh35DWhzp"

    I receive the warning that the command "gdown" is either written wrongly or couldn't be found. And for the cell

    !unzip "/content/Pytrends.zip"

    I get the warning

    unzip: cannot find or open /content/Pytrends.zip, /content/Pytrends.zip.zip or /content/Pytrends.zip.ZIP. I assume it's due to these warnings that later on in the cell

    user_df = get_user_df()
    len(user_df)

    I get the error FileNotFoundError: [WinError 3] Das System kann den angegebenen Pfad nicht finden: 'Pytrends_Data'

    which means that "Pytrends_data" could not be found by the system.

    Second Issue

    I tried running the code on Google Colab instead; however, here I received a different error. In the cell

    train_env_instance = get_train_env(TRAIN_START_DATE, TRAIN_END_DATE, ticker_list, data_source, time_interval, model_name, env, info_col)
    val_env_instance = get_test_env(VAL_START_DATE, VAL_END_DATE, ticker_list, data_source, time_interval, info_col, env, model_name)

    I got the error

    in download_data(self, ticker_list, start_date, end_date, time_interval)
         33             end_date = end_date,
         34             time_interval = time_interval)
    ---> 35         self.dataframe = self.processor.dataframe
         36 
         37     def clean_data(self):

    AttributeError: 'YahooFinanceProcessor' object has no attribute 'dataframe'

    I would be very grateful for help for either the errors on my local computer or the error in the google colab. Thank you in advance for any help!

    bug 
    opened by Kartolon 4
  • NameError: name 'MACD' is not defined

    Demo_MultiCrypto_Trading.ipynb

    Binance successfully connected
    Adding self-defined technical indicators is NOT supported yet.
    Use default: MACD, RSI, CCI, DX.
    
    ---------------------------------------------------------------------------
    
    NameError                                 Traceback (most recent call last)
    
    <ipython-input-15-29348b501caa> in <module>()
         11       erl_params=ERL_PARAMS,
         12       break_step=5e4,
    ---> 13       if_vix=False
         14       )
    
    3 frames
    
    /FinRL-Meta/finrl_meta/data_processors/processor_binance.py in add_technical_indicator(self, df, tech_indicator_list)
         48         for i in df.tic.unique():
         49             tic_df = df[df.tic==i]
    ---> 50             tic_df['macd'], tic_df['macd_signal'], tic_df['macd_hist'] = MACD(tic_df['close'], fastperiod=12, 
         51                                                                                 slowperiod=26, signalperiod=9)
         52             tic_df['rsi'] = RSI(tic_df['close'], timeperiod=14)
    
    NameError: name 'MACD' is not defined
    

    Caused by the commented line here: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/3a8fdcd7ad1e3ad8e3dbbc9a647cb5eef3769507/finrl_meta/data_processors/processor_binance.py#L6
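
    For reference, the call signatures in the traceback (MACD(close, fastperiod, slowperiod, signalperiod), RSI(close, timeperiod)) match TA-Lib's function API, so the missing names presumably come from an import along these lines; this is an assumption based on the commented line referenced above.

    # Presumed missing import (TA-Lib must be installed for this to work).
    from talib import CCI, DX, MACD, RSI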

    good first issue discussion 
    opened by cryptocoinserver 4
  • Error with TRAIN and TEST interval

    In "Demo_MultiCrypto_Trading" notebook I try to change the interval of data eg: TRAIN_START_DATE = '2020-08-01' -- original START DATE=2021-09-01 TRAIN_END_DATE = '2021-09-20'

    TEST_START_DATE = '2021-09-21' TEST_END_DATE = '2021-09-30'

    with a different timeframe of "60m" instead of "5m". It gives me back the following error: ValueError: If using all scalar values, you must pass an index

    How can I solve it please? Thx

    bug 
    opened by matti410 4
  • FinRL_PaperTrading_Demo np.hstack()

    When executing the example here without any modifications except for including Alpaca API key, I get the following error:

    ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 1192 and the array at index 1 has size 1177

    which is coming from this line here. Anyone else run into this issue?

    opened by josh0tt 0
  • How to connect one of FinRL algorithm to the real market ? (and execute trades)

    Hello,

    Thanks for this wonderful library.

    I would like to connect one of the RL algos (say the Stock_NeurIPS2018_SB3) to a real broker.

    Do you recommend any Python library to do so? Or any other way?

    I was thinking of using the metatrader 5 Python module.

    help_wanted 
    opened by aymeric75 1
  • [DEBUGGING HELP] ValueError: could not broadcast input array from shape (14,6) into shape (22,14)

    In reference to https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials/1-Introduction/FinRL_PortfolioAllocation_NeurIPS_2020.ipynb

    The train data is shaped as (7812, 19)

    Passing the data to the env runs without any errors

    cryptoEnv = cryptoPortfolioAllocationEnvironment(dataFrame=trainData, **envKwargs)

    And when I call cryptoEnv.observation_space, the shape is (22, 14), which I assume is a combination of the prices and indicators:

    14 tickers, 8 indicators

    running activeEnv, _ = cryptoEnv.stableBaselineEnv()

    returns

    
    ValueError                                Traceback (most recent call last)
    [<ipython-input-68-f822f4852cfe>](https://localhost:8080/#) in <module>
    ----> 1 activeEnv, _ = cryptoEnv.stableBaselineEnv()
    
    2 frames
    [<ipython-input-63-fd656b920b2f>](https://localhost:8080/#) in stableBaselineEnv(self)
        189     def stableBaselineEnv(self):
        190       sb = DummyVecEnv([lambda: self])
    --> 191       obs = sb.reset()
        192       return sb, obs
        193 
    
    [/usr/local/lib/python3.7/dist-packages/stable_baselines3/common/vec_env/dummy_vec_env.py](https://localhost:8080/#) in reset(self)
         62         for env_idx in range(self.num_envs):
         63             obs = self.envs[env_idx].reset()
    ---> 64             self._save_obs(env_idx, obs)
         65         return self._obs_from_buf()
         66 
    
    [/usr/local/lib/python3.7/dist-packages/stable_baselines3/common/vec_env/dummy_vec_env.py](https://localhost:8080/#) in _save_obs(self, env_idx, obs)
         92         for key in self.keys:
         93             if key is None:
    ---> 94                 self.buf_obs[key][env_idx] = obs
         95             else:
         96                 self.buf_obs[key][env_idx] = obs[key]
    
    ValueError: could not broadcast input array from shape (14,6) into shape (22,14)
    
    
    

    What am I missing?

    Please let me know if you require any additional info. The function to generate the env is as follows:

    def stableBaselineEnv(self):
          sb = DummyVecEnv([lambda: self])
          obs = sb.reset()
          return sb, obs
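
    For reference, DummyVecEnv pre-allocates its observation buffer from observation_space, so this broadcast error usually means reset() returns an array whose shape differs from the declared space. A quick diagnostic sketch using the cryptoEnv object from the report above:

    import numpy as np

    obs = cryptoEnv.reset()
    print("declared:", cryptoEnv.observation_space.shape)  # e.g. (22, 14)
    print("returned:", np.asarray(obs).shape)              # e.g. (14, 6)
    assert np.asarray(obs).shape == cryptoEnv.observation_space.shape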
    
    bug 
    opened by Daiiszuki 48
  • FinRL_PaperTrading_Demo.ipynb - errors on trading operation

    Environment: Win10, Chrome, Colab. File at https://colab.research.google.com/github/AI4Finance-Foundation/FinRL-Tutorials/blob/master/3-Practical/FinRL_PaperTrading_Demo.ipynb

    No change to this file other than Alpaca credentials required.

    Function call:

    paper_trading_erl = AlpacaPaperTrading(ticker_list=DOW_30_TICKER, time_interval='1Min', drl_lib='elegantrl',
                                           agent='ppo', cwd='./papertrading_erl_retrain',
                                           net_dim=ERL_PARAMS['net_dimension'], state_dim=state_dim,
                                           action_dim=action_dim, API_KEY=API_KEY, API_SECRET=API_SECRET,
                                           API_BASE_URL=API_BASE_URL, tech_indicator_list=INDICATORS,
                                           turbulence_thresh=30, max_stock=1e2)
    paper_trading_erl.run()

    Error output:

    load actor from: ./papertrading_erl_retrain/actor.pth
    Waiting for market to open...
    0 minutes til market open.
    Market opened.
    Exception in thread Thread-12:
    Traceback (most recent call last):
      File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
        self.run()
      File "/usr/lib/python3.7/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "", line 182, in trade
        state = self.get_state()
      File "", line 247, in get_state
        tech_indicator_list=self.tech_indicator_list)
      File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data
        raise ValueError
    ValueError

    (The same ValueError traceback repeats for Thread-13 through Thread-25.)

    Succesfully add technical indicators Successfully transformed into array 30
    Exception in thread Thread-26:
    Traceback (most recent call last):
      File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
        self.run()
      File "/usr/lib/python3.7/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "", line 186, in trade
        s_tensor = torch.as_tensor((state,), device=self.device)
    AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    (The same AttributeError traceback repeats for Thread-27 through Thread-30.)

    bug 
    opened by marcipops 9
Releases: v0.3.5
Owner
AI4Finance Foundation
An open-source community sharing AI tools for finance.