FinRL-Meta: A Universe for Data-Driven Financial Reinforcement Learning. 🔥

Overview

FinRL-Meta: A Universe of Market Environments.

FinRL-Meta is a universe of market environments for data-driven financial reinforcement learning. Users can use FinRL-Meta as the metaverse of their financial environments.

  1. FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategies and provides open-source data engineering tools for financial big data.
  2. FinRL-Meta provides hundreds of market environments for various trading tasks.
  3. FinRL-Meta enables multiprocessing simulation and training by exploiting thousands of GPU cores.

Also called Neo_FinRL: Near real-market Environments for data-driven Financial Reinforcement Learning.

Our Goals

  • To reduce the simulation-reality gap: existing works use backtesting on historical data, while the real performance may be quite different when applying the algorithms to paper/live trading.
  • To reduce the data pre-processing burden, so that quants can focus on developing and optimizing strategies.
  • To provide benchmark performance and facilitate fair comparisons: a standardized environment allows researchers to evaluate different strategies in the same way, and it also helps researchers better understand the “black-box” nature (deep neural network-based) of DRL algorithms.

Design Principles

  • Plug-and-Play (PnP): modularity; handle different markets (say, T0 vs. T+1)
  • Completeness and universality: multiple markets; various data sources (APIs, Excel, etc.); user-friendly variables.
  • Avoid hard-coded parameters
  • Closing the sim-real gap using the “training-testing-trading” pipeline: simulation for training and connecting real-time APIs for testing/trading.
  • Efficient data sampling: accelerating the data sampling process is key to DRL training! From the ElegantRL project, we know that multiprocessing is powerful for reducing training time (scheduling between CPU and GPU).
  • Transparency: a virtual env that is invisible to the upper layer
  • Flexibility and extensibility: Inheritance might be helpful here

Overview

Overview image of NeoFinRL

We utilize a layered structure in FinRL-Meta, as shown in the figure above. FinRL-Meta consists of three layers: data layer, environment layer, and agent layer. Each layer executes its functions and is relatively independent. Meanwhile, the layers interact through end-to-end interfaces to implement the complete workflow of algorithmic trading.

DataOps

DataOps is a series of principles and practices to improve the quality and reduce the cycle time of data science. It inherits the ideas of Agile development, DevOps, and lean manufacturing and applies them to the data science and machine learning field. FinRL-Meta follows the DataOps paradigm.

Supported Data Sources:

Data Source    | Type            | Range and Frequency      | Request Limits       | Raw Data        | Preprocessed Data
Yahoo! Finance | US Securities   | Frequency-specific, 1min | 2,000/hour           | OHLCV           | Prices & Indicators
CCXT           | Cryptocurrency  | API-specific, 1min       | API-specific         | OHLCV           | Prices & Indicators
WRDS.TAQ       | US Securities   | 2003-now, 1ms            | 5 requests each time | Intraday Trades | Prices & Indicators
Alpaca         | US Stocks, ETFs | 2015-now, 1min           | Account-specific     | OHLCV           | Prices & Indicators
RiceQuant      | CN Securities   | 2005-now, 1ms            | Account-specific     | OHLCV           | Prices & Indicators
JoinQuant      | CN Securities   | 2005-now, 1min           | 3 requests each time | OHLCV           | Prices & Indicators
QuantConnect   | US Securities   | 1998-now, 1s             | NA                   | OHLCV           | Prices & Indicators
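
As a quick illustration of this DataOps workflow, the sketch below follows the unified DataProcessor usage pattern that also appears in the issues further down. The module path, source name, and supported arguments may differ across FinRL-Meta versions, so treat this as a template rather than the definitive API.

    # Minimal sketch of the DataOps workflow: download -> clean -> add indicators.
    from meta.data_processor import DataProcessor

    TICKER_LIST = ['BTCUSDT', 'ETHUSDT']           # tickers for the chosen data source
    INDICATORS = ['macd', 'rsi', 'cci', 'dx']      # technical indicators to compute

    processor = DataProcessor(
        data_source='binance',                     # any source from the table above
        start_date='2021-01-01',
        end_date='2021-12-31',
        time_interval='1D',
    )
    processor.download_data(TICKER_LIST)           # raw OHLCV data
    processor.clean_data()                         # align timestamps, handle missing values
    processor.add_technical_indicator(INDICATORS)  # append price-derived features
    df = processor.dataframe                       # preprocessed prices & indicators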

Plug-and-Play

In the development pipeline, we separate market environments from the data layer and the agent layer. Any DRL agent can be directly plugged into our environments, then trained and tested. Different agents/algorithms can be compared by running on the same benchmark environment for fair evaluations.

A demonstration notebook for plug-and-play with ElegantRL, Stable Baselines3, and RLlib: Plug-and-Play with DRL Agents
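
For a rough idea of what plug-and-play looks like in code, here is a minimal sketch using Stable Baselines3. The environment constructor is omitted because it varies by task: `env` is assumed to be an already-constructed, Gym-compatible FinRL-Meta market environment.

    from stable_baselines3 import PPO
    from stable_baselines3.common.vec_env import DummyVecEnv

    # `env` is assumed: a Gym-compatible FinRL-Meta market environment built from processed data
    vec_env = DummyVecEnv([lambda: env])   # wrap the market environment
    model = PPO("MlpPolicy", vec_env, verbose=1)
    model.learn(total_timesteps=100_000)   # train the plugged-in agent

    obs = vec_env.reset()                  # evaluate on the same benchmark environment
    action, _states = model.predict(obs)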

"Training-Testing-Trading" Pipeline

A DRL agent learns by interacting with the training environment and is validated in the validation environment for parameter tuning. Then, the agent is tested on historical datasets (backtesting). Finally, the agent is deployed to paper trading or live trading markets.

This pipeline solves the information leakage problem because the trading data are never leaked when training/tuning the agents.

Such a unified pipeline allows fair comparisons among different algorithms and strategies.
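
As a concrete (illustrative) example, the date windows below are disjoint and strictly ordered, so nothing from the trading period can influence training or tuning. The constant names mirror those used in the tutorials and issues but are not tied to a specific notebook.

    # Illustrative, non-overlapping windows for the "training-testing-trading" pipeline.
    TRAIN_START_DATE = '2018-01-01'
    TRAIN_END_DATE   = '2020-06-30'   # training (and validation for parameter tuning)

    TEST_START_DATE  = '2020-07-01'
    TEST_END_DATE    = '2020-12-31'   # backtesting on held-out historical data

    TRADE_START_DATE = '2021-01-01'   # paper/live trading via real-time APIs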

Our Vision

For future work, we plan to build a multi-agent-based market simulator that consists of over ten thousand agents, namely, a FinRL-Metaverse. First, FinRL-Metaverse aims to build a universe of market environments, like the XLand environment (source) and planet-scale climate forecast (source) by DeepMind. To improve performance for large-scale markets, we will employ GPU-based massively parallel simulation, as in Isaac Gym (source). Moreover, it will be interesting to explore the deep evolutionary RL framework (source) to simulate the markets. Our final goal is to provide insights into complex market phenomena and offer guidance for financial regulation through FinRL-Metaverse.

Citing FinRL-Meta

@article{finrl_meta_2021,
    author  = {Liu, Xiao-Yang and Rui, Jingyang and Gao, Jiechao and Yang, Liuqing and Yang, Hongyang and Wang, Zhaoran and Wang, Christina Dan and Guo, Jian},
    title   = {{FinRL-Meta}: Data-Driven Deep Reinforcement Learning in Quantitative Finance},
    journal = {Data-Centric AI Workshop, NeurIPS},
    year    = {2021}
}

Collaborators

           
Comments
  • ValueError: If using all scalar values, you must pass an index using binance processor

    I'm using the meta DataProcessor:

    import meta
    from meta.data_processor import DataProcessor
    

    When I run the following:

    
    #Set constants
    #
    LIST_OF_SYMBOLS = ['ADAUSDT' ,'ATOMUSDT' ,'BNBUSDT', 'BTCUSDT' ,'DOTUSDT' ,'ETCUSDT', 'ETHUSDT','LINKUSDT', 'LTCUSDT' ,'SOLUSDT' ,'XMRUSDT' ,'XRPUSDT']
    #Set time interval
    TIME_INTERVAL = '1D'
    
    #Training start
    START_TRAIN =  '2018-01-1'
    
    
    #Training end
    END_TRAIN = '2020-12-1'
    
    #Trading start
    START_TRADE = '2020-12-1'
    
    #Trading end
    END_TRADE = '2022-06-01'
    
    #List of technical indicators
    TECHNICAL_INDICATORS = ['rsi',
                            'cci',
                            'macd',
                            'macd_signal',
                            'macd_hist',
                            'dx'
                                 ]  
    
    if_vix = False
    
    processorObj = DataProcessor(data_source = 'binance', start_date= START_TRAIN, end_date =END_TRAIN, time_interval=TIME_INTERVAL) 
    processorObj.download_data(LIST_OF_SYMBOLS)
    processorObj.clean_data()
    processorObj.add_technical_indicator(TECHNICAL_INDICATORS)
    frame = processorObj.dataframe
    
    frame.head() 
    

    I get:

    
    ValueError                                Traceback (most recent call last)
    [<ipython-input-26-256be0b7d088>](https://localhost:8080/#) in <module>
         30 
         31 processorObj = DataProcessor(data_source = 'binance', start_date= START_TRAIN, end_date =END_TRAIN, time_interval=TIME_INTERVAL)
    ---> 32 processorObj.download_data(LIST_OF_SYMBOLS)
         33 processorObj.clean_data()
         34 processorObj.add_technical_indicator(TECHNICAL_INDICATORS)
    
    7 frames
    [/FinRL-Meta/meta/data_processor.py](https://localhost:8080/#) in download_data(self, ticker_list)
         84 
         85     def download_data(self, ticker_list):
    ---> 86         self.processor.download_data(ticker_list=ticker_list)
         87         self.dataframe = self.processor.dataframe
         88 
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in download_data(self, ticker_list)
         53             final_df = pd.DataFrame()
         54             for i in ticker_list:
    ---> 55                 hist_data = self.dataframe_with_limit(symbol=i)
         56                 df = hist_data.iloc[:-1].dropna()
         57                 df["tic"] = i
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in dataframe_with_limit(self, symbol)
        183         while True:
        184 
    --> 185             new_df = self.get_binance_bars(last_datetime, symbol)
        186             if new_df is None:
        187                 break
    
    [/FinRL-Meta/meta/data_processors/binance.py](https://localhost:8080/#) in get_binance_bars(self, last_datetime, symbol)
        121           #r = requests.get(self.url, params=req_params)
        122           #print(r.text)
    --> 123 
        124 
        125         df = pd.DataFrame(requests.get(self.url, params=req_params).json())        if df.empty:
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/frame.py](https://localhost:8080/#) in __init__(self, data, index, columns, dtype, copy)
        612         elif isinstance(data, dict):
        613             # GH#38939 de facto copy defaults to False only in non-dict cases
    --> 614             mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
        615         elif isinstance(data, ma.MaskedArray):
        616             import numpy.ma.mrecords as mrecords
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in dict_to_mgr(data, index, columns, dtype, typ, copy)
        463 
        464     return arrays_to_mgr(
    --> 465         arrays, data_names, index, columns, dtype=dtype, typ=typ, consolidate=copy
        466     )
        467 
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity, typ, consolidate)
        117         # figure out the index, if necessary
        118         if index is None:
    --> 119             index = _extract_index(arrays)
        120         else:
        121             index = ensure_index(index)
    
    [/usr/local/lib/python3.7/dist-packages/pandas/core/internals/construction.py](https://localhost:8080/#) in _extract_index(data)
        623 
        624         if not indexes and not raw_lengths:
    --> 625             raise ValueError("If using all scalar values, you must pass an index")
        626 
        627         if have_series:
    
    ValueError: If using all scalar values, you must pass an index
    

    Even when I try defining the list as ['ADAUSDT'], I still get the error.

    @zhumingpassional @BruceYanghy @XiaoYangLiu-FinRL

    Please assist

    opened by Daiiszuki 14
  • actions on plug and play DRL notebook

    Hello, I'm a beginner in RL. I wonder if we can see the actions with ElegantRL's DRL_prediction function (returning actions in [-1, 0, 1], just like SB3's DRL_prediction function)? It would be helpful if someone shared their ideas, thanks!

    Besides, I wonder whether it is suitable to trade an index (like DJI) as a single stock with the env in the plug_and_play_DRL notebook?

    discussion 
    opened by wezzyzzs 8
  • Add tests

    I have added several tests:

    • Added a test case for Binance
    • Added a test case for clean_data()
    • Added a test case for add_technical_indicator() (both stockstats and TA-Lib)
    opened by eyast 7
  • AttributeError: 'NoneType' object has no attribute 'config'

    Hi, training with RLlib comes up with the following error:

    ~/anaconda3/lib/python3.7/site-packages/ray/worker.py in get(object_refs, timeout) 1625 raise value.as_instanceof_cause() 1626 else: -> 1627 raise value 1628 1629 if is_individual_id:

    RayActorError: The actor died because of an error raised in its creation task, ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81) AttributeError: 'NoneType' object has no attribute 'config'

    During handling of the above exception, another exception occurred:

    ray::RolloutWorker.init() (pid=43634, ip=192.168.1.81) File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 565, in init devices = get_tf_gpu_devices() File "/home/reza/anaconda3/lib/python3.7/site-packages/ray/rllib/utils/tf_ops.py", line 54, in get_gpu_devices devices = tf.config.experimental.list_physical_devices() AttributeError: 'NoneType' object has no attribute 'config'

    It seems that it is due to TensorFlow/TensorBoard... Any idea how we can sort it out? Regards

    bug 
    opened by uh-reza 7
  • ModuleNotFoundError not found in FinRL_PortfolioAllocation_NeurIPS_2020.ipynb

    While running 2.3. Import Packages, I got this error:

    ModuleNotFoundError                       Traceback (most recent call last)
    [<ipython-input-2-65f21698d972>](https://localhost:8080/#) in <module>()
          9 from finrl import config
         10 from finrl import config_tickers
    ---> 11 from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
         12 from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
         13 from finrl.finrl_meta.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
    
    ModuleNotFoundError: No module named 'finrl.finrl_meta'
    

    Actually, it appears that all of these statements are failing.

    from finrl.finrl_meta.preprocessor.yahoodownloader import YahooDownloader
    from finrl.finrl_meta.preprocessor.preprocessors import FeatureEngineer, data_split
    from finrl.finrl_meta.env_portfolio_allocation.env_portfolio import StockPortfolioEnv
    from finrl.finrl_meta.data_processor import DataProcessor
    from finrl.finrl_meta.data_processors.processor_yahoofinance import YahooFinanceProcessor
    

    I've tried running this image

    and changing the from statement to

    from finrl_meta.preprocessor.yahoodownloader import YahooDownloader
    

    and

    from finrl-meta.preprocessor.yahoodownloader import YahooDownloader
    

    still no luck. 😞

    Finally tried

    from finrl.meta.preprocessor.yahoodownloader import YahooDownloader
    

    That seemed to have done the trick. 😄

    bug 
    opened by kabua 6
  • Create create_env.py

    I highly suggest, for simplicity, easier debugging, and better readability and understandability of the code, that environment creation get its own function, separated from the train and test functions. One of the prominent advantages of FinRL-Meta is its modularity and independent layers, and the proposed change helps go further in this direction.
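
    A hypothetical sketch of what that could look like (the class and keyword names are illustrative, not the actual FinRL-Meta API):

    # Hypothetical factory: environment creation separated from train()/test().
    def create_env(data, env_class, **env_kwargs):
        """Build a market environment from preprocessed data, independent of training code."""
        return env_class(df=data, **env_kwargs)

    # train() and test() would then share one construction path, e.g.:
    # train_env = create_env(train_data, StockTradingEnv, initial_amount=1_000_000)
    # test_env  = create_env(test_data,  StockTradingEnv, initial_amount=1_000_000)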

    opened by mhdmyz 5
  • Vix

    • Removed the clean_data code from the basic_processor because it's a duplicate. The cleaning steps are specific to each processor and should be (and are) done there.
    • Made it possible to use download_data multiple times (for VIX); it appends if self.dataframe is not empty.
    • Changed some pandas operations to in-place.
    • Reimplemented / prepared the add_vix function.

    Tested it with the Yahoo Finance processor. There is a problem though: the clean_data function removes the vix column again. I wanted to fix that, but the code is pretty complicated, so it's probably better that its original author fixes it. Also, the clean_data from Yahoo Finance uses backward fill? This introduces lookahead and should be addressed too.

    # if close on start date is NaN, fill data with first valid close
                # and set volume to 0.
    

    So this PR isn't ready and needs more work. The other processors need to be tested. It provides a solid foundation for making the VIX work again, though. Hope it helps.
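
    To illustrate the lookahead concern with a tiny pandas example (not FinRL-Meta code): backward fill copies a later observation into earlier rows, whereas forward fill only propagates past values.

    import pandas as pd

    close = pd.Series([None, None, 101.0, 102.0])
    print(close.bfill())   # the two leading NaNs become 101.0, a price from the future
    print(close.ffill())   # only past values propagate; the leading NaNs stay NaN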

    suggestion 
    opened by cryptocoinserver 5
  • Can not find the Demo_China_A_share_market.ipynb

    Thank you for your good work.

    For some reason, I could not find the link for "Demo_China_A_share_market.ipynb".

    The full link was: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/Demo_China_A_share_market.ipynb

    Can you please provide some hints?

    Thank you.

    bug 
    opened by tairen99 4
  • Error in Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb

    Hello,

    First issue

    If I run the Jupyter notebook Demo_FinRL_Meta_Integrate_Trends_data_to_DOW_Jones.ipynb on my local computer, I receive warnings in 2 cells. In the cell

    !gdown --id "1sp11dtAJGGqC-3UdSn774ZD1zWCsqbn4"
    !gdown --id "1m63ncE-BYlS77u5ejYTte9Nmh35DWhzp"

    I receive the warning that the command "gdown" is either written wrongly or couldn't be found. And for the cell

    !unzip "/content/Pytrends.zip"

    I get the warning

    unzip: cannot find or open /content/Pytrends.zip, /content/Pytrends.zip.zip or /content/Pytrends.zip.ZIP. I assume it's due to these warnings that later on, in the cell

    user_df = get_user_df()
    len(user_df)

    I get the error FileNotFoundError: [WinError 3] Das System kann den angegebenen Pfad nicht finden: 'Pytrends_Data'

    which means that "Pytrends_data" could not be found by the system.

    Second Issue

    I tried running the code on Google Colab instead, however here I received a different error. In the cell

    train_env_instance = get_train_env(TRAIN_START_DATE, TRAIN_END_DATE, ticker_list, data_source, time_interval, model_name, env, info_col)
    val_env_instance = get_test_env(VAL_START_DATE, VAL_END_DATE, ticker_list, data_source, time_interval, info_col, env, model_name)

    I got the error

    in download_data(self, ticker_list, start_date, end_date, time_interval)
         33             end_date = end_date,
         34             time_interval = time_interval)
    ---> 35         self.dataframe = self.processor.dataframe
         36
         37     def clean_data(self):

    AttributeError: 'YahooFinanceProcessor' object has no attribute 'dataframe'

    I would be very grateful for help for either the errors on my local computer or the error in the google colab. Thank you in advance for any help!

    bug 
    opened by Kartolon 4
  • NameError: name 'MACD' is not defined

    Demo_MultiCrypto_Trading.ipynb

    Binance successfully connected
    Adding self-defined technical indicators is NOT supported yet.
    Use default: MACD, RSI, CCI, DX.
    
    ---------------------------------------------------------------------------
    
    NameError                                 Traceback (most recent call last)
    
    <ipython-input-15-29348b501caa> in <module>()
         11       erl_params=ERL_PARAMS,
         12       break_step=5e4,
    ---> 13       if_vix=False
         14       )
    
    3 frames
    
    /FinRL-Meta/finrl_meta/data_processors/processor_binance.py in add_technical_indicator(self, df, tech_indicator_list)
         48         for i in df.tic.unique():
         49             tic_df = df[df.tic==i]
    ---> 50             tic_df['macd'], tic_df['macd_signal'], tic_df['macd_hist'] = MACD(tic_df['close'], fastperiod=12, 
         51                                                                                 slowperiod=26, signalperiod=9)
         52             tic_df['rsi'] = RSI(tic_df['close'], timeperiod=14)
    
    NameError: name 'MACD' is not defined
    

    Caused by the commented line here: https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/3a8fdcd7ad1e3ad8e3dbbc9a647cb5eef3769507/finrl_meta/data_processors/processor_binance.py#L6

    good first issue discussion 
    opened by cryptocoinserver 4
  • Error with TRAIN and TEST interval

    In "Demo_MultiCrypto_Trading" notebook I try to change the interval of data eg: TRAIN_START_DATE = '2020-08-01' -- original START DATE=2021-09-01 TRAIN_END_DATE = '2021-09-20'

    TEST_START_DATE = '2021-09-21' TEST_END_DATE = '2021-09-30'

    with a different timeframe of "60m" instead of "5m". It gives me back the following error: ValueError: If using all scalar values, you must pass an index

    How can I solve it please? Thx

    bug 
    opened by matti410 4
  • FinRL_PaperTrading_Demo np.hstack()

    When executing the example here without any modifications except for including Alpaca API key, I get the following error:

    ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 1192 and the array at index 1 has size 1177

    which is coming from this line here. Anyone else run into this issue?

    opened by josh0tt 0
  • How to connect one of FinRL algorithm to the real market ? (and execute trades)

    Hello,

    Thanks for this wonderful library.

    I would like to connect one of the RL algos (say the Stock_NeurIPS2018_SB3) to a real broker.

    Do you recommend any Python library to do so? Or any other way?

    I was thinking of using the MetaTrader 5 Python module.

    help_wanted 
    opened by aymeric75 1
  • [DEBUGGING HELP] ValueError: could not broadcast input array from shape (14,6) into shape (22,14)

    In reference to https://github.com/AI4Finance-Foundation/FinRL-Meta/blob/master/tutorials/1-Introduction/FinRL_PortfolioAllocation_NeurIPS_2020.ipynb

    The train data is shaped as (7812, 19)

    Passing the data to the env runs without any errors

    cryptoEnv = cryptoPortfolioAllocationEnvironment(dataFrame=trainData, **envKwargs)

    And when I call cryptoEnv.observation_space, the shape is (22, 14), which I assume is a combination of the prices and indicators:

    14 tickers, 8 indicators

    running activeEnv, _ = cryptoEnv.stableBaselineEnv()

    returns

    
    ValueError                                Traceback (most recent call last)
    [<ipython-input-68-f822f4852cfe>](https://localhost:8080/#) in <module>
    ----> 1 activeEnv, _ = cryptoEnv.stableBaselineEnv()
    
    2 frames
    [<ipython-input-63-fd656b920b2f>](https://localhost:8080/#) in stableBaselineEnv(self)
        189     def stableBaselineEnv(self):
        190       sb = DummyVecEnv([lambda: self])
    --> 191       obs = sb.reset()
        192       return sb, obs
        193 
    
    [/usr/local/lib/python3.7/dist-packages/stable_baselines3/common/vec_env/dummy_vec_env.py](https://localhost:8080/#) in reset(self)
         62         for env_idx in range(self.num_envs):
         63             obs = self.envs[env_idx].reset()
    ---> 64             self._save_obs(env_idx, obs)
         65         return self._obs_from_buf()
         66 
    
    [/usr/local/lib/python3.7/dist-packages/stable_baselines3/common/vec_env/dummy_vec_env.py](https://localhost:8080/#) in _save_obs(self, env_idx, obs)
         92         for key in self.keys:
         93             if key is None:
    ---> 94                 self.buf_obs[key][env_idx] = obs
         95             else:
         96                 self.buf_obs[key][env_idx] = obs[key]
    
    ValueError: could not broadcast input array from shape (14,6) into shape (22,14)
    
    
    

    What am I missing?

    Please let me know if you require any additional info. The function to generate the env is as follows:

    def stableBaselineEnv(self):
          sb = DummyVecEnv([lambda: self])
          obs = sb.reset()
          return sb, obs
    
    bug 
    opened by Daiiszuki 48
  • FinRL_PaperTrading_Demo.ipynb - errors on trading operation

    Environment: Win10, Chrome, Colab. File at https://colab.research.google.com/github/AI4Finance-Foundation/FinRL-Tutorials/blob/master/3-Practical/FinRL_PaperTrading_Demo.ipynb

    No change to this file other than Alpaca credentials required.

    Function call:

    paper_trading_erl = AlpacaPaperTrading(ticker_list = DOW_30_TICKER, time_interval = '1Min', drl_lib = 'elegantrl', agent = 'ppo',
                                           cwd = './papertrading_erl_retrain', net_dim = ERL_PARAMS['net_dimension'],
                                           state_dim = state_dim, action_dim = action_dim,
                                           API_KEY = API_KEY, API_SECRET = API_SECRET, API_BASE_URL = API_BASE_URL,
                                           tech_indicator_list = INDICATORS, turbulence_thresh=30, max_stock=1e2)
    paper_trading_erl.run()

    Error Output: load actor from: ./papertrading_erl_retrain/actor.pth Waiting for market to open... 0 minutes til market open. Market opened. Exception in thread Thread-12: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-13: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-14: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-15: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-16: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-17: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-18: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-19: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-20: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-21: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-22: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-23: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-24: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Exception in thread Thread-25: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 182, in trade state = self.get_state() File "", line 247, in get_state tech_indicator_list=self.tech_indicator_list) File "/usr/local/lib/python3.7/dist-packages/finrl/meta/data_processors/processor_alpaca.py", line 355, in fetch_latest_data raise ValueError ValueError

    Succesfully add technical indicators Successfully transformed into array 30 Exception in thread Thread-26: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 186, in trade s_tensor = torch.as_tensor((state,), device=self.device) AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    Succesfully add technical indicators Successfully transformed into array 30 Exception in thread Thread-27: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 186, in trade s_tensor = torch.as_tensor((state,), device=self.device) AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    Succesfully add technical indicators Successfully transformed into array 30 Exception in thread Thread-28: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 186, in trade s_tensor = torch.as_tensor((state,), device=self.device) AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    Succesfully add technical indicators Successfully transformed into array 30 Exception in thread Thread-29: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 186, in trade s_tensor = torch.as_tensor((state,), device=self.device) AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    Succesfully add technical indicators Successfully transformed into array 30 Exception in thread Thread-30: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "", line 186, in trade s_tensor = torch.as_tensor((state,), device=self.device) AttributeError: 'AlpacaPaperTrading' object has no attribute 'device'

    bug 
    opened by marcipops 9
Releases(v0.3.5)
Owner
AI4Finance Foundation
An open-source community sharing AI tools for finance.